Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7,951
| 11,137,559,603
|
IssuesEvent
|
2019-12-20 19:41:48
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
Application review page: work experience should in order selected
|
Apply Process Approved Requirements Ready State Dept.
|
Who: applicants
What: reviewing their work experience should display in order selected
Why: to be in line with the resume and USAJOBS
Acceptance Criteria:
Student application - review page: When a student applies, the work experience on the experience page should display in the order selected in Open Opps.
- the default order will be the sorted order from USAJOBS however the order can be changed in Open Opps. Whatever is saved in Open Opps is the order the review page should display
- the order saved should also be what is sent to ATP
|
1.0
|
Application review page: work experience should in order selected - Who: applicants
What: reviewing their work experience should display in order selected
Why: to be in line with the resume and USAJOBS
Acceptance Criteria:
Student application - review page: When a student applies, the work experience on the experience page should display in the order selected in Open Opps.
- the default order will be the sorted order from USAJOBS however the order can be changed in Open Opps. Whatever is saved in Open Opps is the order the review page should display
- the order saved should also be what is sent to ATP
|
process
|
application review page work experience should in order selected who applicants what reviewing their work experience should display in order selected why to be in line with the resume and usajobs acceptance criteria student application review page when a student applies the work experience on the experience page should display in the order selected in open opps the default order will be the sorted order from usajobs however the order can be changed in open opps whatever is saved in open opps is the order the review page should display the order saved should also be what is sent to atp
| 1
|
506,702
| 14,671,508,155
|
IssuesEvent
|
2020-12-30 08:21:06
|
TeamSTEP/project-witch-one
|
https://api.github.com/repos/TeamSTEP/project-witch-one
|
opened
|
[Task] Setup map editing environment
|
High Priority Hoon Kim add feature
|
# Task Summary
The team has decided to use Tiled as the main map editing tool and completely separate the game scene logic from the level design. This means that the Tiled map editor must be configured to allow the map designer to easily work with the tool.
## Subtasks
This ticket will be considered as finished when the following items are fulfilled:
- [ ] add object types
- [ ] configure terrain brushes for all the tilesets excluding ones we can't*
- [ ] add object templates
- [ ] set up automapping rules for all the cliff faces
- [ ] write documentation about using Tiled for this project
(*The house tiles (both interiors and the exteriors) are only a single tile high while the player character sprite is 2 tiles high. So we should not touch the house tileset until this problem is solved)
## Difficulty
3/10
## Estimated Implementation Time
> Note: the task start date is considered to be the day the ticket was opened or the next day when the dependent task was closed
- **Optimistic**: 1 week
- **Normal**: 10 days
- **Pessimistic**: 2 weeks
|
1.0
|
[Task] Setup map editing environment - # Task Summary
The team has decided to use Tiled as the main map editing tool and completely separate the game scene logic from the level design. This means that the Tiled map editor must be configured to allow the map designer to easily work with the tool.
## Subtasks
This ticket will be considered as finished when the following items are fulfilled:
- [ ] add object types
- [ ] configure terrain brushes for all the tilesets excluding ones we can't*
- [ ] add object templates
- [ ] set up automapping rules for all the cliff faces
- [ ] write documentation about using Tiled for this project
(*The house tiles (both interiors and the exteriors) are only a single tile high while the player character sprite is 2 tiles high. So we should not touch the house tileset until this problem is solved)
## Difficulty
3/10
## Estimated Implementation Time
> Note: the task start date is considered to be the day the ticket was opened or the next day when the dependent task was closed
- **Optimistic**: 1 week
- **Normal**: 10 days
- **Pessimistic**: 2 weeks
|
non_process
|
setup map editing environment task summary the team has decided to use tiled as the main map editing tool and completely separate the game scene logic from the level design this means that the tiled map editor must be configured to allow the map designer to easily work with the tool subtasks this ticket will be considered as finished when the following items are fulfilled add object types configure terrain brushes for all the tilesets excluding ones we can t add object templates set up automapping rules for all the cliff faces write documentation about using tiled for this project the house tiles both interiors and the exteriors are only a single tile high while the player character sprite is tiles high so we should not touch the house tileset until this problem is solved difficulty estimated implementation time note the task start date is considered to be the day the ticket was opened or the next day when the dependent task was closed optimistic week normal days pessimistic weeks
| 0
|
3,109
| 2,607,984,253
|
IssuesEvent
|
2015-02-26 00:50:59
|
chrsmithdemos/zen-coding
|
https://api.github.com/repos/chrsmithdemos/zen-coding
|
opened
|
Custom Abbreviations
|
auto-migrated Priority-Medium Type-Defect
|
```
I use the Komodo edit extension for zen coding. I was wondering would it be
possible to set custom abbreviations with a custom out put?
```
-----
Original issue reported on code.google.com by `jordan.r...@kasacapital.com` on 9 Aug 2011 at 8:10
|
1.0
|
Custom Abbreviations - ```
I use the Komodo edit extension for zen coding. I was wondering would it be
possible to set custom abbreviations with a custom out put?
```
-----
Original issue reported on code.google.com by `jordan.r...@kasacapital.com` on 9 Aug 2011 at 8:10
|
non_process
|
custom abbreviations i use the komodo edit extension for zen coding i was wondering would it be possible to set custom abbreviations with a custom out put original issue reported on code google com by jordan r kasacapital com on aug at
| 0
|
14,747
| 18,018,110,222
|
IssuesEvent
|
2021-09-16 15:56:32
|
googleapis/gapic-showcase
|
https://api.github.com/repos/googleapis/gapic-showcase
|
closed
|
Dependency Dashboard
|
priority: p2 type: process
|
This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Awaiting Schedule
These updates are awaiting their schedule. Click on a checkbox to get an update now.
- [ ] <!-- unschedule-branch=renovate/com_google_googleapis-digest -->chore(deps): update com_google_googleapis commit hash to 48d9fb8
- [ ] <!-- unschedule-branch=renovate/google.golang.org-genproto-digest -->fix(deps): update google.golang.org/genproto commit hash to 3192f97
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/google.golang.org-api-0.x -->[fix(deps): update module google.golang.org/api to v0.57.0](../pull/876)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Awaiting Schedule
These updates are awaiting their schedule. Click on a checkbox to get an update now.
- [ ] <!-- unschedule-branch=renovate/com_google_googleapis-digest -->chore(deps): update com_google_googleapis commit hash to 48d9fb8
- [ ] <!-- unschedule-branch=renovate/google.golang.org-genproto-digest -->fix(deps): update google.golang.org/genproto commit hash to 3192f97
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/google.golang.org-api-0.x -->[fix(deps): update module google.golang.org/api to v0.57.0](../pull/876)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue provides visibility into renovate updates and their statuses awaiting schedule these updates are awaiting their schedule click on a checkbox to get an update now chore deps update com google googleapis commit hash to fix deps update google golang org genproto commit hash to open these updates have all been created already click a checkbox below to force a retry rebase of any pull check this box to trigger a request for renovate to run again on this repository
| 1
|
63,810
| 3,201,034,420
|
IssuesEvent
|
2015-10-02 02:06:03
|
cs2103aug2015-f10-4j/main
|
https://api.github.com/repos/cs2103aug2015-f10-4j/main
|
closed
|
A user should be able to duplicate recurring tasks/events with 1 command
|
component.parser priority.high type.story
|
so that the user does not have to re-enter all the details for recurring tasks/events
|
1.0
|
A user should be able to duplicate recurring tasks/events with 1 command - so that the user does not have to re-enter all the details for recurring tasks/events
|
non_process
|
a user should be able to duplicate recurring tasks events with command so that the user does not have to re enter all the details for recurring tasks events
| 0
|
19,674
| 26,031,336,144
|
IssuesEvent
|
2022-12-21 21:38:41
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Passing Between Stages Is Not correct
|
doc-enhancement devops/prod Pri2 devops-cicd-process/tech
|
The example for passing a variable from a deployment task across stages is incorrect as is. However, even with the proper syntax it still does not work. Please update the example with a working way to pass vars between a deployment job and a stage
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5aeeaace-1c5b-a51b-e41f-f25b806155b8
* Version Independent ID: fd7ff690-b2e4-41c7-a342-e528b911c6e1
* Content: [Deployment jobs - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops#support-for-output-variables)
* Content Source: [docs/pipelines/process/deployment-jobs.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/deployment-jobs.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Passing Between Stages Is Not correct -
The example for passing a variable from a deployment task across stages is incorrect as is. However, even with the proper syntax it still does not work. Please update the example with a working way to pass vars between a deployment job and a stage
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5aeeaace-1c5b-a51b-e41f-f25b806155b8
* Version Independent ID: fd7ff690-b2e4-41c7-a342-e528b911c6e1
* Content: [Deployment jobs - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops#support-for-output-variables)
* Content Source: [docs/pipelines/process/deployment-jobs.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/deployment-jobs.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
passing between stages is not correct the example for passing a variable from a deployment task across stages is incorrect as is however even with the proper syntax it still does not work please update the example with a working way to pass vars between a deployment job and a stage document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
127,308
| 12,312,049,223
|
IssuesEvent
|
2020-05-12 13:23:17
|
HEPData/hepdata_lib
|
https://api.github.com/repos/HEPData/hepdata_lib
|
opened
|
readthedocs: Module index not available anymore
|
bug documentation
|
See https://hepdata-lib.readthedocs.io/en/latest/source/hepdata_lib.html
The module index does not build anymore
|
1.0
|
readthedocs: Module index not available anymore - See https://hepdata-lib.readthedocs.io/en/latest/source/hepdata_lib.html
The module index does not build anymore
|
non_process
|
readthedocs module index not available anymore see the module index does not build anymore
| 0
|
68,803
| 17,405,651,400
|
IssuesEvent
|
2021-08-03 05:17:07
|
aws/aws-sam-cli
|
https://api.github.com/repos/aws/aws-sam-cli
|
closed
|
How is the name of the image and tag generated?
|
area/build area/deploy type/question
|
### Description:
1. `sam build` generates an image that seems to be tagged with the name of the function. I would like to override this my choice. I tried setting `Properties.ImageUri` in `template.yaml`. I also used `--parameter-overrides ImageUri=myimage` in `sam build` command. But, in `.aws-sam/build/template.yaml` these changes are not taking effect.
2. `sam local invoke function -e events/event.json` seems to build yet another image with tag as `rapid-1.18.1`. What is this tag? How do I override it? Why is a new image generated at all?
3. `sam deploy --guided` allows me to provide the image repository URI. However, the image to be pushed uses a tag that is not recognizable. How do I override it? Why not use the one that was built and invoked locally as the default?
### Steps to reproduce:
<!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) -->
1. Use any project created with `sam init` and package type as `Image`; the simpler the better. Run `sam build` and note the image in the `Successfully tagged` message.
2. Invoke the recently built image with `sam local invoke functionName -e events/event.json`. A new image is built.
3. Deploy with `sam deploy --guided`; the image to be pushed uses yet another tag name.
### Observed result:
<!-- Please provide command output with `--debug` flag set.-->
### Expected result:
<!-- Describe what you expected.-->
1. Ability to provide image names.
2. Use same tag for local testing and deployment.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Mac
2. If using SAM CLI, `sam --version`: `SAM CLI, version 1.18.1`
3. AWS region: `ap-south-1`
`Add --debug flag to any SAM CLI commands you are running`
|
1.0
|
How is the name of the image and tag generated? - ### Description:
1. `sam build` generates an image that seems to be tagged with the name of the function. I would like to override this my choice. I tried setting `Properties.ImageUri` in `template.yaml`. I also used `--parameter-overrides ImageUri=myimage` in `sam build` command. But, in `.aws-sam/build/template.yaml` these changes are not taking effect.
2. `sam local invoke function -e events/event.json` seems to build yet another image with tag as `rapid-1.18.1`. What is this tag? How do I override it? Why is a new image generated at all?
3. `sam deploy --guided` allows me to provide the image repository URI. However, the image to be pushed uses a tag that is not recognizable. How do I override it? Why not use the one that was built and invoked locally as the default?
### Steps to reproduce:
<!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) -->
1. Use any project created with `sam init` and package type as `Image`; the simpler the better. Run `sam build` and note the image in the `Successfully tagged` message.
2. Invoke the recently built image with `sam local invoke functionName -e events/event.json`. A new image is built.
3. Deploy with `sam deploy --guided`; the image to be pushed uses yet another tag name.
### Observed result:
<!-- Please provide command output with `--debug` flag set.-->
### Expected result:
<!-- Describe what you expected.-->
1. Ability to provide image names.
2. Use same tag for local testing and deployment.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Mac
2. If using SAM CLI, `sam --version`: `SAM CLI, version 1.18.1`
3. AWS region: `ap-south-1`
`Add --debug flag to any SAM CLI commands you are running`
|
non_process
|
how is the name of the image and tag generated description sam build generates an image that seems to be tagged with the name of the function i would like to override this my choice i tried setting properties imageuri in template yaml i also used parameter overrides imageuri myimage in sam build command but in aws sam build template yaml these changes are not taking effect sam local invoke function e events event json seems to build yet another image with tag as rapid what is this tag how do i override it why is a new image generated at all sam deploy guided allows me to provide the image repository uri however the image to be pushed uses a tag that is not recognizable how do i override it why not use the one that was built and invoked locally as the default steps to reproduce use any project created with sam init and package type as image the simpler the better run sam build and note the image in the successfully tagged message invoke the recently built image with sam local invoke functionname e events event json a new image is built deploy with sam deploy guided the image to be pushed uses yet another tag name observed result expected result ability to provide image names use same tag for local testing and deployment additional environment details ex windows mac amazon linux etc os mac if using sam cli sam version sam cli version aws region ap south add debug flag to any sam cli commands you are running
| 0
|
11,740
| 14,581,817,109
|
IssuesEvent
|
2020-12-18 11:19:32
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
Participant registry >Enrollment history > 'Yet to enroll' status is displaying
|
Bug P2 Participant datastore Process: Fixed Process: Tested QA Process: Tested dev
|
Steps
1. Add/Invite a Participant to a Closed study
2. Navigate to Participant details page and observe Enrollment history
AR : History is displaying with 'Yet to enroll'
ER : 'No records found' text should be displayed when there are no records present in Enrollment history

|
3.0
|
Participant registry >Enrollment history > 'Yet to enroll' status is displaying - Steps
1. Add/Invite a Participant to a Closed study
2. Navigate to Participant details page and observe Enrollment history
AR : History is displaying with 'Yet to enroll'
ER : 'No records found' text should be displayed when there are no records present in Enrollment history

|
process
|
participant registry enrollment history yet to enroll status is displaying steps add invite a participant to a closed study navigate to participant details page and observe enrollment history ar history is displaying with yet to enroll er no records found text should be displayed when there are no records present in enrollment history
| 1
|
15,494
| 19,703,230,306
|
IssuesEvent
|
2022-01-12 18:49:52
|
googleapis/cloud-profiler-nodejs
|
https://api.github.com/repos/googleapis/cloud-profiler-nodejs
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'profiler' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'profiler' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname profiler invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
966
| 3,422,948,921
|
IssuesEvent
|
2015-12-09 02:13:19
|
MaretEngineering/MROV
|
https://api.github.com/repos/MaretEngineering/MROV
|
closed
|
XBOX button toggle
|
Processing question
|
It currently uses a debouncing method which requires a wait of 175 ms every time you push the button. I'm wondering if there is a way which doesn't require a delay.
|
1.0
|
XBOX button toggle - It currently uses a debouncing method which requires a wait of 175 ms every time you push the button. I'm wondering if there is a way which doesn't require a delay.
|
process
|
xbox button toggle it currently uses a debouncing method which requires a wait of ms every time you push the button i m wondering if there is a way which doesn t require a delay
| 1
|
179,873
| 30,316,587,109
|
IssuesEvent
|
2023-07-10 15:58:25
|
zeitgeistpm/ui
|
https://api.github.com/repos/zeitgeistpm/ui
|
closed
|
Review and remove notifications where no long applicable.
|
design
|
The notifications appearing in the top right corner doesn't seem to be needed(at least in most cases) in the new designs.
**Task: review and remove notifications where they aren't needed.**
_Implement new designs for them when and if they are needed as the current design doesnt really fit imho._
For example when buying the modal shows a success step so they can be removed:

|
1.0
|
Review and remove notifications where no long applicable. - The notifications appearing in the top right corner doesn't seem to be needed(at least in most cases) in the new designs.
**Task: review and remove notifications where they aren't needed.**
_Implement new designs for them when and if they are needed as the current design doesnt really fit imho._
For example when buying the modal shows a success step so they can be removed:

|
non_process
|
review and remove notifications where no long applicable the notifications appearing in the top right corner doesn t seem to be needed at least in most cases in the new designs task review and remove notifications where they aren t needed implement new designs for them when and if they are needed as the current design doesnt really fit imho for example when buying the modal shows a success step so they can be removed
| 0
|
16,756
| 21,925,277,782
|
IssuesEvent
|
2022-05-23 02:59:10
|
quark-engine/quark-engine
|
https://api.github.com/repos/quark-engine/quark-engine
|
closed
|
Support loading rules recursively from the rule path
|
issue-processing-state-06
|
**Is your feature request related to a problem? Please describe.**
In quark-engine/quark-rules#26, we plan to increase the visibility of the README.md by moving all rules into a folder. However, since Quark currently only searches rules right in the given path, this change will make it fail to find the default ruleset.
**Describe the solution you'd like**
I suggest adding the ability to load rules recursively from a path. In this way, Quark is compatible with both the old and new versions of the default rulesets.
|
1.0
|
Support loading rules recursively from the rule path - **Is your feature request related to a problem? Please describe.**
In quark-engine/quark-rules#26, we plan to increase the visibility of the README.md by moving all rules into a folder. However, since Quark currently only searches rules right in the given path, this change will make it fail to find the default ruleset.
**Describe the solution you'd like**
I suggest adding the ability to load rules recursively from a path. In this way, Quark is compatible with both the old and new versions of the default rulesets.
|
process
|
support loading rules recursively from the rule path is your feature request related to a problem please describe in quark engine quark rules we plan to increase the visibility of the readme md by moving all rules into a folder however since quark currently only searches rules right in the given path this change will make it fail to find the default ruleset describe the solution you d like i suggest adding the ability to load rules recursively from a path in this way quark is compatible with both the old and new versions of the default rulesets
| 1
|
139,690
| 20,926,173,926
|
IssuesEvent
|
2022-03-24 23:19:13
|
adobe/design-website
|
https://api.github.com/repos/adobe/design-website
|
opened
|
Review jobs posting formating for policy and About Adobe Design sections
|
polish design-question
|
Review and confirm that the sizing for the policy and About Adobe Design section have good readability. They currently feel a bit wide and difficult to take in.
Work with Jos on spacing.
<img width="838" alt="Screen Shot 2022-03-24 at 4 17 37 PM" src="https://user-images.githubusercontent.com/100241986/160025320-8cc774c1-79d4-416a-bf2e-554a58a51115.png">
|
1.0
|
Review jobs posting formating for policy and About Adobe Design sections - Review and confirm that the sizing for the policy and About Adobe Design section have good readability. They currently feel a bit wide and difficult to take in.
Work with Jos on spacing.
<img width="838" alt="Screen Shot 2022-03-24 at 4 17 37 PM" src="https://user-images.githubusercontent.com/100241986/160025320-8cc774c1-79d4-416a-bf2e-554a58a51115.png">
|
non_process
|
review jobs posting formating for policy and about adobe design sections review and confirm that the sizing for the policy and about adobe design section have good readability they currently feel a bit wide and difficult to take in work with jos on spacing img width alt screen shot at pm src
| 0
|
20,253
| 26,871,598,250
|
IssuesEvent
|
2023-02-04 14:36:41
|
ankidroid/Anki-Android
|
https://api.github.com/repos/ankidroid/Anki-Android
|
closed
|
Introduce `assertEquals` for `int`
|
Priority-Low Good First Issue! Stale Test process
|
Currently `assertEquals` only consider Long and Object. In Java, Int were converted to Long, but in Kotlin they are converted to Object. Which force conversion to explicitly write `toLong` to get the proper overloaded method to be invoked.
I suggest adding in AnkiDroid/src/test/java/com/ichi2/testutils/AnkiAssert.java the following methods
```java
public static void assertEquals(int left, int right) {
org.junit.Assert.assertEquals((long) left, (long) right);
}
public static void assertEquals(String message, int left, int right) {
org.junit.Assert.assertEquals(message, (long) left, (long) right);
}
```
and going through the 121 results of the search of `assertEquals\(.*long` in the codebase to uses this new method instead of the previous, and simplify the equality assertion slightly
|
1.0
|
Introduce `assertEquals` for `int` - Currently `assertEquals` only consider Long and Object. In Java, Int were converted to Long, but in Kotlin they are converted to Object. Which force conversion to explicitly write `toLong` to get the proper overloaded method to be invoked.
I suggest adding in AnkiDroid/src/test/java/com/ichi2/testutils/AnkiAssert.java the following methods
```java
public static void assertEquals(int left, int right) {
org.junit.Assert.assertEquals((long) left, (long) right);
}
public static void assertEquals(String message, int left, int right) {
org.junit.Assert.assertEquals(message, (long) left, (long) right);
}
```
and going through the 121 results of the search of `assertEquals\(.*long` in the codebase to uses this new method instead of the previous, and simplify the equality assertion slightly
|
process
|
introduce assertequals for int currently assertequals only consider long and object in java int were converted to long but in kotlin they are converted to object which force conversion to explicitly write tolong to get the proper overloaded method to be invoked i suggest adding in ankidroid src test java com testutils ankiassert java the following methods java public static void assertequals int left int right org junit assert assertequals long left long right public static void assertequals string message int left int right org junit assert assertequals message long left long right and going through the results of the search of assertequals long in the codebase to uses this new method instead of the previous and simplify the equality assertion slightly
| 1
|
84,539
| 10,544,983,200
|
IssuesEvent
|
2019-10-02 18:07:54
|
aragon/aragon-apps
|
https://api.github.com/repos/aragon/aragon-apps
|
closed
|
Determine what notifications we want
|
app: finance app: token manager app: vault app: voting component: frontend design: request enhancement
|
For the notifications feature to be implemented, we should decide on what notifications we want from each app.
Furthermore, there are technically two types of notifications: "global" and "direct", where global notifications are for all users of a DAO and direct notification is for a specific user or a specific set of users.
We should think about signal/noise ratio when deciding.
|
1.0
|
Determine what notifications we want - For the notifications feature to be implemented, we should decide on what notifications we want from each app.
Furthermore, there are technically two types of notifications: "global" and "direct", where global notifications are for all users of a DAO and direct notification is for a specific user or a specific set of users.
We should think about signal/noise ratio when deciding.
|
non_process
|
determine what notifications we want for the notifications feature to be implemented we should decide on what notifications we want from each app furthermore there are technically two types of notifications global and direct where global notifications are for all users of a dao and direct notification is for a specific user or a specific set of users we should think about signal noise ratio when deciding
| 0
|
22,733
| 32,054,791,764
|
IssuesEvent
|
2023-09-24 00:44:17
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
hpcflow-new2 0.2.0a108 has 2 GuardDog issues
|
guarddog exec-base64 silent-process-execution
|
https://pypi.org/project/hpcflow-new2
https://inspector.pypi.io/project/hpcflow-new2
```{
"dependency": "hpcflow-new2",
"version": "0.2.0a108",
"result": {
"issues": 2,
"errors": {},
"results": {
"exec-base64": [
{
"location": "hpcflow_new2-0.2.0a108/hpcflow/sdk/submission/jobscript.py:990",
"code": " init_proc = subprocess.Popen(\n args=args,\n cwd=str(self.workflow.path),\n creationflags=subprocess.CREATE_NO_WINDOW,\n )",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
}
],
"silent-process-execution": [
{
"location": "hpcflow_new2-0.2.0a108/hpcflow/sdk/helper/helper.py:111",
"code": " proc = subprocess.Popen(\n args=args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n **kwargs,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmp7f_yw727/hpcflow-new2"
}
}```
|
1.0
|
hpcflow-new2 0.2.0a108 has 2 GuardDog issues - https://pypi.org/project/hpcflow-new2
https://inspector.pypi.io/project/hpcflow-new2
```{
"dependency": "hpcflow-new2",
"version": "0.2.0a108",
"result": {
"issues": 2,
"errors": {},
"results": {
"exec-base64": [
{
"location": "hpcflow_new2-0.2.0a108/hpcflow/sdk/submission/jobscript.py:990",
"code": " init_proc = subprocess.Popen(\n args=args,\n cwd=str(self.workflow.path),\n creationflags=subprocess.CREATE_NO_WINDOW,\n )",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
}
],
"silent-process-execution": [
{
"location": "hpcflow_new2-0.2.0a108/hpcflow/sdk/helper/helper.py:111",
"code": " proc = subprocess.Popen(\n args=args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n **kwargs,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmp7f_yw727/hpcflow-new2"
}
}```
|
process
|
hpcflow has guarddog issues dependency hpcflow version result issues errors results exec location hpcflow hpcflow sdk submission jobscript py code init proc subprocess popen n args args n cwd str self workflow path n creationflags subprocess create no window n message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n silent process execution location hpcflow hpcflow sdk helper helper py code proc subprocess popen n args args n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n kwargs n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp hpcflow
| 1
|
294,805
| 25,406,757,221
|
IssuesEvent
|
2022-11-22 15:49:18
|
TDAmeritrade/stumpy
|
https://api.github.com/repos/TDAmeritrade/stumpy
|
closed
|
Split Coverage Tests for Github Actions
|
testing
|
As our test suite gets longer, the `coverage` tests, which are executed in pure Python, will continue to need more time to complete. GIven that Github Actions has a job time limit, it may be beneficial to further split up the `coverage` testing time into two separate jobs (instead of all in one). However, this will require storing the `.coverage` file between jobs. This can be accomplished by using "Uploading/Downloading Artifacts" (see [this article](https://levelup.gitconnected.com/github-actions-how-to-share-data-between-jobs-fc1547defc3e)):
To upload a file:
```
steps:
- uses: actions/checkout@v2
- run: mkdir -p path/to/artifact
- run: echo hello > path/to/artifact/world.txt
- uses: actions/upload-artifact@v2
with:
name: my-artifact
path: path/to/artifact/world.txt
```
In our case, it might look something like:
```
coverage-testing:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: ['3.7', '3.8', '3.9', '3.10']
steps:
- uses: actions/checkout@v2
- name: Set Up Python
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Display Python Version
run: python -c "import sys; print(sys.version)"
shell: bash
- name: Install STUMPY And Other Dependencies
run: pip install --editable .[ci]
shell: bash
- name: Run Black
run: black --check --diff ./
shell: bash
- name: Run Flake8
run: flake8 ./
shell: bash
- name: Run Coverage Tests
run: ./test.sh coverage
shell: bash
- uses: actions/upload-artifact@v2
with:
name: coverage_results
path: ./.coverage
```
Then, for the second half of the job you can retrieve the file:
```
steps:
- uses: actions/checkout@v2
- uses: actions/download-artifact@v2
with:
name: my-artifact
```
or in our case:
```
steps:
- uses: actions/checkout@v2
- uses: actions/download-artifact@v2
with:
name: coverage_results
```
|
1.0
|
Split Coverage Tests for Github Actions - As our test suite gets longer, the `coverage` tests, which are executed in pure Python, will continue to need more time to complete. GIven that Github Actions has a job time limit, it may be beneficial to further split up the `coverage` testing time into two separate jobs (instead of all in one). However, this will require storing the `.coverage` file between jobs. This can be accomplished by using "Uploading/Downloading Artifacts" (see [this article](https://levelup.gitconnected.com/github-actions-how-to-share-data-between-jobs-fc1547defc3e)):
To upload a file:
```
steps:
- uses: actions/checkout@v2
- run: mkdir -p path/to/artifact
- run: echo hello > path/to/artifact/world.txt
- uses: actions/upload-artifact@v2
with:
name: my-artifact
path: path/to/artifact/world.txt
```
In our case, it might look something like:
```
coverage-testing:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: ['3.7', '3.8', '3.9', '3.10']
steps:
- uses: actions/checkout@v2
- name: Set Up Python
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Display Python Version
run: python -c "import sys; print(sys.version)"
shell: bash
- name: Install STUMPY And Other Dependencies
run: pip install --editable .[ci]
shell: bash
- name: Run Black
run: black --check --diff ./
shell: bash
- name: Run Flake8
run: flake8 ./
shell: bash
- name: Run Coverage Tests
run: ./test.sh coverage
shell: bash
- uses: actions/upload-artifact@v2
with:
name: coverage_results
path: ./.coverage
```
Then, for the second half of the job you can retrieve the file:
```
steps:
- uses: actions/checkout@v2
- uses: actions/download-artifact@v2
with:
name: my-artifact
```
or in our case:
```
steps:
- uses: actions/checkout@v2
- uses: actions/download-artifact@v2
with:
name: coverage_results
```
|
non_process
|
split coverage tests for github actions as our test suite gets longer the coverage tests which are executed in pure python will continue to need more time to complete given that github actions has a job time limit it may be beneficial to further split up the coverage testing time into two separate jobs instead of all in one however this will require storing the coverage file between jobs this can be accomplished by using uploading downloading artifacts see to upload a file steps uses actions checkout run mkdir p path to artifact run echo hello path to artifact world txt uses actions upload artifact with name my artifact path path to artifact world txt in our case it might look something like coverage testing runs on matrix os strategy matrix os python version steps uses actions checkout name set up python uses actions setup python with python version matrix python version name display python version run python c import sys print sys version shell bash name install stumpy and other dependencies run pip install editable shell bash name run black run black check diff shell bash name run run shell bash name run coverage tests run test sh coverage shell bash uses actions upload artifact with name coverage results path coverage then for the second half of the job you can retrieve the file steps uses actions checkout uses actions download artifact with name my artifact or in our case steps uses actions checkout uses actions download artifact with name coverage results
| 0
|
13,466
| 15,951,447,774
|
IssuesEvent
|
2021-04-15 09:51:22
|
zammad/zammad
|
https://api.github.com/repos/zammad/zammad
|
closed
|
Attached msg-Files break filenames in Zammad (combination Exchange and Outlook)
|
bug mail processing prioritised by payment verified
|
<!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 2.8 & develop
* Installation method (source, package, ..): rpm & Source
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
* Outlook-Versions being affected for sure: Outlook 2013 & 2016 connected with Exchange
* Ticket-ID: #1034168, #1077162
### Expected behavior:
* When forwarding Mails with attachments to Zammad, Zammad will import the attachment exact like the original been send. Meaning, it will keep file extension and file name, if applicable.
### Actual behavior:
* In some special cases (Outlook + Exchange), the Mail with MSG-Attachment will contain a unique-ID (Content-ID). This causes Zammad to show the Content-ID instead of the file-name and without file extension.
### Steps to reproduce the behavior:
* Create a E-Mail with an attached .eml file
* Add a line with `Content-ID: <ee5c8a7d38e67f4f90504618004c3d9f@some.tld>` within the Part where the message attachment is referenced
* Import this file into Zammad
For reference, here's what the attachment part needs to look like:
```
------=_NextPart_000_0067_01D4A43B.1C2130F0
Content-Type: message/rfc822
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment
Content-ID: <ee5c8a7d38e67f4f90504618004c3d9f@some.tld>
Received: from mx2.zammad.com
by mx2.zammad.com with LMTP
id yOHIAN1gL1y4UwAAABwAvw
(envelope-from <alias@some.tld>)
for <mh@zammad.com>; Fri, 04 Jan 2019 13:34:21 +0000
Received: from mikasa.some.tld (mikasa.some.tld [x.x.x.x])
by mx2.zammad.com (Postfix) with ESMTPS id B89E9A02C6
for <mh@zammad.com>; Fri, 4 Jan 2019 13:34:20 +0000 (UTC)
Received: from EHLO mikasa.some.tld (mikasa.some.tld [x.x.x.x])
by mikasa.some.tld with ESMTPSA
(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128)
; Fri, 4 Jan 2019 14:34:17 +0100
From: "Marcel Herrguth" <alias@some.tld>
To: <alias@another.tld>
Subject: Test Attachment Text
Date: Fri, 4 Jan 2019 14:34:17 +0100
Organization: The home of Anime
Message-ID: <ee5c8a7d38e67f4f90504618004c3d9f@some.tld>
MIME-Version: 1.0
Content-Type: multipart/mixed;
boundary="----=_NextPart_000_005E_01D4A43B.1C1DD590"
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQFpQ8vnu4DrfNDtB+p/XwBv/6/XNg==
This is a multipart message in MIME format.
------=_NextPart_000_005E_01D4A43B.1C1DD590
Content-Type: text/plain;
boundary="=_d1b5b47c2bb88af990b40cf2152b3d67";
charset="us-ascii"
Content-Transfer-Encoding: 7bit
680gh4r6s8t
04 h
640trs
684 h06
8rs4t0
640
--
```
Look in Zammad:

Yes I'm sure this is a bug and no feature request or a general question.
|
1.0
|
Attached msg-Files break filenames in Zammad (combination Exchange and Outlook) - <!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 2.8 & develop
* Installation method (source, package, ..): rpm & Source
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
* Outlook-Versions being affected for sure: Outlook 2013 & 2016 connected with Exchange
* Ticket-ID: #1034168, #1077162
### Expected behavior:
* When forwarding Mails with attachments to Zammad, Zammad will import the attachment exact like the original been send. Meaning, it will keep file extension and file name, if applicable.
### Actual behavior:
* In some special cases (Outlook + Exchange), the Mail with MSG-Attachment will contain a unique-ID (Content-ID). This causes Zammad to show the Content-ID instead of the file-name and without file extension.
### Steps to reproduce the behavior:
* Create a E-Mail with an attached .eml file
* Add a line with `Content-ID: <ee5c8a7d38e67f4f90504618004c3d9f@some.tld>` within the Part where the message attachment is referenced
* Import this file into Zammad
For reference, here's what the attachment part needs to look like:
```
------=_NextPart_000_0067_01D4A43B.1C2130F0
Content-Type: message/rfc822
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment
Content-ID: <ee5c8a7d38e67f4f90504618004c3d9f@some.tld>
Received: from mx2.zammad.com
by mx2.zammad.com with LMTP
id yOHIAN1gL1y4UwAAABwAvw
(envelope-from <alias@some.tld>)
for <mh@zammad.com>; Fri, 04 Jan 2019 13:34:21 +0000
Received: from mikasa.some.tld (mikasa.some.tld [x.x.x.x])
by mx2.zammad.com (Postfix) with ESMTPS id B89E9A02C6
for <mh@zammad.com>; Fri, 4 Jan 2019 13:34:20 +0000 (UTC)
Received: from EHLO mikasa.some.tld (mikasa.some.tld [x.x.x.x])
by mikasa.some.tld with ESMTPSA
(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128)
; Fri, 4 Jan 2019 14:34:17 +0100
From: "Marcel Herrguth" <alias@some.tld>
To: <alias@another.tld>
Subject: Test Attachment Text
Date: Fri, 4 Jan 2019 14:34:17 +0100
Organization: The home of Anime
Message-ID: <ee5c8a7d38e67f4f90504618004c3d9f@some.tld>
MIME-Version: 1.0
Content-Type: multipart/mixed;
boundary="----=_NextPart_000_005E_01D4A43B.1C1DD590"
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQFpQ8vnu4DrfNDtB+p/XwBv/6/XNg==
This is a multipart message in MIME format.
------=_NextPart_000_005E_01D4A43B.1C1DD590
Content-Type: text/plain;
boundary="=_d1b5b47c2bb88af990b40cf2152b3d67";
charset="us-ascii"
Content-Transfer-Encoding: 7bit
680gh4r6s8t
04 h
640trs
684 h06
8rs4t0
640
--
```
Look in Zammad:

Yes I'm sure this is a bug and no feature request or a general question.
|
process
|
attached msg files break filenames in zammad combination exchange and outlook hi there thanks for filing an issue please ensure the following things before creating an issue thank you 🤓 since november we handle all requests except real bugs at our community board full explanation please post feature requests development questions technical questions on the board if you think you hit a bug please continue search existing issues and the changelog md for your issue there might be a solution already make sure to use the latest version of zammad if possible add the log production log file from your system attention make sure no confidential data is in it please write the issue in english don t remove the template otherwise we will close the issue without further comments ask questions about zammad configuration and usage at our mailinglist see note we always do our best unfortunately sometimes there are too many requests and we can t handle everything at once if you want to prioritize escalate your issue you can do so by means of a support contract see the upper textblock will be removed automatically when you submit your issue infos used zammad version develop installation method source package rpm source operating system any database version any elasticsearch version any browser version any outlook versions being affected for sure outlook connected with exchange ticket id expected behavior when forwarding mails with attachments to zammad zammad will import the attachment exact like the original been send meaning it will keep file extension and file name if applicable actual behavior in some special cases outlook exchange the mail with msg attachment will contain a unique id content id this causes zammad to show the content id instead of the file name and without file extension steps to reproduce the behavior create a e mail with an attached eml file add a line with content id within the part where the message attachment is referenced import this file into zammad for reference here s what the attachment part needs to look like nextpart content type message content transfer encoding content disposition attachment content id received from zammad com by zammad com with lmtp id envelope from for fri jan received from mikasa some tld mikasa some tld by zammad com postfix with esmtps id for fri jan utc received from ehlo mikasa some tld mikasa some tld by mikasa some tld with esmtpsa version cipher ecdhe rsa gcm bits fri jan from marcel herrguth to subject test attachment text date fri jan organization the home of anime message id mime version content type multipart mixed boundary nextpart x mailer microsoft outlook thread index p xwbv xng this is a multipart message in mime format nextpart content type text plain boundary charset us ascii content transfer encoding h look in zammad yes i m sure this is a bug and no feature request or a general question
| 1
|
2,182
| 5,032,103,696
|
IssuesEvent
|
2016-12-16 10:02:29
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Html processor incorrectly processes invalid html
|
SYSTEM: resource processing TYPE: bug
|
```html
<table border='0' class='EditTable' id='TblGrid_domainRecordTablesARecordsGrid_2'>
<tbody>
<tr id='Act_Buttons'>
<td class='navButton ui-widget-content'>
<a href='javascript:void(0)' id='pData' class='fm-button ui-state-default ui-corner-left'>
<span class='ui-icon ui-icon-triangle-1-w'></span></div>
<a href='javascript:void(0)' id='nData' class='fm-button ui-state-default ui-corner-right'>
<span class='ui-icon ui-icon-triangle-1-e'></span></div>
</td>
<td class='EditButton ui-widget-content'>
<a href='javascript:void(0)' id='sData' class='fm-button ui-state-default ui-corner-all'>Submit</a>
<a href='javascript:void(0)' id='cData' class='fm-button ui-state-default ui-corner-all'>Close</a>
</td>
</tr>
<tr style='display:none' class='binfo'>
<td class='bottominfo' colspan='2'></td>
</tr>
</tbody>
</table>
```
|
1.0
|
Html processor incorrectly processes invalid html - ```html
<table border='0' class='EditTable' id='TblGrid_domainRecordTablesARecordsGrid_2'>
<tbody>
<tr id='Act_Buttons'>
<td class='navButton ui-widget-content'>
<a href='javascript:void(0)' id='pData' class='fm-button ui-state-default ui-corner-left'>
<span class='ui-icon ui-icon-triangle-1-w'></span></div>
<a href='javascript:void(0)' id='nData' class='fm-button ui-state-default ui-corner-right'>
<span class='ui-icon ui-icon-triangle-1-e'></span></div>
</td>
<td class='EditButton ui-widget-content'>
<a href='javascript:void(0)' id='sData' class='fm-button ui-state-default ui-corner-all'>Submit</a>
<a href='javascript:void(0)' id='cData' class='fm-button ui-state-default ui-corner-all'>Close</a>
</td>
</tr>
<tr style='display:none' class='binfo'>
<td class='bottominfo' colspan='2'></td>
</tr>
</tbody>
</table>
```
|
process
|
html processor incorrectly processes invalid html html submit close
| 1
|
13,684
| 16,442,174,779
|
IssuesEvent
|
2021-05-20 15:27:02
|
encode/uvicorn
|
https://api.github.com/repos/encode/uvicorn
|
closed
|
uvicorn.run() makes a shell program not killable by CTRL+C
|
multiprocessing
|
I'm developing a command line tool that hosts a uvicorn server, and it would be very convenient to terminate the process with CTRL+C.
For some reason, launching a sevter causes the program to not properly terminate on CTRL+X.
I need to kill it by sending a "kill -9" to the process in order to free the terminal.
=======================
I'm using uvicorn as an embeded server within a command line tool, and
```
INFO: Started server process [2120330]
INFO: Waiting for application startup.
INFO: ASGI 'lifespan' protocol appears unsupported.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:5000 (Press CTRL+C to quit)
^CINFO: Shutting down
INFO: Finished server process [2120330]
```
=======================
Although the log says the server process is terminated, the programm is still stuck, and needs a kill -9 to terminate
|
1.0
|
uvicorn.run() makes a shell program not killable by CTRL+C - I'm developing a command line tool that hosts a uvicorn server, and it would be very convenient to terminate the process with CTRL+C.
For some reason, launching a sevter causes the program to not properly terminate on CTRL+X.
I need to kill it by sending a "kill -9" to the process in order to free the terminal.
=======================
I'm using uvicorn as an embeded server within a command line tool, and
```
INFO: Started server process [2120330]
INFO: Waiting for application startup.
INFO: ASGI 'lifespan' protocol appears unsupported.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:5000 (Press CTRL+C to quit)
^CINFO: Shutting down
INFO: Finished server process [2120330]
```
=======================
Although the log says the server process is terminated, the programm is still stuck, and needs a kill -9 to terminate
|
process
|
uvicorn run makes a shell program not killable by ctrl c i m developing a command line tool that hosts a uvicorn server and it would be very convenient to terminate the process with ctrl c for some reason launching a sevter causes the program to not properly terminate on ctrl x i need to kill it by sending a kill to the process in order to free the terminal i m using uvicorn as an embeded server within a command line tool and info started server process info waiting for application startup info asgi lifespan protocol appears unsupported info application startup complete info uvicorn running on press ctrl c to quit cinfo shutting down info finished server process although the log says the server process is terminated the programm is still stuck and needs a kill to terminate
| 1
|
1,884
| 4,712,357,522
|
IssuesEvent
|
2016-10-14 16:29:59
|
material-motion/material-motion-family-pop-swift
|
https://api.github.com/repos/material-motion/material-motion-family-pop-swift
|
closed
|
Cut the v1.0.0 release
|
Process
|
This must be run by a @material-motion/core-team member.
`mdm release cut`
|
1.0
|
Cut the v1.0.0 release - This must be run by a @material-motion/core-team member.
`mdm release cut`
|
process
|
cut the release this must be run by a material motion core team member mdm release cut
| 1
|
20,951
| 27,812,769,330
|
IssuesEvent
|
2023-03-18 10:21:46
|
nextflow-io/nextflow
|
https://api.github.com/repos/nextflow-io/nextflow
|
closed
|
Textual input wildcard.
|
stale lang/processes
|
## New Feature & Justifying Scenario
Scenarios arise where an arbitrary number of same-named files get generated by some process which are then passed as an input to a subsequent process. The present wildcards `*` and `?` solve the problem of uniquely naming the links to or copies of these files in the downstream process, however, I am unaware of the capability of leaving the full name pattern of the files underspecified with a text wildcard.
This is technically possible in the case of `*` for the case of it appearing alone in the file string in an input declaration, and I would love it if this could be made more well-defined either as a separate character or as an extension of that property of `*` to `seq*` form inputs (for BC purposes, I imagine the former is preferable).
Motivation for this, instead of fully specifying the filename with a numeric wildcard only for multiple instances of that filename, is two-fold: firstly, it can be very strongly preferable to retain some unique string in the filename that is only discovered at run-time (e.g. sample ID) for purposes of analysis outside of Nextflow or by subsequent workflows. Secondly, it allows for improved robustness of the wildcard-based input method in the case of using modules you are perhaps not in control of in a context where you might want to use one or other that use slightly different naming conventions - right now this has to be resolved with extra boilerplate for each different naming convention.
## Implementation
Let's say the feature were to use a new character and let's choose to represent this capability with `@`. Consider the following patterns used in an input declaration:
1. `"@.report"`
2. `"*.report"`
3. `"@?.report"`
4. `"@*.report"`
5. `"@???.report"`
6. `"@*"`
7. `"@?"`
8. `"@???"`
The first and second would behave the same for a single file, while the first could either throw the "input file name collision" error for multiple files or continue to behave the same as `*` (I prefer the former, as it provides a more consistent behaviour).
For a single filename, the third and fourth would behave identically and for some number of files named "X.report" would either give as input "X.report" in the case of one file or "X1.report", "X2.report", "X3.report", etc in the case of multiple files. The fifth would do the same but with the usual padding zeroes of '?': "X001.report", "X002.report", "X003.report" etc.
The final three would be for consistency to the limiting case: it shouldn't, in the limit, be necessary to specify anything about the filename, but simply take all filenames passed to the input and simply append the integer as needed. This would, for the case of files with the name "X.report" yield "X.report1", "X.report2", "X.report3", and so on.
This behaviour of `@` would also allow multiple filenames to be passed to the input, say some number of files either named "X.report" or "Y.report". The third and fourth would either give as input "X.report" and "Y.report" in the case of one file of each, or "X1.report", "X2.report", "X3.report", "Y1.report", "Y2.report", etc in the case of multiple files of each. The fifth would do the same but with the usual padding zeroes of '?': "X001.report", "X002.report", "X003.report", "Y001.report", "Y002.report", etc. The final three would look similar but with the incrementing integer placed at the end.
|
1.0
|
Textual input wildcard. - ## New Feature & Justifying Scenario
Scenarios arise where an arbitrary number of same-named files get generated by some process which are then passed as an input to a subsequent process. The present wildcards `*` and `?` solve the problem of uniquely naming the links to or copies of these files in the downstream process, however, I am unaware of the capability of leaving the full name pattern of the files underspecified with a text wildcard.
This is technically possible in the case of `*` for the case of it appearing alone in the file string in an input declaration, and I would love it if this could be made more well-defined either as a separate character or as an extension of that property of `*` to `seq*` form inputs (for BC purposes, I imagine the former is preferable).
Motivation for this, instead of fully specifying the filename with a numeric wildcard only for multiple instances of that filename, is two-fold: firstly, it can be very strongly preferable to retain some unique string in the filename that is only discovered at run-time (e.g. sample ID) for purposes of analysis outside of Nextflow or by subsequent workflows. Secondly, it allows for improved robustness of the wildcard-based input method in the case of using modules you are perhaps not in control of in a context where you might want to use one or other that use slightly different naming conventions - right now this has to be resolved with extra boilerplate for each different naming convention.
## Implementation
Let's say the feature were to use a new character and let's choose to represent this capability with `@`. Consider the following patterns used in an input declaration:
1. `"@.report"`
2. `"*.report"`
3. `"@?.report"`
4. `"@*.report"`
5. `"@???.report"`
6. `"@*"`
7. `"@?"`
8. `"@???"`
The first and second would behave the same for a single file, while the first could either throw the "input file name collision" error for multiple files or continue to behave the same as `*` (I prefer the former, as it provides a more consistent behaviour).
For a single filename, the third and fourth would behave identically and for some number of files named "X.report" would either give as input "X.report" in the case of one file or "X1.report", "X2.report", "X3.report", etc in the case of multiple files. The fifth would do the same but with the usual padding zeroes of '?': "X001.report", "X002.report", "X003.report" etc.
The final three would be for consistency to the limiting case: it shouldn't, in the limit, be necessary to specify anything about the filename, but simply take all filenames passed to the input and simply append the integer as needed. This would, for the case of files with the name "X.report" yield "X.report1", "X.report2", "X.report3", and so on.
This behaviour of `@` would also allow multiple filenames to be passed to the input, say some number of files either named "X.report" or "Y.report". The third and fourth would either give as input "X.report" and "Y.report" in the case of one file of each, or "X1.report", "X2.report", "X3.report", "Y1.report", "Y2.report", etc in the case of multiple files of each. The fifth would do the same but with the usual padding zeroes of '?': "X001.report", "X002.report", "X003.report", "Y001.report", "Y002.report", etc. The final three would look similar but with the incrementing integer placed at the end.
|
process
|
textual input wildcard new feature justifying scenario scenarios arise where an arbitrary number of same named files get generated by some process which are then passed as an input to a subsequent process the present wildcards and solve the problem of uniquely naming the links to or copies of these files in the downstream process however i am unaware of the capability of leaving the full name pattern of the files underspecified with a text wildcard this is technically possible in the case of for the case of it appearing alone in the file string in an input declaration and i would love it if this could be made more well defined either as a separate character or as an extension of that property of to seq form inputs for bc purposes i imagine the former is preferable motivation for this instead of fully specifying the filename with a numeric wildcard only for multiple instances of that filename is two fold firstly it can be very strongly preferable to retain some unique string in the filename that is only discovered at run time e g sample id for purposes of analysis outside of nextflow or by subsequent workflows secondly it allows for improved robustness of the wildcard based input method in the case of using modules you are perhaps not in control of in a context where you might want to use one or other that use slightly different naming conventions right now this has to be resolved with extra boilerplate for each different naming convention implementation let s say the feature were to use a new character and let s choose to represent this capability with consider the following patterns used in an input declaration report report report report report the first and second would behave the same for a single file while the first could either throw the input file name collision error for multiple files or continue to behave the same as i prefer the former as it provides a more consistent behaviour for a single filename the third and fourth would behave identically and for some number of files named x report would either give as input x report in the case of one file or report report report etc in the case of multiple files the fifth would do the same but with the usual padding zeroes of report report report etc the final three would be for consistency to the limiting case it shouldn t in the limit be necessary to specify anything about the filename but simply take all filenames passed to the input and simply append the integer as needed this would for the case of files with the name x report yield x x x and so on this behaviour of would also allow multiple filenames to be passed to the input say some number of files either named x report or y report the third and fourth would either give as input x report and y report in the case of one file of each or report report report report report etc in the case of multiple files of each the fifth would do the same but with the usual padding zeroes of report report report report report etc the final three would look similar but with the incrementing integer placed at the end
| 1
|
172,368
| 13,303,156,232
|
IssuesEvent
|
2020-08-25 15:09:11
|
vgstation-coders/vgstation13
|
https://api.github.com/repos/vgstation-coders/vgstation13
|
closed
|
Hitting a blob with a fireball causes infinite explosions and lags the game to death
|
Bug / Fix Needs Moar Testing ⚠️ OH GOD IT'S LOOSE ⚠️
|
Good thing blob and wizard will never fire together, r-right?
|
1.0
|
Hitting a blob with a fireball causes infinite explosions and lags the game to death - Good thing blob and wizard will never fire together, r-right?
|
non_process
|
hitting a blob with a fireball causes infinite explosions and lags the game to death good thing blob and wizard will never fire together r right
| 0
|
19,868
| 26,280,295,673
|
IssuesEvent
|
2023-01-07 08:01:48
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
AbortController/AbortSignal Triggering 'Error' Event in Child Process
|
child_process good first issue
|
### Version
v18.9.0
### Platform
Linux Mint 20.1
### Subsystem
child_process
### What steps will reproduce the bug?
1. Create a child process with an AbortSignal attached.
2. Abort the child process via AbortSignal before the child process `exit` event
3. This will trigger the child process's `error` event.
The following code will generate the error for `spawn`, `execFile`, and `exec`
```js
const { spawn, exec, execFile } = require("child_process");
const tests = [
{ commandName: "spawn", command: spawn },
{ commandName: "execFile", command: execFile },
{ commandName: "exec", command: exec }
];
(async () => {
for ( let { commandName, command } of tests ) {
const abortController = new AbortController();
await new Promise( resolve => {
// I used 'ls' but the command does not seem to effect the result
const child = command( "ls", { signal: abortController.signal } )
.on( "error", ( err ) =>
console.log({ event: `${commandName}.error`, killed: child.killed, err }) )
.on( "exit", ( code, signal ) => {
console.log({ event: `${commandName}.exit`, code, signal, killed: child.killed });
} )
.on( "close", ( code, signal ) => {
console.log({ event: `${commandName}.close`, code, signal, killed: child.killed });
resolve();
} )
.on( "spawn", () => {
console.log({ event: `${commandName}.spawn`, killed: child.killed });
// child.kill( "SIGTERM" ); // child will NOT cause an 'error' event
abortController.abort(); // child will cause an 'error' event
} );
} );
}
})();
```
### How often does it reproduce? Is there a required condition?
This occurs every time, a child process is aborted via AbortSignal and it has not performed it's exit event.
### What is the expected behavior?
```console
{ event: 'spawn.spawn', killed: false }
{ event: 'spawn.exit', code: null, signal: 'SIGTERM', killed: true }
{ event: 'spawn.close', code: null, signal: 'SIGTERM', killed: true }
...
Repeated for execFile and exec
```
### What do you see instead?
```console
{ event: 'spawn.spawn', killed: false }
{
event: 'spawn.error',
killed: true,
err: AbortError: The operation was aborted
at abortChildProcess (node:child_process:706:27)
at AbortSignal.onAbortListener (node:child_process:776:7)
at [nodejs.internal.kHybridDispatch] (node:internal/event_target:731:20)
at AbortSignal.dispatchEvent (node:internal/event_target:673:26)
at abortSignal (node:internal/abort_controller:292:10)
at AbortController.abort (node:internal/abort_controller:322:5)
at ChildProcess.<anonymous> (/index.js:30:29)
at ChildProcess.emit (node:events:513:28)
at onSpawnNT (node:internal/child_process:481:8)
at process.processTicksAndRejections (node:internal/process/task_queues:81:21) {
code: 'ABORT_ERR'
}
}
{ event: 'spawn.exit', code: 0, signal: null, killed: true }
{ event: 'spawn.close', code: 0, signal: null, killed: true }
...
Repeated for execFile and exec
```
### Additional information
According to the documentation a child process `error` event occurs only on:
> * The process could not be spawned, or
> * The process could not be killed, or
> * Sending a message to the child process failed.
>
> [https://nodejs.org/api/child_process.html#event-error](https://nodejs.org/api/child_process.html#event-error)
None of these conditions are true when the process is aborted via AbortSignal. If the child process is terminated via a `child.kill( "SIGTERM" )` it does not trigger the `error` event.
I am assuming that `AbortSignal.abort()` is suppose to act like `child.kill( "SIGTERM" )` and not that the documentation is out-of-date.
|
1.0
|
AbortController/AbortSignal Triggering 'Error' Event in Child Process - ### Version
v18.9.0
### Platform
Linux Mint 20.1
### Subsystem
child_process
### What steps will reproduce the bug?
1. Create a child process with an AbortSignal attached.
2. Abort the child process via AbortSignal before the child process `exit` event
3. This will trigger the child process's `error` event.
The following code will generate the error for `spawn`, `execFile`, and `exec`
```js
const { spawn, exec, execFile } = require("child_process");
const tests = [
{ commandName: "spawn", command: spawn },
{ commandName: "execFile", command: execFile },
{ commandName: "exec", command: exec }
];
(async () => {
for ( let { commandName, command } of tests ) {
const abortController = new AbortController();
await new Promise( resolve => {
// I used 'ls' but the command does not seem to effect the result
const child = command( "ls", { signal: abortController.signal } )
.on( "error", ( err ) =>
console.log({ event: `${commandName}.error`, killed: child.killed, err }) )
.on( "exit", ( code, signal ) => {
console.log({ event: `${commandName}.exit`, code, signal, killed: child.killed });
} )
.on( "close", ( code, signal ) => {
console.log({ event: `${commandName}.close`, code, signal, killed: child.killed });
resolve();
} )
.on( "spawn", () => {
console.log({ event: `${commandName}.spawn`, killed: child.killed });
// child.kill( "SIGTERM" ); // child will NOT cause an 'error' event
abortController.abort(); // child will cause an 'error' event
} );
} );
}
})();
```
### How often does it reproduce? Is there a required condition?
This occurs every time, a child process is aborted via AbortSignal and it has not performed it's exit event.
### What is the expected behavior?
```console
{ event: 'spawn.spawn', killed: false }
{ event: 'spawn.exit', code: null, signal: 'SIGTERM', killed: true }
{ event: 'spawn.close', code: null, signal: 'SIGTERM', killed: true }
...
Repeated for execFile and exec
```
### What do you see instead?
```console
{ event: 'spawn.spawn', killed: false }
{
event: 'spawn.error',
killed: true,
err: AbortError: The operation was aborted
at abortChildProcess (node:child_process:706:27)
at AbortSignal.onAbortListener (node:child_process:776:7)
at [nodejs.internal.kHybridDispatch] (node:internal/event_target:731:20)
at AbortSignal.dispatchEvent (node:internal/event_target:673:26)
at abortSignal (node:internal/abort_controller:292:10)
at AbortController.abort (node:internal/abort_controller:322:5)
at ChildProcess.<anonymous> (/index.js:30:29)
at ChildProcess.emit (node:events:513:28)
at onSpawnNT (node:internal/child_process:481:8)
at process.processTicksAndRejections (node:internal/process/task_queues:81:21) {
code: 'ABORT_ERR'
}
}
{ event: 'spawn.exit', code: 0, signal: null, killed: true }
{ event: 'spawn.close', code: 0, signal: null, killed: true }
...
Repeated for execFile and exec
```
### Additional information
According to the documentation a child process `error` event occurs only on:
> * The process could not be spawned, or
> * The process could not be killed, or
> * Sending a message to the child process failed.
>
> [https://nodejs.org/api/child_process.html#event-error](https://nodejs.org/api/child_process.html#event-error)
None of these conditions are true when the process is aborted via AbortSignal. If the child process is terminated via a `child.kill( "SIGTERM" )` it does not trigger the `error` event.
I am assuming that `AbortSignal.abort()` is suppose to act like `child.kill( "SIGTERM" )` and not that the documentation is out-of-date.
|
process
|
abortcontroller abortsignal triggering error event in child process version platform linux mint subsystem child process what steps will reproduce the bug create a child process with an abortsignal attached abort the child process via abortsignal before the child process exit event this will trigger the child process s error event the following code will generate the error for spawn execfile and exec js const spawn exec execfile require child process const tests commandname spawn command spawn commandname execfile command execfile commandname exec command exec async for let commandname command of tests const abortcontroller new abortcontroller await new promise resolve i used ls but the command does not seem to effect the result const child command ls signal abortcontroller signal on error err console log event commandname error killed child killed err on exit code signal console log event commandname exit code signal killed child killed on close code signal console log event commandname close code signal killed child killed resolve on spawn console log event commandname spawn killed child killed child kill sigterm child will not cause an error event abortcontroller abort child will cause an error event how often does it reproduce is there a required condition this occurs every time a child process is aborted via abortsignal and it has not performed it s exit event what is the expected behavior console event spawn spawn killed false event spawn exit code null signal sigterm killed true event spawn close code null signal sigterm killed true repeated for execfile and exec what do you see instead console event spawn spawn killed false event spawn error killed true err aborterror the operation was aborted at abortchildprocess node child process at abortsignal onabortlistener node child process at node internal event target at abortsignal dispatchevent node internal event target at abortsignal node internal abort controller at abortcontroller abort node internal abort controller at childprocess index js at childprocess emit node events at onspawnnt node internal child process at process processticksandrejections node internal process task queues code abort err event spawn exit code signal null killed true event spawn close code signal null killed true repeated for execfile and exec additional information according to the documentation a child process error event occurs only on the process could not be spawned or the process could not be killed or sending a message to the child process failed none of these conditions are true when the process is aborted via abortsignal if the child process is terminated via a child kill sigterm it does not trigger the error event i am assuming that abortsignal abort is suppose to act like child kill sigterm and not that the documentation is out of date
| 1
|
19,516
| 25,828,853,807
|
IssuesEvent
|
2022-12-12 14:47:57
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
Integrated Terminal Opens wrong folder from Terminal Pane Button
|
bug terminal-process
|
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.69.2 (Universal)
- OS Version: MacOS Monterey
Steps to Reproduce:
1. Set a value for Terminal > Integrated: Cwd
2. Click the "+" in the integrated terminal pane
3. Dropdown at the top asks you to select the folder to open in the integrated terminal
4. Terminal tab opens, but at the default directory for the workspace, not the one selected
Note - right clicking the folder from the explorer pane and selecting "Open in Integrated Terminal" works as expected
Seems to have started since the 1.69.2 release.
https://user-images.githubusercontent.com/8314873/181294070-c6c8df36-6550-4530-aa01-c10b1382fe7a.mov
|
1.0
|
Integrated Terminal Opens wrong folder from Terminal Pane Button - <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.69.2 (Universal)
- OS Version: MacOS Monterey
Steps to Reproduce:
1. Set a value for Terminal > Integrated: Cwd
2. Click the "+" in the integrated terminal pane
3. Dropdown at the top asks you to select the folder to open in the integrated terminal
4. Terminal tab opens, but at the default directory for the workspace, not the one selected
Note - right clicking the folder from the explorer pane and selecting "Open in Integrated Terminal" works as expected
Seems to have started since the 1.69.2 release.
https://user-images.githubusercontent.com/8314873/181294070-c6c8df36-6550-4530-aa01-c10b1382fe7a.mov
|
process
|
integrated terminal opens wrong folder from terminal pane button does this issue occur when all extensions are disabled yes report issue dialog can assist with this vs code version universal os version macos monterey steps to reproduce set a value for terminal integrated cwd click the in the integrated terminal pane dropdown at the top asks you to select the folder to open in the integrated terminal terminal tab opens but at the default directory for the workspace not the one selected note right clicking the folder from the explorer pane and selecting open in integrated terminal works as expected seems to have started since the release
| 1
|
309,391
| 9,474,183,336
|
IssuesEvent
|
2019-04-19 06:17:29
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.facebook.com - see bug description
|
browser-firefox priority-critical
|
<!-- @browser: Firefox 66.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0 -->
<!-- @reported_with: -->
**URL**: https://www.facebook.com/
**Browser / Version**: Firefox 66.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: The FB newsfeed is blank, in Firefox. It shows up using Chrome browser.
**Steps to Reproduce**:
My News Feed does not appear using the Firefox browser. It works fine using the Chrome browser.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.facebook.com - see bug description - <!-- @browser: Firefox 66.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0 -->
<!-- @reported_with: -->
**URL**: https://www.facebook.com/
**Browser / Version**: Firefox 66.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: The FB newsfeed is blank, in Firefox. It shows up using Chrome browser.
**Steps to Reproduce**:
My News Feed does not appear using the Firefox browser. It works fine using the Chrome browser.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
see bug description url browser version firefox operating system windows tested another browser yes problem type something else description the fb newsfeed is blank in firefox it shows up using chrome browser steps to reproduce my news feed does not appear using the firefox browser it works fine using the chrome browser browser configuration none from with ❤️
| 0
|
12,823
| 15,207,492,360
|
IssuesEvent
|
2021-02-17 00:19:25
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
closed
|
Upgrade AWS Go SDK to v2
|
epic p2 team:core infra team:data analytics team:data processing team:security engineering
|
### Description
AWS Go SDK v2 is now [generally available](https://aws.amazon.com/about-aws/whats-new/2021/01/aws-sdk-for-go-version-2-now-generally-available/)!
There are substantial performance improvements in the new library (which, in turn, lead to cost savings), so it's definitely worth our time to switch sooner than later. We'll wait a few weeks for them to patch any issues before we start upgrading Panther services. The upgrade need not happen all at once - each service owner can migrate their own service when they have a few spare cycles to do so.
This epic is not necessarily anyone's dedicated responsibility yet; it's just a convenient place to track which services have been upgraded and any roadblocks we encounter along the way.
See [migration docs](https://aws.github.io/aws-sdk-go-v2/docs/migrating/)
### Acceptance Criteria
- Eventually, all aws go sdk imports are switched to `github.com/aws/aws-sdk-go-v2`
Check off each service once the migration is complete:
#### `cmd`
- [ ] `devtools`
- [ ] `opstools`
#### `internal/compliance`
- [ ] `alert-forwarder`
- [ ] `alert-processor`
- [ ] `aws-event-processor`
- [ ] `compliance-api`
- [ ] `datalake-forwarder`
- [ ] `remediation-api`
- [ ] `remediation-processor`
- [ ] `resources-api`
- [ ] `snapshot-poller`
- [ ] `snapshot-scheduler`
#### `internal/core`
- [ ] `alert-delivery`
- [ ] `analysis-api`
- [ ] `custom-resources`
- [ ] `database-api` (enterprise)
- [ ] `layer-manager`
- [ ] `logtypesapi`
- [ ] `metrics-api`
- [ ] `organization-api`
- [ ] `outputs-api`
- [ ] `source-api`
- [ ] `users-api`
#### `internal/log-analysis`
- [ ] `alert-forwarder`
- [ ] `alerts-api`
- [ ] `datacatalog-compactor` (enterprise)
- [ ] `datacatalog-compactor-reaper` (enterprise)
- [ ] `datacatalog-updater`
- [ ] `log-processor`
- [ ] `log-puller` (enterprise)
- [ ] `message-forwarder`
#### `pkg`
- [ ] `awsathena`
- [ ] `awsbatch`
- [ ] `awscfn`
- [ ] `awscostexplorer`
- [ ] `awsretry`
- [ ] `awssqs`
- [ ] `awsutils`
- [x] `box` (N/A)
- [ ] `encryption`
- [x] `extract` (N/A)
- [ ] `gatewayapi`
- [ ] `genericapi`
- [x] `lambdalogger` (N/A)
- [x] `metrics` (N/A)
- [x] `oplog`(N/A)
- [x] `priorityq` (N/A)
- [x] `prompt` (N/A)
- [x] `shutil` (N/A)
- [x] `stringset` (N/A)
- [ ] `testutils`
- [x] `unbox` (N/A)
- [ ] `x`
#### `tools`
- [ ] `mage`
|
1.0
|
Upgrade AWS Go SDK to v2 - ### Description
AWS Go SDK v2 is now [generally available](https://aws.amazon.com/about-aws/whats-new/2021/01/aws-sdk-for-go-version-2-now-generally-available/)!
There are substantial performance improvements in the new library (which, in turn, lead to cost savings), so it's definitely worth our time to switch sooner than later. We'll wait a few weeks for them to patch any issues before we start upgrading Panther services. The upgrade need not happen all at once - each service owner can migrate their own service when they have a few spare cycles to do so.
This epic is not necessarily anyone's dedicated responsibility yet; it's just a convenient place to track which services have been upgraded and any roadblocks we encounter along the way.
See [migration docs](https://aws.github.io/aws-sdk-go-v2/docs/migrating/)
### Acceptance Criteria
- Eventually, all aws go sdk imports are switched to `github.com/aws/aws-sdk-go-v2`
Check off each service once the migration is complete:
#### `cmd`
- [ ] `devtools`
- [ ] `opstools`
#### `internal/compliance`
- [ ] `alert-forwarder`
- [ ] `alert-processor`
- [ ] `aws-event-processor`
- [ ] `compliance-api`
- [ ] `datalake-forwarder`
- [ ] `remediation-api`
- [ ] `remediation-processor`
- [ ] `resources-api`
- [ ] `snapshot-poller`
- [ ] `snapshot-scheduler`
#### `internal/core`
- [ ] `alert-delivery`
- [ ] `analysis-api`
- [ ] `custom-resources`
- [ ] `database-api` (enterprise)
- [ ] `layer-manager`
- [ ] `logtypesapi`
- [ ] `metrics-api`
- [ ] `organization-api`
- [ ] `outputs-api`
- [ ] `source-api`
- [ ] `users-api`
#### `internal/log-analysis`
- [ ] `alert-forwarder`
- [ ] `alerts-api`
- [ ] `datacatalog-compactor` (enterprise)
- [ ] `datacatalog-compactor-reaper` (enterprise)
- [ ] `datacatalog-updater`
- [ ] `log-processor`
- [ ] `log-puller` (enterprise)
- [ ] `message-forwarder`
#### `pkg`
- [ ] `awsathena`
- [ ] `awsbatch`
- [ ] `awscfn`
- [ ] `awscostexplorer`
- [ ] `awsretry`
- [ ] `awssqs`
- [ ] `awsutils`
- [x] `box` (N/A)
- [ ] `encryption`
- [x] `extract` (N/A)
- [ ] `gatewayapi`
- [ ] `genericapi`
- [x] `lambdalogger` (N/A)
- [x] `metrics` (N/A)
- [x] `oplog`(N/A)
- [x] `priorityq` (N/A)
- [x] `prompt` (N/A)
- [x] `shutil` (N/A)
- [x] `stringset` (N/A)
- [ ] `testutils`
- [x] `unbox` (N/A)
- [ ] `x`
#### `tools`
- [ ] `mage`
|
process
|
upgrade aws go sdk to description aws go sdk is now there are substantial performance improvements in the new library which in turn lead to cost savings so it s definitely worth our time to switch sooner than later we ll wait a few weeks for them to patch any issues before we start upgrading panther services the upgrade need not happen all at once each service owner can migrate their own service when they have a few spare cycles to do so this epic is not necessarily anyone s dedicated responsibility yet it s just a convenient place to track which services have been upgraded and any roadblocks we encounter along the way see acceptance criteria eventually all aws go sdk imports are switched to github com aws aws sdk go check off each service once the migration is complete cmd devtools opstools internal compliance alert forwarder alert processor aws event processor compliance api datalake forwarder remediation api remediation processor resources api snapshot poller snapshot scheduler internal core alert delivery analysis api custom resources database api enterprise layer manager logtypesapi metrics api organization api outputs api source api users api internal log analysis alert forwarder alerts api datacatalog compactor enterprise datacatalog compactor reaper enterprise datacatalog updater log processor log puller enterprise message forwarder pkg awsathena awsbatch awscfn awscostexplorer awsretry awssqs awsutils box n a encryption extract n a gatewayapi genericapi lambdalogger n a metrics n a oplog n a priorityq n a prompt n a shutil n a stringset n a testutils unbox n a x tools mage
| 1
|
281,427
| 24,392,529,530
|
IssuesEvent
|
2022-10-04 16:18:36
|
uclab-potsdam/klimataz
|
https://api.github.com/repos/uclab-potsdam/klimataz
|
opened
|
larger hitbox for heating lines
|
enhancement feedback from testing vis
|
it is hard to click on the lines in the new buildings chart to switch the energy
https://user-images.githubusercontent.com/73465311/193872128-17b31ea5-dc2f-4b23-a675-5cf7e4830e41.mov
|
1.0
|
larger hitbox for heating lines - it is hard to click on the lines in the new buildings chart to switch the energy
https://user-images.githubusercontent.com/73465311/193872128-17b31ea5-dc2f-4b23-a675-5cf7e4830e41.mov
|
non_process
|
larger hitbox for heating lines it is hard to click on the lines in the new buildings chart to switch the energy
| 0
|
412,847
| 12,056,975,328
|
IssuesEvent
|
2020-04-15 15:11:15
|
georchestra/mapstore2-georchestra
|
https://api.github.com/repos/georchestra/mapstore2-georchestra
|
closed
|
Extension Manager - Extension upload UI
|
Accepted Priority: Medium
|
The MapStore UI needs to provide an utility that allows the administrator the possibility to upload an extension package.
- A simple UI must be provided with the possibility to select a local archive (the extension package) and provide feedbacks during the upload process: a spinner inform the user that the upload is in progress. Evaluate the possibility to have a progress bar to check the state of the upload process
- Only one extension package at time can be uploaded
|
1.0
|
Extension Manager - Extension upload UI - The MapStore UI needs to provide an utility that allows the administrator the possibility to upload an extension package.
- A simple UI must be provided with the possibility to select a local archive (the extension package) and provide feedbacks during the upload process: a spinner inform the user that the upload is in progress. Evaluate the possibility to have a progress bar to check the state of the upload process
- Only one extension package at time can be uploaded
|
non_process
|
extension manager extension upload ui the mapstore ui needs to provide an utility that allows the administrator the possibility to upload an extension package a simple ui must be provided with the possibility to select a local archive the extension package and provide feedbacks during the upload process a spinner inform the user that the upload is in progress evaluate the possibility to have a progress bar to check the state of the upload process only one extension package at time can be uploaded
| 0
|
9,605
| 12,545,285,138
|
IssuesEvent
|
2020-06-05 18:40:21
|
google/ground-platform
|
https://api.github.com/repos/google/ground-platform
|
opened
|
[Process] Robust automated testing in place
|
type: process
|
- [x] Automated testing run via CI
- [ ] Automated UI testing run via CI
- [ ] Acceptable unit test coverage
- [ ] Acceptable UI test coverage
|
1.0
|
[Process] Robust automated testing in place - - [x] Automated testing run via CI
- [ ] Automated UI testing run via CI
- [ ] Acceptable unit test coverage
- [ ] Acceptable UI test coverage
|
process
|
robust automated testing in place automated testing run via ci automated ui testing run via ci acceptable unit test coverage acceptable ui test coverage
| 1
|
4,005
| 6,829,794,277
|
IssuesEvent
|
2017-11-09 02:25:58
|
pearson-ux/pearson-glp-pl
|
https://api.github.com/repos/pearson-ux/pearson-glp-pl
|
opened
|
Colors in Firefox
|
Browser Compatibility - Firefox Minor
|
So the colors in Firefox don't exactly match the colors in Chrome. It's not a big deal but I would check "$secondary-three" because it looks grayer than the "digital ice blue," and main turquoise and brackish turquoise look a little too close (maybe this needs a tweak the contrast)...
CHROME
$primary 137a9a digital pearson blue
$primary-two 095b6f ink blue
$primary-three ffffff white
$primary-four ff0000 white gray (lightest gray)
$secondary feb736 sunshine yellow (yellow)
$secondary-two fd9930 sunflower yellow (yellow-orange)
$secondary-three d6ebe8 digital ice blue (light-mint-green)
$secondary-four 26a5a3 digital main turquoise (SPELLED MAIN incorrectly)
$secondary-five 229598 brackish turquoise
$neutral c7c7c7 concrete (lighter gray)
$neutral-two 252525 charcoal (almost charcoal-black)
$neutral-three 6a7070 medium gray (md gray with hint of green)
$neutral-four d9d9d9 alto (lighter than light gray)
$neutral-five e9e9e9 moonlight (lighter gray than neutral-four)
$condition-one d80a29 strawberry red (red)
$condition-two 11813c digital grass green (green)
$condition-three d71474 hot-pink
[$condition-four 26a5a3 brighter teal [later on missing]]
--
Firefox
$primary 0b73a1 digital pearson blue***
$primary-two 035570 ink blue***
$primary-three ffffff white
$primary-four f9f8f9 white gray (lightest gray)***
$secondary ffb638 sunshine yellow (yellow)***
$secondary-two ff9d2f sunflower yellow (yellow-orange)***
$secondary-three daeae8 digital ice blue (light-mint-green)***
$secondary-four 1e99a7 digital main turquoise (SPELLED MAIN incorrectly)
$secondary-five 1c8b9e brackish turquoise
$neutral ccc9ca concrete (lighter gray)***
$neutral-two 292627 almost charcoal-black***
$neutral-three 6a7070 medium gray with hint of green***
$neutral-four d8dddd lighter than light gray***
$neutral-five eaebec lighter gray than neutral-four***
$condition-one eb1d2a strawberry red (red)***
$condition-two 086f35 digital grass green (green)***
$condition-three e92676 hot-pink***
[$condition-four THIS IS MISSING brighter teal [later on missing]]] check again
--
Chrome palette
<img width="1172" alt="colors_chrome_02" src="https://user-images.githubusercontent.com/9653426/32585403-26205c34-c4b2-11e7-8965-d3f5e4004632.png">
Firefox palette
<img width="1134" alt="colors_firefox56" src="https://user-images.githubusercontent.com/9653426/32585416-348e61da-c4b2-11e7-9b4e-166bb46938d9.png">
|
True
|
Colors in Firefox - So the colors in Firefox don't exactly match the colors in Chrome. It's not a big deal but I would check "$secondary-three" because it looks grayer than the "digital ice blue," and main turquoise and brackish turquoise look a little too close (maybe this needs a tweak the contrast)...
CHROME
$primary 137a9a digital pearson blue
$primary-two 095b6f ink blue
$primary-three ffffff white
$primary-four ff0000 white gray (lightest gray)
$secondary feb736 sunshine yellow (yellow)
$secondary-two fd9930 sunflower yellow (yellow-orange)
$secondary-three d6ebe8 digital ice blue (light-mint-green)
$secondary-four 26a5a3 digital main turquoise (SPELLED MAIN incorrectly)
$secondary-five 229598 brackish turquoise
$neutral c7c7c7 concrete (lighter gray)
$neutral-two 252525 charcoal (almost charcoal-black)
$neutral-three 6a7070 medium gray (md gray with hint of green)
$neutral-four d9d9d9 alto (lighter than light gray)
$neutral-five e9e9e9 moonlight (lighter gray than neutral-four)
$condition-one d80a29 strawberry red (red)
$condition-two 11813c digital grass green (green)
$condition-three d71474 hot-pink
[$condition-four 26a5a3 brighter teal [later on missing]]
--
Firefox
$primary 0b73a1 digital pearson blue***
$primary-two 035570 ink blue***
$primary-three ffffff white
$primary-four f9f8f9 white gray (lightest gray)***
$secondary ffb638 sunshine yellow (yellow)***
$secondary-two ff9d2f sunflower yellow (yellow-orange)***
$secondary-three daeae8 digital ice blue (light-mint-green)***
$secondary-four 1e99a7 digital main turquoise (SPELLED MAIN incorrectly)
$secondary-five 1c8b9e brackish turquoise
$neutral ccc9ca concrete (lighter gray)***
$neutral-two 292627 almost charcoal-black***
$neutral-three 6a7070 medium gray with hint of green***
$neutral-four d8dddd lighter than light gray***
$neutral-five eaebec lighter gray than neutral-four***
$condition-one eb1d2a strawberry red (red)***
$condition-two 086f35 digital grass green (green)***
$condition-three e92676 hot-pink***
[$condition-four THIS IS MISSING brighter teal [later on missing]]] check again
--
Chrome palette
<img width="1172" alt="colors_chrome_02" src="https://user-images.githubusercontent.com/9653426/32585403-26205c34-c4b2-11e7-8965-d3f5e4004632.png">
Firefox palette
<img width="1134" alt="colors_firefox56" src="https://user-images.githubusercontent.com/9653426/32585416-348e61da-c4b2-11e7-9b4e-166bb46938d9.png">
|
non_process
|
colors in firefox so the colors in firefox don t exactly match the colors in chrome it s not a big deal but i would check secondary three because it looks grayer than the digital ice blue and main turquoise and brackish turquoise look a little too close maybe this needs a tweak the contrast chrome primary digital pearson blue primary two ink blue primary three ffffff white primary four white gray lightest gray secondary sunshine yellow yellow secondary two sunflower yellow yellow orange secondary three digital ice blue light mint green secondary four digital main turquoise spelled main incorrectly secondary five brackish turquoise neutral concrete lighter gray neutral two charcoal almost charcoal black neutral three medium gray md gray with hint of green neutral four alto lighter than light gray neutral five moonlight lighter gray than neutral four condition one strawberry red red condition two digital grass green green condition three hot pink firefox primary digital pearson blue primary two ink blue primary three ffffff white primary four white gray lightest gray secondary sunshine yellow yellow secondary two sunflower yellow yellow orange secondary three digital ice blue light mint green secondary four digital main turquoise spelled main incorrectly secondary five brackish turquoise neutral concrete lighter gray neutral two almost charcoal black neutral three medium gray with hint of green neutral four lighter than light gray neutral five eaebec lighter gray than neutral four condition one strawberry red red condition two digital grass green green condition three hot pink check again chrome palette img width alt colors chrome src firefox palette img width alt colors src
| 0
|
228,314
| 18,170,148,872
|
IssuesEvent
|
2021-09-27 18:59:37
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
opened
|
Test: Proposed `keepScrollPosition: boolean` in quickpick API
|
testplan-item
|
Refs: #132068
- [ ] anyOS
- [ ] anyOS
Complexity: 3
---
This iteration we added a new property on the `QuickPick` object that allows you to control whether or not the scroll position (`cursorTop`) in the quickpick moves back to the top of the list.
* Create a new extension
* Allow it to use the proposed API: `npx vscode-dts dev`
* using the `QuickPick` object you get back from the `window.createQuickPick()` API, play with the new `keepScrollPosition: boolean`
Usecases:
* Implementing a "remove this item from the list" [using the also proposed `QuickPickItemButton`s](https://github.com/microsoft/vscode/pull/130519) (think, Ctrl/Cmd + P `x` QuickPickItemButton)
* Implementing a "toggle this item in someway" [using the also proposed `QuickPickItemButton`s](https://github.com/microsoft/vscode/pull/130519) (think, "Insert Snippet" command)
* Asynchronously loading items in the quickpick (like `setInterval` adding an item to the list by reassigning the `.items` property... scroll shouldn't jump to the top)
|
1.0
|
Test: Proposed `keepScrollPosition: boolean` in quickpick API - Refs: #132068
- [ ] anyOS
- [ ] anyOS
Complexity: 3
---
This iteration we added a new property on the `QuickPick` object that allows you to control whether or not the scroll position (`cursorTop`) in the quickpick moves back to the top of the list.
* Create a new extension
* Allow it to use the proposed API: `npx vscode-dts dev`
* using the `QuickPick` object you get back from the `window.createQuickPick()` API, play with the new `keepScrollPosition: boolean`
Usecases:
* Implementing a "remove this item from the list" [using the also proposed `QuickPickItemButton`s](https://github.com/microsoft/vscode/pull/130519) (think, Ctrl/Cmd + P `x` QuickPickItemButton)
* Implementing a "toggle this item in someway" [using the also proposed `QuickPickItemButton`s](https://github.com/microsoft/vscode/pull/130519) (think, "Insert Snippet" command)
* Asynchronously loading items in the quickpick (like `setInterval` adding an item to the list by reassigning the `.items` property... scroll shouldn't jump to the top)
|
non_process
|
test proposed keepscrollposition boolean in quickpick api refs anyos anyos complexity this iteration we added a new property on the quickpick object that allows you to control whether or not the scroll position cursortop in the quickpick moves back to the top of the list create a new extension allow it to use the proposed api npx vscode dts dev using the quickpick object you get back from the window createquickpick api play with the new keepscrollposition boolean usecases implementing a remove this item from the list think ctrl cmd p x quickpickitembutton implementing a toggle this item in someway think insert snippet command asynchronously loading items in the quickpick like setinterval adding an item to the list by reassigning the items property scroll shouldn t jump to the top
| 0
|
5,665
| 8,550,725,432
|
IssuesEvent
|
2018-11-07 16:13:08
|
Jeffail/benthos
|
https://api.github.com/repos/Jeffail/benthos
|
opened
|
Add regex operator to the text processor
|
enhancement good first issue help wanted processors
|
It would be nice to have a text processor operator for simple execution of regular expressions, we currently already have a replace with regexp operator: https://github.com/Jeffail/benthos/tree/master/docs/processors#replace_regexp, but it would also be good to have a simpler operator to simply return the result of applying the expression.
I'm labeling this as a good first issue as it should be a simple matter of mostly copying the existing function from here: https://github.com/Jeffail/benthos/blob/master/lib/processor/text.go#L170
|
1.0
|
Add regex operator to the text processor - It would be nice to have a text processor operator for simple execution of regular expressions, we currently already have a replace with regexp operator: https://github.com/Jeffail/benthos/tree/master/docs/processors#replace_regexp, but it would also be good to have a simpler operator to simply return the result of applying the expression.
I'm labeling this as a good first issue as it should be a simple matter of mostly copying the existing function from here: https://github.com/Jeffail/benthos/blob/master/lib/processor/text.go#L170
|
process
|
add regex operator to the text processor it would be nice to have a text processor operator for simple execution of regular expressions we currently already have a replace with regexp operator but it would also be good to have a simpler operator to simply return the result of applying the expression i m labeling this as a good first issue as it should be a simple matter of mostly copying the existing function from here
| 1
|
18,412
| 24,544,331,780
|
IssuesEvent
|
2022-10-12 07:35:36
|
kitspace/kitspace-v2
|
https://api.github.com/repos/kitspace/kitspace-v2
|
opened
|
Move reamde e2e tests to the processor
|
enhancement e2e processor
|
I think all the tests for the readme can be done in the processor tests no need for doing it in e2e.
|
1.0
|
Move reamde e2e tests to the processor - I think all the tests for the readme can be done in the processor tests no need for doing it in e2e.
|
process
|
move reamde tests to the processor i think all the tests for the readme can be done in the processor tests no need for doing it in
| 1
|
10,937
| 13,750,982,948
|
IssuesEvent
|
2020-10-06 12:48:05
|
plazi/community
|
https://api.github.com/repos/plazi/community
|
closed
|
Can this paper be extracted?
|
process request
|
This looks like a book chapter and we don't know if it has DOI. Can this paper be extracted?
https://drive.google.com/file/d/1UInNhV_JMym5kjkhUvUO8yFyyCmUy1jM/view?usp=sharing
|
1.0
|
Can this paper be extracted? - This looks like a book chapter and we don't know if it has DOI. Can this paper be extracted?
https://drive.google.com/file/d/1UInNhV_JMym5kjkhUvUO8yFyyCmUy1jM/view?usp=sharing
|
process
|
can this paper be extracted this looks like a book chapter and we don t know if it has doi can this paper be extracted
| 1
|
55,571
| 23,505,607,902
|
IssuesEvent
|
2022-08-18 12:22:51
|
hashicorp/terraform-provider-aws
|
https://api.github.com/repos/hashicorp/terraform-provider-aws
|
closed
|
Cannot create tag policy in AWS organisation
|
bug service/organizations
|
I was trying to create a tag policy in aws organisations using the below configuration and getting the following error.
Appreciate if anyone can help me out to know what's wrong with this configuration.
Thank you!
❯ terraform version
Terraform v1.1.5
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v4.26.0
**configuration:**
```hcl
resource "aws_organizations_policy" "root-tag-policy" {
name = "RootTagPolicy"
type = "TAG_POLICY"
content = <<CONTENT
{
"tags": {
"abc": {
"tag_key": {
"@@assign": "abc",
"@@operators_allowed_for_child_policies": [ "@@none" ]
},
"tag_value": { "@@assign": "abc" }
}
}
}
CONTENT
}
```
Those are not my actual tag key and value, I've just added here for syntax reference.
logs:
```shell
aws_organizations_policy.root-tag-policy: Creating...
2022-08-16T20:55:29.875+0530 [INFO] Starting apply for aws_organizations_policy.root-tag-policy
2022-08-16T20:55:29.875+0530 [DEBUG] aws_organizations_policy.root-tag-policy: applying the planned Create change
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: [DEBUG] setting computed for "tags_all" from ComputedKeys
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: [DEBUG] Creating Organizations Policy (RootTagPolicy): {
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: Content: "",
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: Description: "",
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: Name: "RootTagPolicy",
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: Tags: [],
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: Type: "TAG_POLICY"
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: }
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: [DEBUG] Waiting for state to become: [success]
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: [DEBUG] [aws-sdk-go] DEBUG: Validate Request organizations/CreatePolicy failed, not retrying, error InvalidParameter: 1 validation error(s) found.
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: - minimum field size of 1, CreatePolicyInput.Content.
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: [DEBUG] [aws-sdk-go] DEBUG: Build Request organizations/CreatePolicy failed, not retrying, error InvalidParameter: 1 validation error(s) found.
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: - minimum field size of 1, CreatePolicyInput.Content.
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: [DEBUG] [aws-sdk-go] DEBUG: Sign Request organizations/CreatePolicy failed, not retrying, error InvalidParameter: 1 validation error(s) found.
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: - minimum field size of 1, CreatePolicyInput.Content.
2022-08-16T20:55:29.876+0530 [ERROR] provider.terraform-provider-aws_v4.26.0_x5: Response contains error diagnostic: diagnostic_detail= diagnostic_severity=ERROR tf_req_id=d2b0a165-5dcb-a5b3-fe38-b9d9d32c2b93 tf_rpc=ApplyResourceChange @caller=github.com/hashicorp/terraform-plugin-go@v0.13.0/tfprotov5/internal/diag/diagnostics.go:56 @module=sdk.proto diagnostic_summary="error creating Organizations Policy (RootTagPolicy): InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, CreatePolicyInput.Content.
" tf_proto_version=5.3 tf_provider_addr=registry.terraform.io/hashicorp/aws tf_resource_type=aws_organizations_policy timestamp=2022-08-16T20:55:29.876+0530
2022-08-16T20:55:29.897+0530 [ERROR] vertex "aws_organizations_policy.root-tag-policy" error: error creating Organizations Policy (RootTagPolicy): InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, CreatePolicyInput.Content.
```
|
1.0
|
Cannot create tag policy in AWS organisation - I was trying to create a tag policy in aws organisations using the below configuration and getting the following error.
Appreciate if anyone can help me out to know what's wrong with this configuration.
Thank you!
❯ terraform version
Terraform v1.1.5
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v4.26.0
**configuration:**
```hcl
resource "aws_organizations_policy" "root-tag-policy" {
name = "RootTagPolicy"
type = "TAG_POLICY"
content = <<CONTENT
{
"tags": {
"abc": {
"tag_key": {
"@@assign": "abc",
"@@operators_allowed_for_child_policies": [ "@@none" ]
},
"tag_value": { "@@assign": "abc" }
}
}
}
CONTENT
}
```
Those are not my actual tag key and value, I've just added here for syntax reference.
logs:
```shell
aws_organizations_policy.root-tag-policy: Creating...
2022-08-16T20:55:29.875+0530 [INFO] Starting apply for aws_organizations_policy.root-tag-policy
2022-08-16T20:55:29.875+0530 [DEBUG] aws_organizations_policy.root-tag-policy: applying the planned Create change
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: [DEBUG] setting computed for "tags_all" from ComputedKeys
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: [DEBUG] Creating Organizations Policy (RootTagPolicy): {
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: Content: "",
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: Description: "",
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: Name: "RootTagPolicy",
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: Tags: [],
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: Type: "TAG_POLICY"
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: }
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: [DEBUG] Waiting for state to become: [success]
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: [DEBUG] [aws-sdk-go] DEBUG: Validate Request organizations/CreatePolicy failed, not retrying, error InvalidParameter: 1 validation error(s) found.
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: - minimum field size of 1, CreatePolicyInput.Content.
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: [DEBUG] [aws-sdk-go] DEBUG: Build Request organizations/CreatePolicy failed, not retrying, error InvalidParameter: 1 validation error(s) found.
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: - minimum field size of 1, CreatePolicyInput.Content.
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: [DEBUG] [aws-sdk-go] DEBUG: Sign Request organizations/CreatePolicy failed, not retrying, error InvalidParameter: 1 validation error(s) found.
2022-08-16T20:55:29.876+0530 [DEBUG] provider.terraform-provider-aws_v4.26.0_x5: - minimum field size of 1, CreatePolicyInput.Content.
2022-08-16T20:55:29.876+0530 [ERROR] provider.terraform-provider-aws_v4.26.0_x5: Response contains error diagnostic: diagnostic_detail= diagnostic_severity=ERROR tf_req_id=d2b0a165-5dcb-a5b3-fe38-b9d9d32c2b93 tf_rpc=ApplyResourceChange @caller=github.com/hashicorp/terraform-plugin-go@v0.13.0/tfprotov5/internal/diag/diagnostics.go:56 @module=sdk.proto diagnostic_summary="error creating Organizations Policy (RootTagPolicy): InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, CreatePolicyInput.Content.
" tf_proto_version=5.3 tf_provider_addr=registry.terraform.io/hashicorp/aws tf_resource_type=aws_organizations_policy timestamp=2022-08-16T20:55:29.876+0530
2022-08-16T20:55:29.897+0530 [ERROR] vertex "aws_organizations_policy.root-tag-policy" error: error creating Organizations Policy (RootTagPolicy): InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, CreatePolicyInput.Content.
```
|
non_process
|
cannot create tag policy in aws organisation i was trying to create a tag policy in aws organisations using the below configuration and getting the following error appreciate if anyone can help me out to know what s wrong with this configuration thank you ❯ terraform version terraform on darwin provider registry terraform io hashicorp aws configuration hcl resource aws organizations policy root tag policy name roottagpolicy type tag policy content content tags abc tag key assign abc operators allowed for child policies tag value assign abc content those are not my actual tag key and value i ve just added here for syntax reference logs shell aws organizations policy root tag policy creating starting apply for aws organizations policy root tag policy aws organizations policy root tag policy applying the planned create change provider terraform provider aws setting computed for tags all from computedkeys provider terraform provider aws creating organizations policy roottagpolicy provider terraform provider aws content provider terraform provider aws description provider terraform provider aws name roottagpolicy provider terraform provider aws tags provider terraform provider aws type tag policy provider terraform provider aws provider terraform provider aws waiting for state to become provider terraform provider aws debug validate request organizations createpolicy failed not retrying error invalidparameter validation error s found provider terraform provider aws minimum field size of createpolicyinput content provider terraform provider aws debug build request organizations createpolicy failed not retrying error invalidparameter validation error s found provider terraform provider aws minimum field size of createpolicyinput content provider terraform provider aws debug sign request organizations createpolicy failed not retrying error invalidparameter validation error s found provider terraform provider aws minimum field size of createpolicyinput content provider terraform provider aws response contains error diagnostic diagnostic detail diagnostic severity error tf req id tf rpc applyresourcechange caller github com hashicorp terraform plugin go internal diag diagnostics go module sdk proto diagnostic summary error creating organizations policy roottagpolicy invalidparameter validation error s found minimum field size of createpolicyinput content tf proto version tf provider addr registry terraform io hashicorp aws tf resource type aws organizations policy timestamp vertex aws organizations policy root tag policy error error creating organizations policy roottagpolicy invalidparameter validation error s found minimum field size of createpolicyinput content
| 0
|
53,862
| 13,218,733,285
|
IssuesEvent
|
2020-08-17 09:15:07
|
pybind/pybind11
|
https://api.github.com/repos/pybind/pybind11
|
closed
|
pybind11_add_module unexpected error after PR #2368
|
build system issue
|
## Issue description
After integrating latest PR #2368 on our library, building started producing errors:
```
You have called ADD_LIBRARY for library '' without any source files. ...
```
After looking into the documentation, I could not find the issue, but looking into the PR, I saw that when using `pybind11_add_module`, it explicitly uses the keyword `MODULE`, and from what I understood in the documentation
> MODULE or SHARED may be given to specify the type of library. If no type is given, MODULE is used by default which ensures the creation of a Python-exclusive module.
I don't know if it isn't any more the case. and now the `MODULE` keyword **must** be used?
Because when I use the keyword `MODULE` everything compiles just fine.
I tested `pybind11` before the last merge of #2368 and without using the keyword `MODULE` it compiles just fine.
## Reproducible example code
Maybe I'm crazy but the simple example from the [documentation](https://pybind11.readthedocs.io/en/latest/compiling.html#building-with-cmake) does not work on my machine:
```cmake
cmake_minimum_required(VERSION 3.7)
project(example)
add_subdirectory(pybind11)
pybind11_add_module(example my_file.cpp)
```
Thank you for the great library,
Julián
|
1.0
|
pybind11_add_module unexpected error after PR #2368 - ## Issue description
After integrating latest PR #2368 on our library, building started producing errors:
```
You have called ADD_LIBRARY for library '' without any source files. ...
```
After looking into the documentation, I could not find the issue, but looking into the PR, I saw that when using `pybind11_add_module`, it explicitly uses the keyword `MODULE`, and from what I understood in the documentation
> MODULE or SHARED may be given to specify the type of library. If no type is given, MODULE is used by default which ensures the creation of a Python-exclusive module.
I don't know if it isn't any more the case. and now the `MODULE` keyword **must** be used?
Because when I use the keyword `MODULE` everything compiles just fine.
I tested `pybind11` before the last merge of #2368 and without using the keyword `MODULE` it compiles just fine.
## Reproducible example code
Maybe I'm crazy but the simple example from the [documentation](https://pybind11.readthedocs.io/en/latest/compiling.html#building-with-cmake) does not work on my machine:
```cmake
cmake_minimum_required(VERSION 3.7)
project(example)
add_subdirectory(pybind11)
pybind11_add_module(example my_file.cpp)
```
Thank you for the great library,
Julián
|
non_process
|
add module unexpected error after pr issue description after integrating latest pr on our library building started producing errors you have called add library for library without any source files after looking into the documentation i could not find the issue but looking into the pr i saw that when using add module it explicitly uses the keyword module and from what i understood in the documentation module or shared may be given to specify the type of library if no type is given module is used by default which ensures the creation of a python exclusive module i don t know if it isn t any more the case and now the module keyword must be used because when i use the keyword module everything compiles just fine i tested before the last merge of and without using the keyword module it compiles just fine reproducible example code maybe i m crazy but the simple example from the does not work on my machine cmake cmake minimum required version project example add subdirectory add module example my file cpp thank you for the great library julián
| 0
|
9,668
| 12,675,657,850
|
IssuesEvent
|
2020-06-19 02:27:17
|
unicode-org/icu4x
|
https://api.github.com/repos/unicode-org/icu4x
|
closed
|
Create OWNERS files for each component
|
C-process T-task
|
Follow-up from #72: I think it would be useful to record in each component tree who the "owners" are of that component. The owners would be the people responsible for reviewing PRs to that component. Google's Bazel build system does this using a file called OWNERS with a line-separated list of usernames.
|
1.0
|
Create OWNERS files for each component - Follow-up from #72: I think it would be useful to record in each component tree who the "owners" are of that component. The owners would be the people responsible for reviewing PRs to that component. Google's Bazel build system does this using a file called OWNERS with a line-separated list of usernames.
|
process
|
create owners files for each component follow up from i think it would be useful to record in each component tree who the owners are of that component the owners would be the people responsible for reviewing prs to that component google s bazel build system does this using a file called owners with a line separated list of usernames
| 1
|
485,238
| 13,963,050,331
|
IssuesEvent
|
2020-10-25 12:29:43
|
noter-org/noter-client
|
https://api.github.com/repos/noter-org/noter-client
|
opened
|
Tag's note listing is unauthed when jumping to that tag's page via url
|
priority:medium
|
Should wait for URL prob?
|
1.0
|
Tag's note listing is unauthed when jumping to that tag's page via url - Should wait for URL prob?
|
non_process
|
tag s note listing is unauthed when jumping to that tag s page via url should wait for url prob
| 0
|
168,084
| 6,361,529,998
|
IssuesEvent
|
2017-07-31 13:08:07
|
zero-os/0-orchestrator
|
https://api.github.com/repos/zero-os/0-orchestrator
|
closed
|
Healthcheck: Network stability
|
priority_major state_verification type_feature
|
Implement in node service monitor action
Tests network between nodes
Make sure all types of network can reach eachother
Ping nodes for 10 times
When less then 90% produce a warning
When less then 70% procede an error
When timings are more then 10ms produce a warning
When timings are more then 200ms produce am error
Futhermore it makes sure that all nics in network have same MTU
Otherwise produces an error message
See https://github.com/0-complexity/selfhealing/blob/master/jumpscripts/healthchecks/networkstability.py
|
1.0
|
Healthcheck: Network stability - Implement in node service monitor action
Tests network between nodes
Make sure all types of network can reach eachother
Ping nodes for 10 times
When less then 90% produce a warning
When less then 70% procede an error
When timings are more then 10ms produce a warning
When timings are more then 200ms produce am error
Futhermore it makes sure that all nics in network have same MTU
Otherwise produces an error message
See https://github.com/0-complexity/selfhealing/blob/master/jumpscripts/healthchecks/networkstability.py
|
non_process
|
healthcheck network stability implement in node service monitor action tests network between nodes make sure all types of network can reach eachother ping nodes for times when less then produce a warning when less then procede an error when timings are more then produce a warning when timings are more then produce am error futhermore it makes sure that all nics in network have same mtu otherwise produces an error message see
| 0
|
97,746
| 16,242,641,428
|
IssuesEvent
|
2021-05-07 11:25:25
|
TIBCOSoftware/TCSTK-Angular
|
https://api.github.com/repos/TIBCOSoftware/TCSTK-Angular
|
closed
|
CVE-2021-27515 (Medium) detected in url-parse-1.4.7.tgz
|
security vulnerability
|
## CVE-2021-27515 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: TCSTK-Angular/package.json</p>
<p>Path to vulnerable library: TCSTK-Angular/node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.1100.7.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- sockjs-client-1.4.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TIBCOSoftware/TCSTK-Angular/commit/d1b6477f436bdf55dbed46ee5ed582741e66dbe7">d1b6477f436bdf55dbed46ee5ed582741e66dbe7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse before 1.5.0 mishandles certain uses of backslash such as http:\/ and interprets the URI as a relative path.
<p>Publish Date: 2021-02-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27515>CVE-2021-27515</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27515">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27515</a></p>
<p>Release Date: 2021-02-22</p>
<p>Fix Resolution: 1.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"url-parse","packageVersion":"1.4.7","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@angular-devkit/build-angular:0.1100.7;webpack-dev-server:3.11.0;sockjs-client:1.4.0;url-parse:1.4.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.5.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-27515","vulnerabilityDetails":"url-parse before 1.5.0 mishandles certain uses of backslash such as http:\\/ and interprets the URI as a relative path.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27515","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-27515 (Medium) detected in url-parse-1.4.7.tgz - ## CVE-2021-27515 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: TCSTK-Angular/package.json</p>
<p>Path to vulnerable library: TCSTK-Angular/node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.1100.7.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- sockjs-client-1.4.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TIBCOSoftware/TCSTK-Angular/commit/d1b6477f436bdf55dbed46ee5ed582741e66dbe7">d1b6477f436bdf55dbed46ee5ed582741e66dbe7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse before 1.5.0 mishandles certain uses of backslash such as http:\/ and interprets the URI as a relative path.
<p>Publish Date: 2021-02-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27515>CVE-2021-27515</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27515">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27515</a></p>
<p>Release Date: 2021-02-22</p>
<p>Fix Resolution: 1.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"url-parse","packageVersion":"1.4.7","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@angular-devkit/build-angular:0.1100.7;webpack-dev-server:3.11.0;sockjs-client:1.4.0;url-parse:1.4.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.5.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-27515","vulnerabilityDetails":"url-parse before 1.5.0 mishandles certain uses of backslash such as http:\\/ and interprets the URI as a relative path.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27515","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in url parse tgz cve medium severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file tcstk angular package json path to vulnerable library tcstk angular node modules url parse package json dependency hierarchy build angular tgz root library webpack dev server tgz sockjs client tgz x url parse tgz vulnerable library found in head commit a href found in base branch master vulnerability details url parse before mishandles certain uses of backslash such as http and interprets the uri as a relative path publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree angular devkit build angular webpack dev server sockjs client url parse isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails url parse before mishandles certain uses of backslash such as http and interprets the uri as a relative path vulnerabilityurl
| 0
|
145,915
| 11,714,121,334
|
IssuesEvent
|
2020-03-09 11:40:59
|
kyma-project/kyma
|
https://api.github.com/repos/kyma-project/kyma
|
opened
|
Upgrade tests should cover 'external-end-to-end' scenario
|
area/eventing test-missing
|
in continuation to https://github.com/kyma-project/kyma/issues/7277
After Completing the Upgrade process, we should run the core-test-external-solution scenario test to check the sanity of the upgraded system.
**AC**
- Prepare resources before upgrade, for example pairing an Application, Creating Service Instances in a namespace followed by Creating Knative-Triggers pointing to a subscriber.
- After Upgrading Kyma, Fire an Event to the above Application's Eventing Endpoints (/v1/events and /events)
- Successful Validation of the Event Delivery from the subscriber.
- This Upgrade test should be part of all the upgrade tests in the prow jobs (eg: GKE Upgrade)
|
1.0
|
Upgrade tests should cover 'external-end-to-end' scenario - in continuation to https://github.com/kyma-project/kyma/issues/7277
After Completing the Upgrade process, we should run the core-test-external-solution scenario test to check the sanity of the upgraded system.
**AC**
- Prepare resources before upgrade, for example pairing an Application, Creating Service Instances in a namespace followed by Creating Knative-Triggers pointing to a subscriber.
- After Upgrading Kyma, Fire an Event to the above Application's Eventing Endpoints (/v1/events and /events)
- Successful Validation of the Event Delivery from the subscriber.
- This Upgrade test should be part of all the upgrade tests in the prow jobs (eg: GKE Upgrade)
|
non_process
|
upgrade tests should cover external end to end scenario in continuation to after completing the upgrade process we should run the core test external solution scenario test to check the sanity of the upgraded system ac prepare resources before upgrade for example pairing an application creating service instances in a namespace followed by creating knative triggers pointing to a subscriber after upgrading kyma fire an event to the above application s eventing endpoints events and events successful validation of the event delivery from the subscriber this upgrade test should be part of all the upgrade tests in the prow jobs eg gke upgrade
| 0
|
8,474
| 11,642,978,308
|
IssuesEvent
|
2020-02-29 10:36:36
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
opened
|
UCP: Migrate `GreatestInt` (Vectorize) from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the function GreatestInt from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Functions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate `GreatestInt` (Vectorize) from TiDB -
## Description
Port the function GreatestInt from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Functions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate greatestint vectorize from tidb description port the function greatestint from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials functions ported from tidb
| 1
|
808,813
| 30,112,155,576
|
IssuesEvent
|
2023-06-30 08:38:07
|
fossasia/open-event-frontend
|
https://api.github.com/repos/fossasia/open-event-frontend
|
closed
|
Languages missing from language dropdown
|
bug Priority: High
|
Some languages are already translated mostly, but do not appear in the dropdown menu for languages. Please add the following:
* Arabic - عربي
* Norwegian - Bokmål
* Swedish - Svenska

|
1.0
|
Languages missing from language dropdown - Some languages are already translated mostly, but do not appear in the dropdown menu for languages. Please add the following:
* Arabic - عربي
* Norwegian - Bokmål
* Swedish - Svenska

|
non_process
|
languages missing from language dropdown some languages are already translated mostly but do not appear in the dropdown menu for languages please add the following arabic عربي norwegian bokmål swedish svenska
| 0
|
4,120
| 7,060,620,710
|
IssuesEvent
|
2018-01-05 09:37:15
|
our-city-app/oca-backend
|
https://api.github.com/repos/our-city-app/oca-backend
|
closed
|
Registreren met welkomst bericht in app op tel
|
priority_minor process_duplicate state_verification type_feature
|
Deze knop werkt maar een keer, dus als iemand daar op klikt om zich te registreren als handelaar maar er op dat moment niet mee door gaat kan hij deze knop op een later tijdstip niet meer gebruiken.
|
1.0
|
Registreren met welkomst bericht in app op tel - Deze knop werkt maar een keer, dus als iemand daar op klikt om zich te registreren als handelaar maar er op dat moment niet mee door gaat kan hij deze knop op een later tijdstip niet meer gebruiken.
|
process
|
registreren met welkomst bericht in app op tel deze knop werkt maar een keer dus als iemand daar op klikt om zich te registreren als handelaar maar er op dat moment niet mee door gaat kan hij deze knop op een later tijdstip niet meer gebruiken
| 1
|
302,749
| 22,841,092,738
|
IssuesEvent
|
2022-07-12 21:59:55
|
instill-ai/vdp
|
https://api.github.com/repos/instill-ai/vdp
|
closed
|
[doc]: There has no make build command
|
documentation
|
## Issue
- According to the quick start of the repo: if you want to develop locally, you could do the following
```
$ git clone https://github.com/instill-ai/vdp.git && cd vdp
# Build instill/vdp:dev local development image
$ make build
# Launch all services.
$ make all
```
But actually, there has no make build command in the Makefile
https://github.com/instill-ai/vdp/blob/main/Makefile#L15
Only exist make dev and make all.
## Solution
Please update the readme quick start guideline.
|
1.0
|
[doc]: There has no make build command - ## Issue
- According to the quick start of the repo: if you want to develop locally, you could do the following
```
$ git clone https://github.com/instill-ai/vdp.git && cd vdp
# Build instill/vdp:dev local development image
$ make build
# Launch all services.
$ make all
```
But actually, there has no make build command in the Makefile
https://github.com/instill-ai/vdp/blob/main/Makefile#L15
Only exist make dev and make all.
## Solution
Please update the readme quick start guideline.
|
non_process
|
there has no make build command issue according to the quick start of the repo if you want to develop locally you could do the following git clone cd vdp build instill vdp dev local development image make build launch all services make all but actually there has no make build command in the makefile only exist make dev and make all solution please update the readme quick start guideline
| 0
|
24,222
| 12,056,079,850
|
IssuesEvent
|
2020-04-15 13:57:29
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
closed
|
Update DS Web Acceptance User Group
|
Need: 2-Should Have Product: AMANDA Project: ATD AMANDA Backlog Project: ROW Wishlist Provider: CTM Service: Apps Type: Enhancement Workgroup: ROW
|
Right notification for all DS folder updates are automatically sent to ABC Trans list, but they should be sent to ROW Intake Group.
Since these updates need to be manually identified and updated, it's easy for them to get overlooked for a while. We don't know who oversees the ABC Trans list, but could be annoying for them, too.
**The need to address this bug will increase if applications move fully online.**
*Migrated from [atd-amanda #202](https://github.com/cityofaustin/atd-amanda/issues/202)*
|
1.0
|
Update DS Web Acceptance User Group - Right notification for all DS folder updates are automatically sent to ABC Trans list, but they should be sent to ROW Intake Group.
Since these updates need to be manually identified and updated, it's easy for them to get overlooked for a while. We don't know who oversees the ABC Trans list, but could be annoying for them, too.
**The need to address this bug will increase if applications move fully online.**
*Migrated from [atd-amanda #202](https://github.com/cityofaustin/atd-amanda/issues/202)*
|
non_process
|
update ds web acceptance user group right notification for all ds folder updates are automatically sent to abc trans list but they should be sent to row intake group since these updates need to be manually identified and updated it s easy for them to get overlooked for a while we don t know who oversees the abc trans list but could be annoying for them too the need to address this bug will increase if applications move fully online migrated from
| 0
|
21,888
| 30,339,278,819
|
IssuesEvent
|
2023-07-11 11:41:30
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
6809 decompiler: overlapping input varnodes with LSLA/B output for indexed JMP
|
Type: Bug Feature: Processor/MC6800 Status: Internal
|
**Describe the bug**
Decompiling the function shown in the screenshot results in `Low-level Error: Overlapping input varnodes`. Changing the LSLA instruction to INCA fixes decompilation, as well as changing 6x09.sinc to
```
macro logicalShiftLeft(op)
{
$(C) = op >> 7;
--- op = op << 1;
+++ op = op + 1;
$(Z) = (op == 0);
$(N) = (op >> 7);
}
```


|
1.0
|
6809 decompiler: overlapping input varnodes with LSLA/B output for indexed JMP - **Describe the bug**
Decompiling the function shown in the screenshot results in `Low-level Error: Overlapping input varnodes`. Changing the LSLA instruction to INCA fixes decompilation, as well as changing 6x09.sinc to
```
macro logicalShiftLeft(op)
{
$(C) = op >> 7;
--- op = op << 1;
+++ op = op + 1;
$(Z) = (op == 0);
$(N) = (op >> 7);
}
```


|
process
|
decompiler overlapping input varnodes with lsla b output for indexed jmp describe the bug decompiling the function shown in the screenshot results in low level error overlapping input varnodes changing the lsla instruction to inca fixes decompilation as well as changing sinc to macro logicalshiftleft op c op op op op op z op n op
| 1
|
20,496
| 27,154,344,561
|
IssuesEvent
|
2023-02-17 06:01:28
|
GoogleContainerTools/kpt
|
https://api.github.com/repos/GoogleContainerTools/kpt
|
closed
|
Update kpt roadmaps
|
enhancement process p0 triaged area/site
|
I gutted the kpt roadmap: https://github.com/GoogleContainerTools/kpt/blob/main/docs/ROADMAP.md
We'll need to rebuild it once we get user feedback on package orchestration.
The porch roadmap needs to be prioritized:
https://github.com/GoogleContainerTools/kpt/blob/main/porch/docs/porch-roadmap.md
We will need to create a UI roadmap. For starters, there are quite a few features from the prototype UI that we don't have yet: https://www.youtube.com/watch?v=d_iV22_6nAM
The roadmap should be driven by UX, for specific scenarios/CUJs: CUJ UX drives UI drives porch drives kpt.
|
1.0
|
Update kpt roadmaps - I gutted the kpt roadmap: https://github.com/GoogleContainerTools/kpt/blob/main/docs/ROADMAP.md
We'll need to rebuild it once we get user feedback on package orchestration.
The porch roadmap needs to be prioritized:
https://github.com/GoogleContainerTools/kpt/blob/main/porch/docs/porch-roadmap.md
We will need to create a UI roadmap. For starters, there are quite a few features from the prototype UI that we don't have yet: https://www.youtube.com/watch?v=d_iV22_6nAM
The roadmap should be driven by UX, for specific scenarios/CUJs: CUJ UX drives UI drives porch drives kpt.
|
process
|
update kpt roadmaps i gutted the kpt roadmap we ll need to rebuild it once we get user feedback on package orchestration the porch roadmap needs to be prioritized we will need to create a ui roadmap for starters there are quite a few features from the prototype ui that we don t have yet the roadmap should be driven by ux for specific scenarios cujs cuj ux drives ui drives porch drives kpt
| 1
|
8,687
| 11,826,354,915
|
IssuesEvent
|
2020-03-21 17:27:55
|
Cacti/cacti
|
https://api.github.com/repos/Cacti/cacti
|
closed
|
Installing cacti 1.2.10 in debian and ubuntu stopped at 41%
|
bug confirmed installer process resolved
|
i am new to installing cacti, but i have issue when instaling it keep looping and stay 41 %

|
1.0
|
Installing cacti 1.2.10 in debian and ubuntu stopped at 41% - i am new to installing cacti, but i have issue when instaling it keep looping and stay 41 %

|
process
|
installing cacti in debian and ubuntu stopped at i am new to installing cacti but i have issue when instaling it keep looping and stay
| 1
|
24,722
| 12,145,610,261
|
IssuesEvent
|
2020-04-24 09:37:33
|
kyma-project/kyma
|
https://api.github.com/repos/kyma-project/kyma
|
opened
|
Helm-broker support for k8s 1.18
|
area/service-catalog
|
<!-- Thank you for your contribution. Before you submit the issue:
1. Search open and closed issues for duplicates.
2. Read the contributing guidelines.
-->
**Description**
Helm-broker seems to not work with k8s1.18.
When configuring an addon, the status gets green but the addons will not get listed in the service-catalog. Seeing the logs of the helm-broker controller indicates a problem with K.1.18
```
{"level":"warning","log":{"message":"Creation of namespaced-broker for namespace [mocks] results in error: [ServiceBroker.servicecatalog.k8s.io \"helm-broker\" is invalid: metadata.managedFields.fieldsType: Invalid value: \"\": must be `FieldsV1`]. AlreadyExist errors will be ignored.","service":"broker-facade","time":"2020-04-24T09:15:33.538Z"}}
```
<!-- Provide a clear and concise description of the feature. -->
Most probably the go client needs to be updated in the service-catalog, see also https://github.com/kubedb/issues/issues/739
**Reasons**
Always try to support latest version
<!-- Explain why we should add this feature. Provide use cases to illustrate its benefits. -->
**Attachments**
<!-- Attach any files, links, code samples, or screenshots that will convince us to your idea. -->
|
1.0
|
Helm-broker support for k8s 1.18 - <!-- Thank you for your contribution. Before you submit the issue:
1. Search open and closed issues for duplicates.
2. Read the contributing guidelines.
-->
**Description**
Helm-broker seems to not work with k8s1.18.
When configuring an addon, the status gets green but the addons will not get listed in the service-catalog. Seeing the logs of the helm-broker controller indicates a problem with K.1.18
```
{"level":"warning","log":{"message":"Creation of namespaced-broker for namespace [mocks] results in error: [ServiceBroker.servicecatalog.k8s.io \"helm-broker\" is invalid: metadata.managedFields.fieldsType: Invalid value: \"\": must be `FieldsV1`]. AlreadyExist errors will be ignored.","service":"broker-facade","time":"2020-04-24T09:15:33.538Z"}}
```
<!-- Provide a clear and concise description of the feature. -->
Most probably the go client needs to be updated in the service-catalog, see also https://github.com/kubedb/issues/issues/739
**Reasons**
Always try to support latest version
<!-- Explain why we should add this feature. Provide use cases to illustrate its benefits. -->
**Attachments**
<!-- Attach any files, links, code samples, or screenshots that will convince us to your idea. -->
|
non_process
|
helm broker support for thank you for your contribution before you submit the issue search open and closed issues for duplicates read the contributing guidelines description helm broker seems to not work with when configuring an addon the status gets green but the addons will not get listed in the service catalog seeing the logs of the helm broker controller indicates a problem with k level warning log message creation of namespaced broker for namespace results in error alreadyexist errors will be ignored service broker facade time most probably the go client needs to be updated in the service catalog see also reasons always try to support latest version attachments
| 0
|
11,316
| 14,136,114,451
|
IssuesEvent
|
2020-11-10 03:25:51
|
googleapis/ruby-spanner-activerecord
|
https://api.github.com/repos/googleapis/ruby-spanner-activerecord
|
opened
|
Create GitHub Actions config for running presubmit tests
|
priority: p2 type: process
|
Currently we don't have any meaningful presubmits that run when attempting to merge PRs. It would be nice if we could use GitHub Actions to run the unit and acceptance tests.
|
1.0
|
Create GitHub Actions config for running presubmit tests - Currently we don't have any meaningful presubmits that run when attempting to merge PRs. It would be nice if we could use GitHub Actions to run the unit and acceptance tests.
|
process
|
create github actions config for running presubmit tests currently we don t have any meaningful presubmits that run when attempting to merge prs it would be nice if we could use github actions to run the unit and acceptance tests
| 1
|
215,830
| 24,196,529,365
|
IssuesEvent
|
2022-09-24 01:13:13
|
AkshayMukkavilli/Tensorflow
|
https://api.github.com/repos/AkshayMukkavilli/Tensorflow
|
opened
|
CVE-2022-35988 (High) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl
|
security vulnerability
|
## CVE-2022-35988 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /Tensorflow/src/requirements.txt</p>
<p>Path to vulnerable library: /teSource-ArchiveExtractor_5ea86033-7612-4210-97f3-8edb65806ddf/20190525011619_2843/20190525011537_depth_0/2/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. When `tf.linalg.matrix_rank` receives an empty input `a`, the GPU kernel gives a `CHECK` fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit c55b476aa0e0bd4ee99d0f3ad18d9d706cd1260a. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-35988>CVE-2022-35988</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-9vqj-64pv-w55c">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-9vqj-64pv-w55c</a></p>
<p>Release Date: 2022-09-16</p>
<p>Fix Resolution: tensorflow - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-cpu - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-gpu - 2.7.2,2.8.1,2.9.1,2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-35988 (High) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2022-35988 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /Tensorflow/src/requirements.txt</p>
<p>Path to vulnerable library: /teSource-ArchiveExtractor_5ea86033-7612-4210-97f3-8edb65806ddf/20190525011619_2843/20190525011537_depth_0/2/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. When `tf.linalg.matrix_rank` receives an empty input `a`, the GPU kernel gives a `CHECK` fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit c55b476aa0e0bd4ee99d0f3ad18d9d706cd1260a. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-35988>CVE-2022-35988</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-9vqj-64pv-w55c">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-9vqj-64pv-w55c</a></p>
<p>Release Date: 2022-09-16</p>
<p>Fix Resolution: tensorflow - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-cpu - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-gpu - 2.7.2,2.8.1,2.9.1,2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in tensorflow whl cve high severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file tensorflow src requirements txt path to vulnerable library tesource archiveextractor depth tensorflow tensorflow data purelib tensorflow dependency hierarchy x tensorflow whl vulnerable library vulnerability details tensorflow is an open source platform for machine learning when tf linalg matrix rank receives an empty input a the gpu kernel gives a check fail that can be used to trigger a denial of service attack we have patched the issue in github commit the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range there are no known workarounds for this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with mend
| 0
|
16,277
| 20,884,553,920
|
IssuesEvent
|
2022-03-23 02:34:50
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add The Stapler
|
suggested title in process
|
Please add as much of the following info as you can:
Title: The Stapler
Type (film/tv show): Film
Film or show in which it appears: South Park (https://www.imdb.com/title/tt0705968/ Season 06 Episode 15)
Is the parent film/show streaming anywhere? Amazon Prime in the UK
About when in the parent film/show does it appear? 03 minutes 16 seconds
Actual footage of the film/show can be seen (yes/no)? Yes. https://www.youtube.com/watch?v=iS9H9WPPDS8
|
1.0
|
Add The Stapler - Please add as much of the following info as you can:
Title: The Stapler
Type (film/tv show): Film
Film or show in which it appears: South Park (https://www.imdb.com/title/tt0705968/ Season 06 Episode 15)
Is the parent film/show streaming anywhere? Amazon Prime in the UK
About when in the parent film/show does it appear? 03 minutes 16 seconds
Actual footage of the film/show can be seen (yes/no)? Yes. https://www.youtube.com/watch?v=iS9H9WPPDS8
|
process
|
add the stapler please add as much of the following info as you can title the stapler type film tv show film film or show in which it appears south park season episode is the parent film show streaming anywhere amazon prime in the uk about when in the parent film show does it appear minutes seconds actual footage of the film show can be seen yes no yes
| 1
|
16,631
| 21,704,579,141
|
IssuesEvent
|
2022-05-10 08:28:28
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
closed
|
Refactor ProtocolFactory to ensure that it uses a fixed seed.
|
kind/toil team/process-automation area/test area/maintainability
|
**Description**
Currently, every `ProtocolFactory` instance is created, by default, with a new seed. This means every factory produces different records. For development, it's important that our tests are reproducible. To that end, we should ensure all factories use the same seed by default.
For cases where you want different records, we should still be able to pass a seed ourselves.
|
1.0
|
Refactor ProtocolFactory to ensure that it uses a fixed seed. - **Description**
Currently, every `ProtocolFactory` instance is created, by default, with a new seed. This means every factory produces different records. For development, it's important that our tests are reproducible. To that end, we should ensure all factories use the same seed by default.
For cases where you want different records, we should still be able to pass a seed ourselves.
|
process
|
refactor protocolfactory to ensure that it uses a fixed seed description currently every protocolfactory instance is created by default with a new seed this means every factory produces different records for development it s important that our tests are reproducible to that end we should ensure all factories use the same seed by default for cases where you want different records we should still be able to pass a seed ourselves
| 1
|
15,623
| 19,770,011,358
|
IssuesEvent
|
2022-01-17 09:05:16
|
googleapis/ruby-cloud-env
|
https://api.github.com/repos/googleapis/ruby-cloud-env
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* must have required property 'release_level' in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* must have required property 'release_level' in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 must have required property release level in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
| 1
|
816,449
| 30,599,448,387
|
IssuesEvent
|
2023-07-22 07:05:08
|
wasmerio/wasmer
|
https://api.github.com/repos/wasmerio/wasmer
|
closed
|
Support for s390x/systemz for wasmer
|
ℹ️ help wanted ❓ question 🏚 stale priority-low
|
Looking for timeline to support systemz/s390x.
I'm trying to dig around this for couple of days.
I'm able to compile the 'build-capi' and 'build-wasmer' now.
1. Need your help in understanding what else is required to test the build.
2. And how do I integrate this with wasmvm.
Thanks.
|
1.0
|
Support for s390x/systemz for wasmer - Looking for timeline to support systemz/s390x.
I'm trying to dig around this for couple of days.
I'm able to compile the 'build-capi' and 'build-wasmer' now.
1. Need your help in understanding what else is required to test the build.
2. And how do I integrate this with wasmvm.
Thanks.
|
non_process
|
support for systemz for wasmer looking for timeline to support systemz i m trying to dig around this for couple of days i m able to compile the build capi and build wasmer now need your help in understanding what else is required to test the build and how do i integrate this with wasmvm thanks
| 0
|
21,776
| 30,289,582,630
|
IssuesEvent
|
2023-07-09 05:20:01
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
Unable to see exemplar data from span metrics processor when exporter is set to prometheus
|
bug Stale processor/spanmetrics exporter/prometheus closed as inactive
|
### Component(s)
exporter/prometheus, processor/spanmetrics
### What happened?
Hi Team,
I'm trying to generate metrics from span using span metrics processor which I'm able to successfully generate. The exporter is set to prometheus for the metrics generated out of span metrics processor.
I'm using helm charts to deploy opentelemetry collector in kubernetes and following is my configuration
```
mode: "deployment"
replicaCount: 1
nameOverride: otel-collector
fullnameOverride: otel-collector
# Base collector configuration.
config:
exporters:
otlp:
endpoint: otel-collector-grpc:4317
tls:
insecure: true
prometheus:
endpoint: "0.0.0.0:8889"
metric_expiration: 1440m
enable_open_metrics: true
extensions:
health_check: {}
pprof:
endpoint: :1888
zpages:
endpoint: :55679
processors:
memory_limiter:
check_interval: 1s
limit_mib: 4000
spike_limit_mib: 800
batch: {}
tail_sampling:
policies:
- name: drop_noisy_traces_url
type: string_attribute
string_attribute:
key: http.target
values:
- \/health
- \/ping
enabled_regex_matching: true
invert_match: true
spanmetrics:
metrics_exporter: prometheus
dimensions:
- name: http.method
- name: http.status_code
- name: http.target
dimensions_cache_size: 1000
aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"
receivers:
jaeger: null
prometheus: null
zipkin: null
otlp:
protocols:
http:
endpoint: 0.0.0.0:4318
otlp/spanmetrics:
protocols:
grpc:
endpoint: 0.0.0.0:12346
service:
extensions:
- pprof
- zpages
- health_check
pipelines:
metrics:
exporters:
- prometheus
processors:
- memory_limiter
- batch
receivers:
- otlp/spanmetrics
traces:
exporters:
- otlp
processors:
- memory_limiter
- batch
- tail_sampling
- spanmetrics
receivers:
- otlp
# Configuration for ports
ports:
otlp:
enabled: true
containerPort: 4317
servicePort: 4317
hostPort: 4317
protocol: TCP
otlp-http:
enabled: true
containerPort: 4318
servicePort: 4318
hostPort: 4318
protocol: TCP
jaeger-thrift:
enabled: true
containerPort: 14268
servicePort: 14268
hostPort: 14268
protocol: TCP
jaeger-grpc:
enabled: true
containerPort: 14250
servicePort: 14250
hostPort: 14250
protocol: TCP
metrics:
enabled: true
containerPort: 8889
servicePort: 8889
protocol: TCP
healthcheck:
enabled: true
containerPort: 13133
servicePort: 13133
protocol: TCP
zpages:
enabled: true
containerPort: 55679
servicePort: 55679
protocol: TCP
pprof:
enabled: true
containerPort: 1888
servicePort: 1888
protocol: TCP
# Resource limits & requests. Update according to your own use case as these values might be too low for a typical deployment.
resources:
limits:
cpu: 256m
memory: 512Mi
service:
type: NodePort
annotations:
alb.ingress.kubernetes.io/healthcheck-path: /
```
Following is an example metric which I see in locahost:8889/metrics and it doesn't have any exemplar data. Do I need to change any configuration in prometheus exporter or span metrics processor for me to see the exemplars data in prometheus exporter?
```
latency_bucket{http_method="GET",http_status_code="200",http_target="<http_target>",operation="<operation name>",service_name="<operation_name>",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET",le="15000"} 1
```
Can someone please help me?
### Collector version
0.67.0
### Environment information
## Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
### OpenTelemetry Collector configuration
```yaml
mode: "deployment"
replicaCount: 1
nameOverride: otel-collector
fullnameOverride: otel-collector
# Base collector configuration.
config:
exporters:
otlp:
endpoint: otel-collector-grpc:4317
tls:
insecure: true
prometheus:
endpoint: "0.0.0.0:8889"
metric_expiration: 1440m
enable_open_metrics: true
extensions:
health_check: {}
pprof:
endpoint: :1888
zpages:
endpoint: :55679
processors:
memory_limiter:
check_interval: 1s
limit_mib: 4000
spike_limit_mib: 800
batch: {}
tail_sampling:
policies:
- name: drop_noisy_traces_url
type: string_attribute
string_attribute:
key: http.target
values:
- \/health
- \/ping
enabled_regex_matching: true
invert_match: true
spanmetrics:
metrics_exporter: prometheus
dimensions:
- name: http.method
- name: http.status_code
- name: http.target
dimensions_cache_size: 1000
aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"
receivers:
jaeger: null
prometheus: null
zipkin: null
otlp:
protocols:
http:
endpoint: 0.0.0.0:4318
otlp/spanmetrics:
protocols:
grpc:
endpoint: 0.0.0.0:12346
service:
extensions:
- pprof
- zpages
- health_check
pipelines:
metrics:
exporters:
- prometheus
processors:
- memory_limiter
- batch
receivers:
- otlp/spanmetrics
traces:
exporters:
- otlp
processors:
- memory_limiter
- batch
- tail_sampling
- spanmetrics
receivers:
- otlp
# Configuration for ports
ports:
otlp:
enabled: true
containerPort: 4317
servicePort: 4317
hostPort: 4317
protocol: TCP
otlp-http:
enabled: true
containerPort: 4318
servicePort: 4318
hostPort: 4318
protocol: TCP
jaeger-thrift:
enabled: true
containerPort: 14268
servicePort: 14268
hostPort: 14268
protocol: TCP
jaeger-grpc:
enabled: true
containerPort: 14250
servicePort: 14250
hostPort: 14250
protocol: TCP
metrics:
enabled: true
containerPort: 8889
servicePort: 8889
protocol: TCP
healthcheck:
enabled: true
containerPort: 13133
servicePort: 13133
protocol: TCP
zpages:
enabled: true
containerPort: 55679
servicePort: 55679
protocol: TCP
pprof:
enabled: true
containerPort: 1888
servicePort: 1888
protocol: TCP
# Resource limits & requests. Update according to your own use case as these values might be too low for a typical deployment.
resources:
limits:
cpu: 256m
memory: 512Mi
service:
type: NodePort
annotations:
alb.ingress.kubernetes.io/healthcheck-path: /
```
### Log output
_No response_
### Additional context
_No response_
|
1.0
|
Unable to see exemplar data from span metrics processor when exporter is set to prometheus - ### Component(s)
exporter/prometheus, processor/spanmetrics
### What happened?
Hi Team,
I'm trying to generate metrics from span using span metrics processor which I'm able to successfully generate. The exporter is set to prometheus for the metrics generated out of span metrics processor.
I'm using helm charts to deploy opentelemetry collector in kubernetes and following is my configuration
```
mode: "deployment"
replicaCount: 1
nameOverride: otel-collector
fullnameOverride: otel-collector
# Base collector configuration.
config:
exporters:
otlp:
endpoint: otel-collector-grpc:4317
tls:
insecure: true
prometheus:
endpoint: "0.0.0.0:8889"
metric_expiration: 1440m
enable_open_metrics: true
extensions:
health_check: {}
pprof:
endpoint: :1888
zpages:
endpoint: :55679
processors:
memory_limiter:
check_interval: 1s
limit_mib: 4000
spike_limit_mib: 800
batch: {}
tail_sampling:
policies:
- name: drop_noisy_traces_url
type: string_attribute
string_attribute:
key: http.target
values:
- \/health
- \/ping
enabled_regex_matching: true
invert_match: true
spanmetrics:
metrics_exporter: prometheus
dimensions:
- name: http.method
- name: http.status_code
- name: http.target
dimensions_cache_size: 1000
aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"
receivers:
jaeger: null
prometheus: null
zipkin: null
otlp:
protocols:
http:
endpoint: 0.0.0.0:4318
otlp/spanmetrics:
protocols:
grpc:
endpoint: 0.0.0.0:12346
service:
extensions:
- pprof
- zpages
- health_check
pipelines:
metrics:
exporters:
- prometheus
processors:
- memory_limiter
- batch
receivers:
- otlp/spanmetrics
traces:
exporters:
- otlp
processors:
- memory_limiter
- batch
- tail_sampling
- spanmetrics
receivers:
- otlp
# Configuration for ports
ports:
otlp:
enabled: true
containerPort: 4317
servicePort: 4317
hostPort: 4317
protocol: TCP
otlp-http:
enabled: true
containerPort: 4318
servicePort: 4318
hostPort: 4318
protocol: TCP
jaeger-thrift:
enabled: true
containerPort: 14268
servicePort: 14268
hostPort: 14268
protocol: TCP
jaeger-grpc:
enabled: true
containerPort: 14250
servicePort: 14250
hostPort: 14250
protocol: TCP
metrics:
enabled: true
containerPort: 8889
servicePort: 8889
protocol: TCP
healthcheck:
enabled: true
containerPort: 13133
servicePort: 13133
protocol: TCP
zpages:
enabled: true
containerPort: 55679
servicePort: 55679
protocol: TCP
pprof:
enabled: true
containerPort: 1888
servicePort: 1888
protocol: TCP
# Resource limits & requests. Update according to your own use case as these values might be too low for a typical deployment.
resources:
limits:
cpu: 256m
memory: 512Mi
service:
type: NodePort
annotations:
alb.ingress.kubernetes.io/healthcheck-path: /
```
Following is an example metric which I see in locahost:8889/metrics and it doesn't have any exemplar data. Do I need to change any configuration in prometheus exporter or span metrics processor for me to see the exemplars data in prometheus exporter?
```
latency_bucket{http_method="GET",http_status_code="200",http_target="<http_target>",operation="<operation name>",service_name="<operation_name>",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET",le="15000"} 1
```
Can someone please help me?
### Collector version
0.67.0
### Environment information
## Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
### OpenTelemetry Collector configuration
```yaml
mode: "deployment"
replicaCount: 1
nameOverride: otel-collector
fullnameOverride: otel-collector
# Base collector configuration.
config:
exporters:
otlp:
endpoint: otel-collector-grpc:4317
tls:
insecure: true
prometheus:
endpoint: "0.0.0.0:8889"
metric_expiration: 1440m
enable_open_metrics: true
extensions:
health_check: {}
pprof:
endpoint: :1888
zpages:
endpoint: :55679
processors:
memory_limiter:
check_interval: 1s
limit_mib: 4000
spike_limit_mib: 800
batch: {}
tail_sampling:
policies:
- name: drop_noisy_traces_url
type: string_attribute
string_attribute:
key: http.target
values:
- \/health
- \/ping
enabled_regex_matching: true
invert_match: true
spanmetrics:
metrics_exporter: prometheus
dimensions:
- name: http.method
- name: http.status_code
- name: http.target
dimensions_cache_size: 1000
aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"
receivers:
jaeger: null
prometheus: null
zipkin: null
otlp:
protocols:
http:
endpoint: 0.0.0.0:4318
otlp/spanmetrics:
protocols:
grpc:
endpoint: 0.0.0.0:12346
service:
extensions:
- pprof
- zpages
- health_check
pipelines:
metrics:
exporters:
- prometheus
processors:
- memory_limiter
- batch
receivers:
- otlp/spanmetrics
traces:
exporters:
- otlp
processors:
- memory_limiter
- batch
- tail_sampling
- spanmetrics
receivers:
- otlp
# Configuration for ports
ports:
otlp:
enabled: true
containerPort: 4317
servicePort: 4317
hostPort: 4317
protocol: TCP
otlp-http:
enabled: true
containerPort: 4318
servicePort: 4318
hostPort: 4318
protocol: TCP
jaeger-thrift:
enabled: true
containerPort: 14268
servicePort: 14268
hostPort: 14268
protocol: TCP
jaeger-grpc:
enabled: true
containerPort: 14250
servicePort: 14250
hostPort: 14250
protocol: TCP
metrics:
enabled: true
containerPort: 8889
servicePort: 8889
protocol: TCP
healthcheck:
enabled: true
containerPort: 13133
servicePort: 13133
protocol: TCP
zpages:
enabled: true
containerPort: 55679
servicePort: 55679
protocol: TCP
pprof:
enabled: true
containerPort: 1888
servicePort: 1888
protocol: TCP
# Resource limits & requests. Update according to your own use case as these values might be too low for a typical deployment.
resources:
limits:
cpu: 256m
memory: 512Mi
service:
type: NodePort
annotations:
alb.ingress.kubernetes.io/healthcheck-path: /
```
### Log output
_No response_
### Additional context
_No response_
|
process
|
unable to see exemplar data from span metrics processor when exporter is set to prometheus component s exporter prometheus processor spanmetrics what happened hi team i m trying to generate metrics from span using span metrics processor which i m able to successfully generate the exporter is set to prometheus for the metrics generated out of span metrics processor i m using helm charts to deploy opentelemetry collector in kubernetes and following is my configuration mode deployment replicacount nameoverride otel collector fullnameoverride otel collector base collector configuration config exporters otlp endpoint otel collector grpc tls insecure true prometheus endpoint metric expiration enable open metrics true extensions health check pprof endpoint zpages endpoint processors memory limiter check interval limit mib spike limit mib batch tail sampling policies name drop noisy traces url type string attribute string attribute key http target values health ping enabled regex matching true invert match true spanmetrics metrics exporter prometheus dimensions name http method name http status code name http target dimensions cache size aggregation temporality aggregation temporality cumulative receivers jaeger null prometheus null zipkin null otlp protocols http endpoint otlp spanmetrics protocols grpc endpoint service extensions pprof zpages health check pipelines metrics exporters prometheus processors memory limiter batch receivers otlp spanmetrics traces exporters otlp processors memory limiter batch tail sampling spanmetrics receivers otlp configuration for ports ports otlp enabled true containerport serviceport hostport protocol tcp otlp http enabled true containerport serviceport hostport protocol tcp jaeger thrift enabled true containerport serviceport hostport protocol tcp jaeger grpc enabled true containerport serviceport hostport protocol tcp metrics enabled true containerport serviceport protocol tcp healthcheck enabled true containerport serviceport protocol tcp zpages enabled true containerport serviceport protocol tcp pprof enabled true containerport serviceport protocol tcp resource limits requests update according to your own use case as these values might be too low for a typical deployment resources limits cpu memory service type nodeport annotations alb ingress kubernetes io healthcheck path following is an example metric which i see in locahost metrics and it doesn t have any exemplar data do i need to change any configuration in prometheus exporter or span metrics processor for me to see the exemplars data in prometheus exporter latency bucket http method get http status code http target operation service name span kind span kind server status code status code unset le can someone please help me collector version environment information environment os e g ubuntu compiler if manually compiled e g go opentelemetry collector configuration yaml mode deployment replicacount nameoverride otel collector fullnameoverride otel collector base collector configuration config exporters otlp endpoint otel collector grpc tls insecure true prometheus endpoint metric expiration enable open metrics true extensions health check pprof endpoint zpages endpoint processors memory limiter check interval limit mib spike limit mib batch tail sampling policies name drop noisy traces url type string attribute string attribute key http target values health ping enabled regex matching true invert match true spanmetrics metrics exporter prometheus dimensions name http method name http status code name http target dimensions cache size aggregation temporality aggregation temporality cumulative receivers jaeger null prometheus null zipkin null otlp protocols http endpoint otlp spanmetrics protocols grpc endpoint service extensions pprof zpages health check pipelines metrics exporters prometheus processors memory limiter batch receivers otlp spanmetrics traces exporters otlp processors memory limiter batch tail sampling spanmetrics receivers otlp configuration for ports ports otlp enabled true containerport serviceport hostport protocol tcp otlp http enabled true containerport serviceport hostport protocol tcp jaeger thrift enabled true containerport serviceport hostport protocol tcp jaeger grpc enabled true containerport serviceport hostport protocol tcp metrics enabled true containerport serviceport protocol tcp healthcheck enabled true containerport serviceport protocol tcp zpages enabled true containerport serviceport protocol tcp pprof enabled true containerport serviceport protocol tcp resource limits requests update according to your own use case as these values might be too low for a typical deployment resources limits cpu memory service type nodeport annotations alb ingress kubernetes io healthcheck path log output no response additional context no response
| 1
|
47,871
| 2,986,776,334
|
IssuesEvent
|
2015-07-20 07:38:06
|
HubTurbo/HubTurbo
|
https://api.github.com/repos/HubTurbo/HubTurbo
|
closed
|
Label picker doesn't work if the bView is not showing the selected issue
|
feature-labels priority.low type.bug
|
As the bView is no longer involved in picking labels, the label picker should work irrespective of what is displayed in the bView?
|
1.0
|
Label picker doesn't work if the bView is not showing the selected issue - As the bView is no longer involved in picking labels, the label picker should work irrespective of what is displayed in the bView?
|
non_process
|
label picker doesn t work if the bview is not showing the selected issue as the bview is no longer involved in picking labels the label picker should work irrespective of what is displayed in the bview
| 0
|
15,861
| 20,035,667,568
|
IssuesEvent
|
2022-02-02 11:36:12
|
SAP/openui5-docs
|
https://api.github.com/repos/SAP/openui5-docs
|
closed
|
Assigning function directly to "renderer" should no longer be encouraged
|
In Process
|
Specifying the renderer without `apiVersion: 2` (e.g. <code>renderer: <em><fn></em></code>) causes falling back to the legacy string-based rendering even with semantic rendering APIs, i.e. no DOM-patching can be performed.
And according to https://github.com/SAP/openui5/issues/2822, there won't be any implicit setting of the flag either.
Topics that mention <code>renderer: <em><fn></em></code> should be all updated; informing that <code>renderer: <em><fn></em></code> should no longer be used and that `apiVersion: 2` should be explicitly set. E.g.:
<div class="highlight highlight-source-js"><del><pre><span class="pl-en">renderer</span><span class="pl-k">:</span> <span class="pl-k">function</span>(oRM, oControl) {<span class="pl-c"><span class="pl-c">/*</span>...<span class="pl-c">*/</span></span>}</pre></del></div>
```js
renderer: {
apiVersion: 2,
render: function(oRM, oControl) {
// ...
}
}
```
|
1.0
|
Assigning function directly to "renderer" should no longer be encouraged - Specifying the renderer without `apiVersion: 2` (e.g. <code>renderer: <em><fn></em></code>) causes falling back to the legacy string-based rendering even with semantic rendering APIs, i.e. no DOM-patching can be performed.
And according to https://github.com/SAP/openui5/issues/2822, there won't be any implicit setting of the flag either.
Topics that mention <code>renderer: <em><fn></em></code> should be all updated; informing that <code>renderer: <em><fn></em></code> should no longer be used and that `apiVersion: 2` should be explicitly set. E.g.:
<div class="highlight highlight-source-js"><del><pre><span class="pl-en">renderer</span><span class="pl-k">:</span> <span class="pl-k">function</span>(oRM, oControl) {<span class="pl-c"><span class="pl-c">/*</span>...<span class="pl-c">*/</span></span>}</pre></del></div>
```js
renderer: {
apiVersion: 2,
render: function(oRM, oControl) {
// ...
}
}
```
|
process
|
assigning function directly to renderer should no longer be encouraged specifying the renderer without apiversion e g renderer lt fn causes falling back to the legacy string based rendering even with semantic rendering apis i e no dom patching can be performed and according to there won t be any implicit setting of the flag either topics that mention renderer lt fn should be all updated informing that renderer lt fn should no longer be used and that apiversion should be explicitly set e g renderer function orm ocontrol js renderer apiversion render function orm ocontrol
| 1
|
17,516
| 23,328,807,934
|
IssuesEvent
|
2022-08-09 01:30:27
|
streamnative/flink
|
https://api.github.com/repos/streamnative/flink
|
closed
|
[FLINK-28820][Stream API] Improve the sink performance
|
compute/data-processing type/bug
|
Pulsar Sink writes at a speed of a dozen messages per second when At-Least-Once and Exactly-Once are enabled. When None is used, the message write speed reaches the Pulsar write bottleneck.
|
1.0
|
[FLINK-28820][Stream API] Improve the sink performance - Pulsar Sink writes at a speed of a dozen messages per second when At-Least-Once and Exactly-Once are enabled. When None is used, the message write speed reaches the Pulsar write bottleneck.
|
process
|
improve the sink performance pulsar sink writes at a speed of a dozen messages per second when at least once and exactly once are enabled when none is used the message write speed reaches the pulsar write bottleneck
| 1
|
7,042
| 10,198,599,311
|
IssuesEvent
|
2019-08-13 06:01:05
|
ToucanToco/toucan-data-sdk
|
https://api.github.com/repos/ToucanToco/toucan-data-sdk
|
closed
|
if...then...else :: fill a column based on a condition
|
postprocess
|
The API could look something like this:
```
if_then_else:
if: query string (like in our `query` postprocess)
then: formula (like in our `formula` postprocess) or string or boolean or number
else: formula (like in our `formula` postprocess) or string or boolean or number
new_column: string
```
|
1.0
|
if...then...else :: fill a column based on a condition - The API could look something like this:
```
if_then_else:
if: query string (like in our `query` postprocess)
then: formula (like in our `formula` postprocess) or string or boolean or number
else: formula (like in our `formula` postprocess) or string or boolean or number
new_column: string
```
|
process
|
if then else fill a column based on a condition the api could look something like this if then else if query string like in our query postprocess then formula like in our formula postprocess or string or boolean or number else formula like in our formula postprocess or string or boolean or number new column string
| 1
|
3,190
| 6,259,623,398
|
IssuesEvent
|
2017-07-14 18:30:32
|
PeaceGeeksSociety/salesforce
|
https://api.github.com/repos/PeaceGeeksSociety/salesforce
|
opened
|
Gather stakeholder input for a project
|
Communication Templates Community Processes
|
We would like to be able to gather stakeholder input and filter and send emails to specific stakeholder groups for a project (eg. Services Advisor Pathways).
Groups would include:
- Our established committees (add as staff?)
- Project Advisory/Steering Committee
- Immigrant and Refugee Committee
- Frontline Users/Workers Committee
- Other stakeholders not in our established committees
- Immigrants/refugees
- Groups and organizations that work with immigrants/refugees
- Contacts interested in SA/immigrant, refugee issues
This is so we can easily communicate with stakeholders relevant to a specific project.
Done when: can filter and send emails to specific stakeholder groups.
|
1.0
|
Gather stakeholder input for a project - We would like to be able to gather stakeholder input and filter and send emails to specific stakeholder groups for a project (eg. Services Advisor Pathways).
Groups would include:
- Our established committees (add as staff?)
- Project Advisory/Steering Committee
- Immigrant and Refugee Committee
- Frontline Users/Workers Committee
- Other stakeholders not in our established committees
- Immigrants/refugees
- Groups and organizations that work with immigrants/refugees
- Contacts interested in SA/immigrant, refugee issues
This is so we can easily communicate with stakeholders relevant to a specific project.
Done when: can filter and send emails to specific stakeholder groups.
|
process
|
gather stakeholder input for a project we would like to be able to gather stakeholder input and filter and send emails to specific stakeholder groups for a project eg services advisor pathways groups would include our established committees add as staff project advisory steering committee immigrant and refugee committee frontline users workers committee other stakeholders not in our established committees immigrants refugees groups and organizations that work with immigrants refugees contacts interested in sa immigrant refugee issues this is so we can easily communicate with stakeholders relevant to a specific project done when can filter and send emails to specific stakeholder groups
| 1
|
85,038
| 10,583,440,291
|
IssuesEvent
|
2019-10-08 13:43:35
|
ISISScientificComputing/autoreduce
|
https://api.github.com/repos/ISISScientificComputing/autoreduce
|
closed
|
RESTful API for database
|
👤 Developer Requirement 🔑 Database 🔑 Design 🔑 Queues 🔑 WebApp
|
Issue raised by: developer
### What?
It would be nice to have a RESTful API for autoreduction.
Currently we have two places where the database is modified, in the web app when a job is reran and in the queue processor. This doesn't make much sense because we have to have a copy of the models in the queue processor, as well as the messiness of having two places do the same thing. An API will provide a layer of abstraction, with everything in the database being updated in a single place.
For example, when submitting a job you'd just have to make a HTTP request to `/api/submit` with the instrument and run number in request body. The API would then handle creation of the reduction run in the database and adding the reduction run to a queue to be reduced. Then once the reduction is complete another call is made to the API.
Having an API would allow us to use any front end framework, such as making autoreduction a Topcat plugin, meaning scientists only have to go to one website.
### How to test the issue is resolved
1. Ensure we have a good comprehensive design for this.
2. Identify areas of the code base that would need to be changed to accommodate the new communication method
3. Implement - probably in smaller PRs/issues
4. Test manually rigorously
|
1.0
|
RESTful API for database - Issue raised by: developer
### What?
It would be nice to have a RESTful API for autoreduction.
Currently we have two places where the database is modified, in the web app when a job is reran and in the queue processor. This doesn't make much sense because we have to have a copy of the models in the queue processor, as well as the messiness of having two places do the same thing. An API will provide a layer of abstraction, with everything in the database being updated in a single place.
For example, when submitting a job you'd just have to make a HTTP request to `/api/submit` with the instrument and run number in request body. The API would then handle creation of the reduction run in the database and adding the reduction run to a queue to be reduced. Then once the reduction is complete another call is made to the API.
Having an API would allow us to use any front end framework, such as making autoreduction a Topcat plugin, meaning scientists only have to go to one website.
### How to test the issue is resolved
1. Ensure we have a good comprehensive design for this.
2. Identify areas of the code base that would need to be changed to accommodate the new communication method
3. Implement - probably in smaller PRs/issues
4. Test manually rigorously
|
non_process
|
restful api for database issue raised by developer what it would be nice to have a restful api for autoreduction currently we have two places where the database is modified in the web app when a job is reran and in the queue processor this doesn t make much sense because we have to have a copy of the models in the queue processor as well as the messiness of having two places do the same thing an api will provide a layer of abstraction with everything in the database being updated in a single place for example when submitting a job you d just have to make a http request to api submit with the instrument and run number in request body the api would then handle creation of the reduction run in the database and adding the reduction run to a queue to be reduced then once the reduction is complete another call is made to the api having an api would allow us to use any front end framework such as making autoreduction a topcat plugin meaning scientists only have to go to one website how to test the issue is resolved ensure we have a good comprehensive design for this identify areas of the code base that would need to be changed to accommodate the new communication method implement probably in smaller prs issues test manually rigorously
| 0
|
196,992
| 22,572,074,950
|
IssuesEvent
|
2022-06-28 01:52:25
|
Baneeishaque/PropertyFinder-final-v2
|
https://api.github.com/repos/Baneeishaque/PropertyFinder-final-v2
|
closed
|
WS-2017-0421 (High) detected in ws-1.1.5.tgz - autoclosed
|
security vulnerability
|
## WS-2017-0421 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ws-1.1.5.tgz</b></p></summary>
<p>Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-1.1.5.tgz">https://registry.npmjs.org/ws/-/ws-1.1.5.tgz</a></p>
<p>Path to dependency file: PropertyFinder-final-v2/package.json</p>
<p>Path to vulnerable library: PropertyFinder-final-v2/node_modules/ws/package.json</p>
<p>
Dependency Hierarchy:
- react-native-0.56.0.tgz (Root Library)
- :x: **ws-1.1.5.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected version of ws (0.2.6 through 3.3.0 excluding 0.3.4-2, 0.3.5-2, 0.3.5-3, 0.3.5-4, 1.1.5, 2.0.0-beta.0, 2.0.0-beta.1 and 2.0.0-beta.2) are vulnerable to A specially crafted value of the Sec-WebSocket-Extensions header that used Object.prototype property names as extension or parameter names could be used to make a ws server crash.
<p>Publish Date: 2017-11-08
<p>URL: <a href=https://github.com/websockets/ws/commit/c4fe46608acd61fbf7397eadc47378903f95b78a>WS-2017-0421</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/websockets/ws/commit/c4fe46608acd61fbf7397eadc47378903f95b78a">https://github.com/websockets/ws/commit/c4fe46608acd61fbf7397eadc47378903f95b78a</a></p>
<p>Release Date: 2017-11-08</p>
<p>Fix Resolution: ws - 3.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2017-0421 (High) detected in ws-1.1.5.tgz - autoclosed - ## WS-2017-0421 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ws-1.1.5.tgz</b></p></summary>
<p>Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-1.1.5.tgz">https://registry.npmjs.org/ws/-/ws-1.1.5.tgz</a></p>
<p>Path to dependency file: PropertyFinder-final-v2/package.json</p>
<p>Path to vulnerable library: PropertyFinder-final-v2/node_modules/ws/package.json</p>
<p>
Dependency Hierarchy:
- react-native-0.56.0.tgz (Root Library)
- :x: **ws-1.1.5.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected version of ws (0.2.6 through 3.3.0 excluding 0.3.4-2, 0.3.5-2, 0.3.5-3, 0.3.5-4, 1.1.5, 2.0.0-beta.0, 2.0.0-beta.1 and 2.0.0-beta.2) are vulnerable to A specially crafted value of the Sec-WebSocket-Extensions header that used Object.prototype property names as extension or parameter names could be used to make a ws server crash.
<p>Publish Date: 2017-11-08
<p>URL: <a href=https://github.com/websockets/ws/commit/c4fe46608acd61fbf7397eadc47378903f95b78a>WS-2017-0421</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/websockets/ws/commit/c4fe46608acd61fbf7397eadc47378903f95b78a">https://github.com/websockets/ws/commit/c4fe46608acd61fbf7397eadc47378903f95b78a</a></p>
<p>Release Date: 2017-11-08</p>
<p>Fix Resolution: ws - 3.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws high detected in ws tgz autoclosed ws high severity vulnerability vulnerable library ws tgz simple to use blazing fast and thoroughly tested websocket client and server for node js library home page a href path to dependency file propertyfinder final package json path to vulnerable library propertyfinder final node modules ws package json dependency hierarchy react native tgz root library x ws tgz vulnerable library vulnerability details affected version of ws through excluding beta beta and beta are vulnerable to a specially crafted value of the sec websocket extensions header that used object prototype property names as extension or parameter names could be used to make a ws server crash publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ws step up your open source security game with whitesource
| 0
|
595,511
| 18,067,909,794
|
IssuesEvent
|
2021-09-20 21:31:14
|
googleapis/python-runtimeconfig
|
https://api.github.com/repos/googleapis/python-runtimeconfig
|
closed
|
tests.unit.test_config.TestConfig: test_list_variables_defaults failed
|
api: runtimeconfig type: bug priority: p1 flakybot: issue flakybot: flaky
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: a848d542f12c3a2e95bcb7a362e3f980d6c64ba9
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/91ee5699-e1af-4739-a099-91bd4ecc5d01), [Sponge](http://sponge2/91ee5699-e1af-4739-a099-91bd4ecc5d01)
status: failed
<details><summary>Test output</summary><br><pre>self = <unit.test_config.TestConfig testMethod=test_list_variables_defaults>
def test_list_variables_defaults(self):
> import six
E ModuleNotFoundError: No module named 'six'
tests/unit/test_config.py:235: ModuleNotFoundError</pre></details>
|
1.0
|
tests.unit.test_config.TestConfig: test_list_variables_defaults failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: a848d542f12c3a2e95bcb7a362e3f980d6c64ba9
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/91ee5699-e1af-4739-a099-91bd4ecc5d01), [Sponge](http://sponge2/91ee5699-e1af-4739-a099-91bd4ecc5d01)
status: failed
<details><summary>Test output</summary><br><pre>self = <unit.test_config.TestConfig testMethod=test_list_variables_defaults>
def test_list_variables_defaults(self):
> import six
E ModuleNotFoundError: No module named 'six'
tests/unit/test_config.py:235: ModuleNotFoundError</pre></details>
|
non_process
|
tests unit test config testconfig test list variables defaults failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output self def test list variables defaults self import six e modulenotfounderror no module named six tests unit test config py modulenotfounderror
| 0
|
15,754
| 19,911,753,721
|
IssuesEvent
|
2022-01-25 17:52:08
|
input-output-hk/high-assurance-legacy
|
https://api.github.com/repos/input-output-hk/high-assurance-legacy
|
closed
|
Reduce the boilerplate in the proofs of the basic bisimilarity core laws
|
language: isabelle topic: process calculus type: improvement
|
Currently the proofs of the bisimilarity core laws of the basic transition system are suffering from unnecessary boilerplate. Our goal is to reduce this boilerplate by a more advanced handling of the rules for scoped transitions and the use of “up to” techniques.
|
1.0
|
Reduce the boilerplate in the proofs of the basic bisimilarity core laws - Currently the proofs of the bisimilarity core laws of the basic transition system are suffering from unnecessary boilerplate. Our goal is to reduce this boilerplate by a more advanced handling of the rules for scoped transitions and the use of “up to” techniques.
|
process
|
reduce the boilerplate in the proofs of the basic bisimilarity core laws currently the proofs of the bisimilarity core laws of the basic transition system are suffering from unnecessary boilerplate our goal is to reduce this boilerplate by a more advanced handling of the rules for scoped transitions and the use of “up to” techniques
| 1
|
13,053
| 15,389,366,359
|
IssuesEvent
|
2021-03-03 12:00:22
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
merge or obsolete GO:1902505 , no annotations
|
cell cycle and DNA processes term merge
|
this would only apply to 'within' pathway negative regulation.
No annotations.
GO:1902505 negative regulation of signal transduction involved in mitotic G2 DNA damage checkpoint
|
1.0
|
merge or obsolete GO:1902505 , no annotations -
this would only apply to 'within' pathway negative regulation.
No annotations.
GO:1902505 negative regulation of signal transduction involved in mitotic G2 DNA damage checkpoint
|
process
|
merge or obsolete go no annotations this would only apply to within pathway negative regulation no annotations go negative regulation of signal transduction involved in mitotic dna damage checkpoint
| 1
|
17,996
| 24,012,741,622
|
IssuesEvent
|
2022-09-14 20:26:52
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/k8sattributes] Apply strict regex matching on the full length of labels/annotations
|
help wanted good first issue priority:p3 processor/k8sattributes
|
As discussed in [this thread](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/9525#discussion_r863292338 ) we should apply `key_regex` value to the full length of annotations and labels. For example:
- `key_regex: an*` should not match `annotation1` pod label, but match `annn` label.
- `key_regex: .*` should match any pod label.
|
1.0
|
[processor/k8sattributes] Apply strict regex matching on the full length of labels/annotations - As discussed in [this thread](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/9525#discussion_r863292338 ) we should apply `key_regex` value to the full length of annotations and labels. For example:
- `key_regex: an*` should not match `annotation1` pod label, but match `annn` label.
- `key_regex: .*` should match any pod label.
|
process
|
apply strict regex matching on the full length of labels annotations as discussed in we should apply key regex value to the full length of annotations and labels for example key regex an should not match pod label but match annn label key regex should match any pod label
| 1
|
188,937
| 22,046,952,664
|
IssuesEvent
|
2022-05-30 03:36:07
|
dpteam/RK3188_TABLET
|
https://api.github.com/repos/dpteam/RK3188_TABLET
|
closed
|
CVE-2020-29372 (Medium) detected in linux-yocto-4.12v3.0.66 - autoclosed
|
security vulnerability
|
## CVE-2020-29372 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yocto-4.12v3.0.66</b></p></summary>
<p>
<p>Linux 4.12 Embedded Kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto-4.12>https://git.yoctoproject.org/git/linux-yocto-4.12</a></p>
<p>Found in HEAD commit: <a href="https://github.com/dpteam/RK3188_TABLET/commit/0c501f5a0fd72c7b2ac82904235363bd44fd8f9e">0c501f5a0fd72c7b2ac82904235363bd44fd8f9e</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/mm/madvise.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in do_madvise in mm/madvise.c in the Linux kernel before 5.6.8. There is a race condition between coredump operations and the IORING_OP_MADVISE implementation, aka CID-bc0c4d1e176e.
<p>Publish Date: 2020-11-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-29372>CVE-2020-29372</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29372">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29372</a></p>
<p>Release Date: 2020-11-28</p>
<p>Fix Resolution: v5.7-rc3,v5.6.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-29372 (Medium) detected in linux-yocto-4.12v3.0.66 - autoclosed - ## CVE-2020-29372 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yocto-4.12v3.0.66</b></p></summary>
<p>
<p>Linux 4.12 Embedded Kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto-4.12>https://git.yoctoproject.org/git/linux-yocto-4.12</a></p>
<p>Found in HEAD commit: <a href="https://github.com/dpteam/RK3188_TABLET/commit/0c501f5a0fd72c7b2ac82904235363bd44fd8f9e">0c501f5a0fd72c7b2ac82904235363bd44fd8f9e</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/mm/madvise.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in do_madvise in mm/madvise.c in the Linux kernel before 5.6.8. There is a race condition between coredump operations and the IORING_OP_MADVISE implementation, aka CID-bc0c4d1e176e.
<p>Publish Date: 2020-11-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-29372>CVE-2020-29372</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29372">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29372</a></p>
<p>Release Date: 2020-11-28</p>
<p>Fix Resolution: v5.7-rc3,v5.6.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linux yocto autoclosed cve medium severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in head commit a href found in base branch master vulnerable source files mm madvise c vulnerability details an issue was discovered in do madvise in mm madvise c in the linux kernel before there is a race condition between coredump operations and the ioring op madvise implementation aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
329,766
| 28,305,909,476
|
IssuesEvent
|
2023-04-10 10:59:30
|
wazuh/wazuh
|
https://api.github.com/repos/wazuh/wazuh
|
closed
|
Release 4.4.1 - Release Candidate 1 - API integration tests
|
type/test module/api level/task release test/4.4.1
|
The following issue aims to run all [API integration tests](https://github.com/wazuh/wazuh/tree/master/api/test/integration) for the current release candidate, report the results, and open new issues for any encountered errors.
## API integration tests information
| | |
|------------------------------------------|--------------------------------------------|
| **Main release candidate issue** | https://github.com/wazuh/wazuh/issues/16620 |
| **Version** | 4.4.1 |
| **Release candidate #** | RC1 |
| **Tag** | [v4.4.1-rc1](https://github.com/wazuh/wazuh/tree/v4.4.1-rc1) |
| **Previous API integration tests issue** | https://github.com/wazuh/wazuh/issues/16411 |
## Test report procedure
All individual test checks must be marked as:
| | |
|---------------------------------|--------------------------------------------|
| Pass | The test ran successfully. |
| Xfail | The test was expected to fail and it failed. It must be properly justified and reported in an issue. |
| Skip | The test was not run. It must be properly justified and reported in an issue. |
| Fail | The test failed. A new issue must be opened to evaluate and address the problem. |
All test results must have one the following statuses:
| | |
|---------------------------------|--------------------------------------------|
| :green_circle: | All checks passed. |
| :red_circle: | There is at least one failed check. |
| :yellow_circle: | There is at least one expected fail or skipped test and no failures. |
Any failing test must be properly addressed with a new issue, detailing the error and the possible cause. It must be included in the `Fixes` section of the current release candidate main issue.
Any expected fail or skipped test must have an issue justifying the reason. All auditors must validate the justification for an expected fail or skipped test.
An extended report of the test results must be attached as a zip or txt. This report can be used by the auditors to dig deeper into any possible failures and details.
## Conclusions
All tests have been executed and the results can be found [here](https://github.com/wazuh/wazuh/issues/16599#issuecomment-1494999985).
All of them ran successfully and there were no fails. I therefore conclude that this issue is ready to be reviewed.
## Auditors validation
The definition of done for this one is the validation of the conclusions and the test results from all auditors.
All checks from below must be accepted in order to close this issue.
- [x] @davidjiglesias
- [x] @Selutario
|
2.0
|
Release 4.4.1 - Release Candidate 1 - API integration tests - The following issue aims to run all [API integration tests](https://github.com/wazuh/wazuh/tree/master/api/test/integration) for the current release candidate, report the results, and open new issues for any encountered errors.
## API integration tests information
| | |
|------------------------------------------|--------------------------------------------|
| **Main release candidate issue** | https://github.com/wazuh/wazuh/issues/16620 |
| **Version** | 4.4.1 |
| **Release candidate #** | RC1 |
| **Tag** | [v4.4.1-rc1](https://github.com/wazuh/wazuh/tree/v4.4.1-rc1) |
| **Previous API integration tests issue** | https://github.com/wazuh/wazuh/issues/16411 |
## Test report procedure
All individual test checks must be marked as:
| | |
|---------------------------------|--------------------------------------------|
| Pass | The test ran successfully. |
| Xfail | The test was expected to fail and it failed. It must be properly justified and reported in an issue. |
| Skip | The test was not run. It must be properly justified and reported in an issue. |
| Fail | The test failed. A new issue must be opened to evaluate and address the problem. |
All test results must have one the following statuses:
| | |
|---------------------------------|--------------------------------------------|
| :green_circle: | All checks passed. |
| :red_circle: | There is at least one failed check. |
| :yellow_circle: | There is at least one expected fail or skipped test and no failures. |
Any failing test must be properly addressed with a new issue, detailing the error and the possible cause. It must be included in the `Fixes` section of the current release candidate main issue.
Any expected fail or skipped test must have an issue justifying the reason. All auditors must validate the justification for an expected fail or skipped test.
An extended report of the test results must be attached as a zip or txt. This report can be used by the auditors to dig deeper into any possible failures and details.
## Conclusions
All tests have been executed and the results can be found [here](https://github.com/wazuh/wazuh/issues/16599#issuecomment-1494999985).
All of them ran successfully and there were no fails. I therefore conclude that this issue is ready to be reviewed.
## Auditors validation
The definition of done for this one is the validation of the conclusions and the test results from all auditors.
All checks from below must be accepted in order to close this issue.
- [x] @davidjiglesias
- [x] @Selutario
|
non_process
|
release release candidate api integration tests the following issue aims to run all for the current release candidate report the results and open new issues for any encountered errors api integration tests information main release candidate issue version release candidate tag previous api integration tests issue test report procedure all individual test checks must be marked as pass the test ran successfully xfail the test was expected to fail and it failed it must be properly justified and reported in an issue skip the test was not run it must be properly justified and reported in an issue fail the test failed a new issue must be opened to evaluate and address the problem all test results must have one the following statuses green circle all checks passed red circle there is at least one failed check yellow circle there is at least one expected fail or skipped test and no failures any failing test must be properly addressed with a new issue detailing the error and the possible cause it must be included in the fixes section of the current release candidate main issue any expected fail or skipped test must have an issue justifying the reason all auditors must validate the justification for an expected fail or skipped test an extended report of the test results must be attached as a zip or txt this report can be used by the auditors to dig deeper into any possible failures and details conclusions all tests have been executed and the results can be found all of them ran successfully and there were no fails i therefore conclude that this issue is ready to be reviewed auditors validation the definition of done for this one is the validation of the conclusions and the test results from all auditors all checks from below must be accepted in order to close this issue davidjiglesias selutario
| 0
|
18,762
| 24,664,246,203
|
IssuesEvent
|
2022-10-18 09:05:56
|
TheUltimateC0der/listrr
|
https://api.github.com/repos/TheUltimateC0der/listrr
|
closed
|
List Forking
|
enhancement feature-request processing:server-side processing:api-side
|
Would love the ability to to "fork" a list and essentially duplicate the filters of an existing list into your own and then make further tweaks on top of it.
i.e. you find a list of comedy movies, but only want ones in a certain year range and with a certain rating (or remove mature-rated items)
|
2.0
|
List Forking - Would love the ability to to "fork" a list and essentially duplicate the filters of an existing list into your own and then make further tweaks on top of it.
i.e. you find a list of comedy movies, but only want ones in a certain year range and with a certain rating (or remove mature-rated items)
|
process
|
list forking would love the ability to to fork a list and essentially duplicate the filters of an existing list into your own and then make further tweaks on top of it i e you find a list of comedy movies but only want ones in a certain year range and with a certain rating or remove mature rated items
| 1
|
15,199
| 19,010,228,390
|
IssuesEvent
|
2021-11-23 08:26:02
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
ErrorException Array to string conversion in Process.php when looping the envs (start())
|
Bug Process Status: Needs Review Status: Waiting feedback
|
### Symfony version(s) affected
4.4.34
### Description
https://github.com/symfony/process/blob/5.3/Process.php#L342
In the start() method as of 4.4.34 we are now getting an Array to String conversion ErrorException.
This appears to also be present with the latest Laravel Framework update which is what caused us to start getting this issue. Every thing that uses the Symfony start() method is now throwing this error so its no one specific command causing it.
Seems to be something in the envs is being allowed through as an array. Previous 4.4.34 maybe they were not being stripped, or maybe a change in 4.4.34 allowed an array to end in the envs variable.
### How to reproduce
start() symfony process method given a command line with arguments. In our case one example command was:
`exec '/home/.nvm/versions/node/v12.3.1/bin/node' '/var/www/html/our-script.js' ''\''string_a'\''' ''\''string_b'\'''`
```php
$process = new Process(
[
$node,
base_path() . '/our-script.js',
escapeshellarg($string_a),
escapeshellarg($string_b)
]
);
```
where string_a and string_b are definitely strings ;)
### Possible Solution
_No response_
### Additional Context
Likely related to this PR that went in to 4.4.34:
https://github.com/symfony/symfony/pull/44070
|
1.0
|
ErrorException Array to string conversion in Process.php when looping the envs (start()) - ### Symfony version(s) affected
4.4.34
### Description
https://github.com/symfony/process/blob/5.3/Process.php#L342
In the start() method as of 4.4.34 we are now getting an Array to String conversion ErrorException.
This appears to also be present with the latest Laravel Framework update which is what caused us to start getting this issue. Every thing that uses the Symfony start() method is now throwing this error so its no one specific command causing it.
Seems to be something in the envs is being allowed through as an array. Previous 4.4.34 maybe they were not being stripped, or maybe a change in 4.4.34 allowed an array to end in the envs variable.
### How to reproduce
start() symfony process method given a command line with arguments. In our case one example command was:
`exec '/home/.nvm/versions/node/v12.3.1/bin/node' '/var/www/html/our-script.js' ''\''string_a'\''' ''\''string_b'\'''`
```php
$process = new Process(
[
$node,
base_path() . '/our-script.js',
escapeshellarg($string_a),
escapeshellarg($string_b)
]
);
```
where string_a and string_b are definitely strings ;)
### Possible Solution
_No response_
### Additional Context
Likely related to this PR that went in to 4.4.34:
https://github.com/symfony/symfony/pull/44070
|
process
|
errorexception array to string conversion in process php when looping the envs start symfony version s affected description in the start method as of we are now getting an array to string conversion errorexception this appears to also be present with the latest laravel framework update which is what caused us to start getting this issue every thing that uses the symfony start method is now throwing this error so its no one specific command causing it seems to be something in the envs is being allowed through as an array previous maybe they were not being stripped or maybe a change in allowed an array to end in the envs variable how to reproduce start symfony process method given a command line with arguments in our case one example command was exec home nvm versions node bin node var www html our script js string a string b php process new process node base path our script js escapeshellarg string a escapeshellarg string b where string a and string b are definitely strings possible solution no response additional context likely related to this pr that went in to
| 1
|
4,977
| 7,488,337,199
|
IssuesEvent
|
2018-04-06 00:35:47
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
opened
|
Skills Filter Needed
|
FAI Requirements Ready
|
**User Story:** As a user, I'd like to refine my Open Opportunities search by keywords and other information that opportunity creators have entered.
**Acceptance Criteria:**
Filter name/Text: Skills
o Entries are generated by user typing and Open Opps using Autocomplete function
o As an item is selected, will add a pill to the top center of the page beneath Keywords search box and remain in the skill box
|
1.0
|
Skills Filter Needed - **User Story:** As a user, I'd like to refine my Open Opportunities search by keywords and other information that opportunity creators have entered.
**Acceptance Criteria:**
Filter name/Text: Skills
o Entries are generated by user typing and Open Opps using Autocomplete function
o As an item is selected, will add a pill to the top center of the page beneath Keywords search box and remain in the skill box
|
non_process
|
skills filter needed user story as a user i d like to refine my open opportunities search by keywords and other information that opportunity creators have entered acceptance criteria filter name text skills o entries are generated by user typing and open opps using autocomplete function o as an item is selected will add a pill to the top center of the page beneath keywords search box and remain in the skill box
| 0
|
13,756
| 16,505,982,505
|
IssuesEvent
|
2021-05-25 19:22:25
|
googleapis/google-api-python-client
|
https://api.github.com/repos/googleapis/google-api-python-client
|
opened
|
Trigger new releases automatically
|
type: process
|
Using [auto-approve](https://github.com/googleapis/repo-automation-bots/blob/master/packages/auto-approve/README.md) bot, it's possible to automatically merge release PRs so that releases can go out automatically. Once this is done, we can remove the following text in the migration guide : `"If always using the latest version of a service definition is more important than reliability, users should set the static_discovery argument of discovery.build() to False to retrieve the service definition from the internet."` which is located under the heading [For users of public APIs](https://github.com/googleapis/google-api-python-client/blob/master/UPGRADING.md#for-users-of-public-apis).
|
1.0
|
Trigger new releases automatically - Using [auto-approve](https://github.com/googleapis/repo-automation-bots/blob/master/packages/auto-approve/README.md) bot, it's possible to automatically merge release PRs so that releases can go out automatically. Once this is done, we can remove the following text in the migration guide : `"If always using the latest version of a service definition is more important than reliability, users should set the static_discovery argument of discovery.build() to False to retrieve the service definition from the internet."` which is located under the heading [For users of public APIs](https://github.com/googleapis/google-api-python-client/blob/master/UPGRADING.md#for-users-of-public-apis).
|
process
|
trigger new releases automatically using bot it s possible to automatically merge release prs so that releases can go out automatically once this is done we can remove the following text in the migration guide if always using the latest version of a service definition is more important than reliability users should set the static discovery argument of discovery build to false to retrieve the service definition from the internet which is located under the heading
| 1
|
632,642
| 20,203,289,592
|
IssuesEvent
|
2022-02-11 17:20:48
|
IBMa/equal-access
|
https://api.github.com/repos/IBMa/equal-access
|
opened
|
[A11y_Bug]: Unable to tab in to learn more from a scan results in Mac Firefox browser
|
priority-2 (med)
|
### Project
a11y checker
### Browser
Firefox
### Operating System
MacOS
### Automated testing tool and ruleset
_No response_
### Assistive technology
_No response_
### Description
After we conduct a scan in the Firefox browser, the user in not able to tab in to learn more section.
### Steps to reproduce
1. From Mac FF browser conduct a scan
2. From the results of the scan navigate with tabs into 'Learn more'
User is unable to navigate to lear more.
|
1.0
|
[A11y_Bug]: Unable to tab in to learn more from a scan results in Mac Firefox browser - ### Project
a11y checker
### Browser
Firefox
### Operating System
MacOS
### Automated testing tool and ruleset
_No response_
### Assistive technology
_No response_
### Description
After we conduct a scan in the Firefox browser, the user in not able to tab in to learn more section.
### Steps to reproduce
1. From Mac FF browser conduct a scan
2. From the results of the scan navigate with tabs into 'Learn more'
User is unable to navigate to lear more.
|
non_process
|
unable to tab in to learn more from a scan results in mac firefox browser project checker browser firefox operating system macos automated testing tool and ruleset no response assistive technology no response description after we conduct a scan in the firefox browser the user in not able to tab in to learn more section steps to reproduce from mac ff browser conduct a scan from the results of the scan navigate with tabs into learn more user is unable to navigate to lear more
| 0
|
64,967
| 16,081,352,471
|
IssuesEvent
|
2021-04-26 05:22:23
|
dotnet/msbuild
|
https://api.github.com/repos/dotnet/msbuild
|
closed
|
Don't materialize LazyFormattedBuildEventArgs.Message in packet serializer and binary logger
|
Area: Logging Performance-Scenario-Build Priority:2 performance
|
We can just write the raw message and args instead of realizing the long string unnecessarily. This should help with binlog size (as smaller strings are more reusable) and memory allocations.
|
1.0
|
Don't materialize LazyFormattedBuildEventArgs.Message in packet serializer and binary logger - We can just write the raw message and args instead of realizing the long string unnecessarily. This should help with binlog size (as smaller strings are more reusable) and memory allocations.
|
non_process
|
don t materialize lazyformattedbuildeventargs message in packet serializer and binary logger we can just write the raw message and args instead of realizing the long string unnecessarily this should help with binlog size as smaller strings are more reusable and memory allocations
| 0
|
48,754
| 13,184,730,970
|
IssuesEvent
|
2020-08-12 19:59:29
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
gzip silently ignores archives with multiple members? (Trac #145)
|
Incomplete Migration Migrated from Trac dataio defect
|
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/145
, reported by troy and owned by nega_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-04-18T03:10:59",
"description": "Check up on this, apparently fixed in 1.36\n",
"reporter": "troy",
"cc": "",
"resolution": "wont or cant fix",
"_ts": "1397790659000000",
"component": "dataio",
"summary": "gzip silently ignores archives with multiple members?",
"priority": "minor",
"keywords": "concatenated zip unzip gzip",
"time": "2008-10-12T00:36:49",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
gzip silently ignores archives with multiple members? (Trac #145) - <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/145
, reported by troy and owned by nega_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-04-18T03:10:59",
"description": "Check up on this, apparently fixed in 1.36\n",
"reporter": "troy",
"cc": "",
"resolution": "wont or cant fix",
"_ts": "1397790659000000",
"component": "dataio",
"summary": "gzip silently ignores archives with multiple members?",
"priority": "minor",
"keywords": "concatenated zip unzip gzip",
"time": "2008-10-12T00:36:49",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
gzip silently ignores archives with multiple members trac migrated from reported by troy and owned by nega json status closed changetime description check up on this apparently fixed in n reporter troy cc resolution wont or cant fix ts component dataio summary gzip silently ignores archives with multiple members priority minor keywords concatenated zip unzip gzip time milestone owner nega type defect
| 0
|
16,324
| 20,979,540,222
|
IssuesEvent
|
2022-03-28 18:27:05
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Revise definition 'adhesion of symbiont to host'
|
multi-species process
|
This is currently a subclass of 'symbiotic process' and is defined as:
> The attachment of a symbiont to its host via adhesion molecules, general stickiness etc., either directly or indirectly. The host is defined as the larger of the organisms involved in a symbiotic interaction.
First, I think the proper parent class for this is either going to be 'movement in host environment' or 'interaction with host'. Whatever the decision is for where to move it, I think the definition needs to be edited to reflect what the parent class is. My recommendation is:
> An X in which a symbiont attaches to its host via adhesion molecules, general stickiness etc., either directly or indirectly. The host is defined as the larger of the organisms involved in a symbiotic interaction.
Here, 'X' represents the label for the parent class.
|
1.0
|
Revise definition 'adhesion of symbiont to host' - This is currently a subclass of 'symbiotic process' and is defined as:
> The attachment of a symbiont to its host via adhesion molecules, general stickiness etc., either directly or indirectly. The host is defined as the larger of the organisms involved in a symbiotic interaction.
First, I think the proper parent class for this is either going to be 'movement in host environment' or 'interaction with host'. Whatever the decision is for where to move it, I think the definition needs to be edited to reflect what the parent class is. My recommendation is:
> An X in which a symbiont attaches to its host via adhesion molecules, general stickiness etc., either directly or indirectly. The host is defined as the larger of the organisms involved in a symbiotic interaction.
Here, 'X' represents the label for the parent class.
|
process
|
revise definition adhesion of symbiont to host this is currently a subclass of symbiotic process and is defined as the attachment of a symbiont to its host via adhesion molecules general stickiness etc either directly or indirectly the host is defined as the larger of the organisms involved in a symbiotic interaction first i think the proper parent class for this is either going to be movement in host environment or interaction with host whatever the decision is for where to move it i think the definition needs to be edited to reflect what the parent class is my recommendation is an x in which a symbiont attaches to its host via adhesion molecules general stickiness etc either directly or indirectly the host is defined as the larger of the organisms involved in a symbiotic interaction here x represents the label for the parent class
| 1
|
111,298
| 17,021,162,682
|
IssuesEvent
|
2021-07-02 19:21:16
|
alpersonalwebsite/node-express-postgre
|
https://api.github.com/repos/alpersonalwebsite/node-express-postgre
|
opened
|
WS-2021-0154 (Medium) detected in glob-parent-5.1.0.tgz
|
security vulnerability
|
## WS-2021-0154 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>glob-parent-5.1.0.tgz</b></p></summary>
<p>Extract the non-magic parent path from a glob string.</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.0.tgz</a></p>
<p>Path to dependency file: node-express-postgre/package.json</p>
<p>Path to vulnerable library: node-express-postgre/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- eslint-6.8.0.tgz (Root Library)
- :x: **glob-parent-5.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alpersonalwebsite/node-express-postgre/commit/f492c6cd17c7f57babb5687a9d0c405dee11220b">f492c6cd17c7f57babb5687a9d0c405dee11220b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Regular Expression Denial of Service (ReDoS) vulnerability was found in glob-parent before 5.1.2.
<p>Publish Date: 2021-01-27
<p>URL: <a href=https://github.com/gulpjs/glob-parent/commit/f9231168b0041fea3f8f954b3cceb56269fc6366>WS-2021-0154</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/gulpjs/glob-parent/releases/tag/v5.1.2">https://github.com/gulpjs/glob-parent/releases/tag/v5.1.2</a></p>
<p>Release Date: 2021-01-27</p>
<p>Fix Resolution: glob-parent - 5.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2021-0154 (Medium) detected in glob-parent-5.1.0.tgz - ## WS-2021-0154 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>glob-parent-5.1.0.tgz</b></p></summary>
<p>Extract the non-magic parent path from a glob string.</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.0.tgz</a></p>
<p>Path to dependency file: node-express-postgre/package.json</p>
<p>Path to vulnerable library: node-express-postgre/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- eslint-6.8.0.tgz (Root Library)
- :x: **glob-parent-5.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alpersonalwebsite/node-express-postgre/commit/f492c6cd17c7f57babb5687a9d0c405dee11220b">f492c6cd17c7f57babb5687a9d0c405dee11220b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Regular Expression Denial of Service (ReDoS) vulnerability was found in glob-parent before 5.1.2.
<p>Publish Date: 2021-01-27
<p>URL: <a href=https://github.com/gulpjs/glob-parent/commit/f9231168b0041fea3f8f954b3cceb56269fc6366>WS-2021-0154</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/gulpjs/glob-parent/releases/tag/v5.1.2">https://github.com/gulpjs/glob-parent/releases/tag/v5.1.2</a></p>
<p>Release Date: 2021-01-27</p>
<p>Fix Resolution: glob-parent - 5.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws medium detected in glob parent tgz ws medium severity vulnerability vulnerable library glob parent tgz extract the non magic parent path from a glob string library home page a href path to dependency file node express postgre package json path to vulnerable library node express postgre node modules glob parent package json dependency hierarchy eslint tgz root library x glob parent tgz vulnerable library found in head commit a href found in base branch master vulnerability details regular expression denial of service redos vulnerability was found in glob parent before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent step up your open source security game with whitesource
| 0
|
3,322
| 6,438,154,723
|
IssuesEvent
|
2017-08-11 02:36:38
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
TestProcessOnRemoteMachineWindows crashes on uapaot test runs
|
area-System.Diagnostics.Process os-windows-uwp test-run-uwp-ilc
|
Test was crashing by throwing an AV when trying to call `Process.GetProcessById(currentProcess.Id, "127.0.0.1")`. I didn't to more deep investigation on it, but I disabled it for now for uapaot runs.
cc: @Priya91
|
1.0
|
TestProcessOnRemoteMachineWindows crashes on uapaot test runs - Test was crashing by throwing an AV when trying to call `Process.GetProcessById(currentProcess.Id, "127.0.0.1")`. I didn't to more deep investigation on it, but I disabled it for now for uapaot runs.
cc: @Priya91
|
process
|
testprocessonremotemachinewindows crashes on uapaot test runs test was crashing by throwing an av when trying to call process getprocessbyid currentprocess id i didn t to more deep investigation on it but i disabled it for now for uapaot runs cc
| 1
|
8,885
| 11,983,327,221
|
IssuesEvent
|
2020-04-07 14:17:25
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Change labels 'modulation by symbiont of defense-related host MAP kinase-mediated signal transduction pathway' and children
|
multi-species process
|
To make the terms more clear: will change the labels as follows:
GO:0052080 modulation by symbiont of defense-related host MAP kinase-mediated signal transduction pathway
-> modulation by symbiont of host innate immune response MAPK kinase signaling
GO:0052079 induction by symbiont of defense-related host MAP kinase-mediated signal transduction pathway
-> induction by symbiont of host innate immune response MAPK kinase signaling
GO:0052078 suppression by symbiont of defense-related host MAP kinase-mediated signal transduction pathway
-> suppression by symbiont of host innate immune response MAPK kinase signaling
|
1.0
|
Change labels 'modulation by symbiont of defense-related host MAP kinase-mediated signal transduction pathway' and children - To make the terms more clear: will change the labels as follows:
GO:0052080 modulation by symbiont of defense-related host MAP kinase-mediated signal transduction pathway
-> modulation by symbiont of host innate immune response MAPK kinase signaling
GO:0052079 induction by symbiont of defense-related host MAP kinase-mediated signal transduction pathway
-> induction by symbiont of host innate immune response MAPK kinase signaling
GO:0052078 suppression by symbiont of defense-related host MAP kinase-mediated signal transduction pathway
-> suppression by symbiont of host innate immune response MAPK kinase signaling
|
process
|
change labels modulation by symbiont of defense related host map kinase mediated signal transduction pathway and children to make the terms more clear will change the labels as follows go modulation by symbiont of defense related host map kinase mediated signal transduction pathway modulation by symbiont of host innate immune response mapk kinase signaling go induction by symbiont of defense related host map kinase mediated signal transduction pathway induction by symbiont of host innate immune response mapk kinase signaling go suppression by symbiont of defense related host map kinase mediated signal transduction pathway suppression by symbiont of host innate immune response mapk kinase signaling
| 1
|
339,080
| 10,241,589,950
|
IssuesEvent
|
2019-08-20 01:05:11
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
closed
|
[studio] Get audit log of System(Studio Root) return Project not found
|
bug priority: medium
|
## Describe the bug
Try to get the audit log of site System(Studio Root) fails with the response:
`ApiResponse{code=5000, message='Project not found', remedialAction='Check if
you sent in the right Project Id', documentationUrl=''} `
This was working in 3.1.0
## To Reproduce
Steps to reproduce the behavior:
1. Login to authoring
2. Go to `Audit` in the main menu
3. Filter by site `System`
4. See that there are no logs
## Expected behavior
Log entries displayed. Without sites at least login/logout of admin should be displayed.
## Screenshots
- Current 3.1.1-SNAPSHOT

- Released 3.1.0

## Logs
```
[ERROR] 2019-08-09T12:18:38,117 [http-nio-8080-exec-5] [v2.ExceptionHandlers] | API endpoint http://localhost:8080/studio/api/2/audit?limit=15&offset=0&siteName=Studio+Root&sort=date failed with response: ApiResponse{code=5000, message='Project not found', remedialAction
='Check if you sent in the right Project Id', documentationUrl=''}
org.craftercms.studio.api.v1.exception.SiteNotFoundException: Site Studio Root not found.
at org.craftercms.studio.controller.rest.v2.AuditController.getAuditLog(AuditController.java:86) ~[classes/:3.1.1-SNAPSHOT]
at sun.reflect.GeneratedMethodAccessor252.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_222]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_222]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) ~[spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133) ~[spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97) ~[spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827) ~[spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738) ~[spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85) ~[spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967) [spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901) [spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970) [spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861) [spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:635) [servlet-api.jar:?]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846) [spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:742) [servlet-api.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) [tomcat-websocket.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:101) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java:176) [urlrewritefilter-4.0.4.jar:4.0.4]
at org.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java:145) [urlrewritefilter-4.0.4.jar:4.0.4]
at org.tuckey.web.filters.urlrewrite.UrlRewriter.processRequest(UrlRewriter.java:92) [urlrewritefilter-4.0.4.jar:4.0.4]
at org.craftercms.engine.url.rewrite.UrlRewriteFilter.doFilter(UrlRewriteFilter.java:70) [classes/:3.1.1-SNAPSHOT]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.springframework.security.web.csrf.CsrfFilter.doFilterInternal(CsrfFilter.java:100) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:170) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.craftercms.studio.impl.v1.web.security.access.StudioAuthenticationTokenProcessingFilter.doFilter(StudioAuthenticationTokenProcessingFilter.java:150) [classes/:3.1.1-SNAPSHOT]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:66) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:214) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.craftercms.engine.servlet.filter.ExceptionHandlingFilter.doFilter(ExceptionHandlingFilter.java:56) [classes/:3.1.1-SNAPSHOT]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.craftercms.engine.servlet.filter.SiteContextResolvingFilter.doFilter(SiteContextResolvingFilter.java:57) [classes/:3.1.1-SNAPSHOT]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.craftercms.commons.http.RequestContextBindingFilter.doFilter(RequestContextBindingFilter.java:79) [crafter-commons-utilities-3.1.1-SNAPSHOT.jar:3.1.1-SNAPSHOT]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.craftercms.studio.impl.v1.web.filter.MultiReadHttpServletRequestWrapperFilter.doFilter(MultiReadHttpServletRequestWrapperFilter.java:33) [classes/:3.1.1-SNAPSHOT]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.apache.logging.log4j.web.Log4jServletFilter.doFilter(Log4jServletFilter.java:71) [log4j-web-2.11.2.jar:2.11.2]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198) [catalina.jar:8.5.24]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [catalina.jar:8.5.24]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504) [catalina.jar:8.5.24]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140) [catalina.jar:8.5.24]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) [catalina.jar:8.5.24]
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:650) [catalina.jar:8.5.24]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) [catalina.jar:8.5.24]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342) [catalina.jar:8.5.24]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:803) [tomcat-coyote.jar:8.5.24]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat-coyote.jar:8.5.24]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:790) [tomcat-coyote.jar:8.5.24]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1459) [tomcat-coyote.jar:8.5.24]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-coyote.jar:8.5.24]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-util.jar:8.5.24]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
```
## Specs
### Version
```
Studio Version Number: 3.1.1-SNAPSHOT-2613ef
Build Number: 2613efd83d9d67e9d56c3e0ceaa458dabd20bf65
Build Date/Time: 08-05-2019 10:44:26 -0600
```
### OS
{{What OS did you use to produce the bug.}}
### Browser
{{What browser did you use to produce the bug.}}
## Additional context
{{Add any other context about the problem here.}}
|
1.0
|
[studio] Get audit log of System(Studio Root) return Project not found - ## Describe the bug
Try to get the audit log of site System(Studio Root) fails with the response:
`ApiResponse{code=5000, message='Project not found', remedialAction='Check if
you sent in the right Project Id', documentationUrl=''} `
This was working in 3.1.0
## To Reproduce
Steps to reproduce the behavior:
1. Login to authoring
2. Go to `Audit` in the main menu
3. Filter by site `System`
4. See that there are no logs
## Expected behavior
Log entries displayed. Without sites at least login/logout of admin should be displayed.
## Screenshots
- Current 3.1.1-SNAPSHOT

- Released 3.1.0

## Logs
```
[ERROR] 2019-08-09T12:18:38,117 [http-nio-8080-exec-5] [v2.ExceptionHandlers] | API endpoint http://localhost:8080/studio/api/2/audit?limit=15&offset=0&siteName=Studio+Root&sort=date failed with response: ApiResponse{code=5000, message='Project not found', remedialAction
='Check if you sent in the right Project Id', documentationUrl=''}
org.craftercms.studio.api.v1.exception.SiteNotFoundException: Site Studio Root not found.
at org.craftercms.studio.controller.rest.v2.AuditController.getAuditLog(AuditController.java:86) ~[classes/:3.1.1-SNAPSHOT]
at sun.reflect.GeneratedMethodAccessor252.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_222]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_222]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) ~[spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133) ~[spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97) ~[spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827) ~[spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738) ~[spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85) ~[spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967) [spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901) [spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970) [spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861) [spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:635) [servlet-api.jar:?]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846) [spring-webmvc-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:742) [servlet-api.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) [tomcat-websocket.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:101) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java:176) [urlrewritefilter-4.0.4.jar:4.0.4]
at org.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java:145) [urlrewritefilter-4.0.4.jar:4.0.4]
at org.tuckey.web.filters.urlrewrite.UrlRewriter.processRequest(UrlRewriter.java:92) [urlrewritefilter-4.0.4.jar:4.0.4]
at org.craftercms.engine.url.rewrite.UrlRewriteFilter.doFilter(UrlRewriteFilter.java:70) [classes/:3.1.1-SNAPSHOT]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.springframework.security.web.csrf.CsrfFilter.doFilterInternal(CsrfFilter.java:100) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:170) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.craftercms.studio.impl.v1.web.security.access.StudioAuthenticationTokenProcessingFilter.doFilter(StudioAuthenticationTokenProcessingFilter.java:150) [classes/:3.1.1-SNAPSHOT]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:66) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:214) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177) [spring-security-web-4.2.13.RELEASE.jar:4.2.13.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.craftercms.engine.servlet.filter.ExceptionHandlingFilter.doFilter(ExceptionHandlingFilter.java:56) [classes/:3.1.1-SNAPSHOT]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.craftercms.engine.servlet.filter.SiteContextResolvingFilter.doFilter(SiteContextResolvingFilter.java:57) [classes/:3.1.1-SNAPSHOT]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) [spring-web-4.3.18.RELEASE.jar:4.3.18.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.craftercms.commons.http.RequestContextBindingFilter.doFilter(RequestContextBindingFilter.java:79) [crafter-commons-utilities-3.1.1-SNAPSHOT.jar:3.1.1-SNAPSHOT]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.craftercms.studio.impl.v1.web.filter.MultiReadHttpServletRequestWrapperFilter.doFilter(MultiReadHttpServletRequestWrapperFilter.java:33) [classes/:3.1.1-SNAPSHOT]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.apache.logging.log4j.web.Log4jServletFilter.doFilter(Log4jServletFilter.java:71) [log4j-web-2.11.2.jar:2.11.2]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.24]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.24]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198) [catalina.jar:8.5.24]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [catalina.jar:8.5.24]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504) [catalina.jar:8.5.24]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140) [catalina.jar:8.5.24]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) [catalina.jar:8.5.24]
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:650) [catalina.jar:8.5.24]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) [catalina.jar:8.5.24]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342) [catalina.jar:8.5.24]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:803) [tomcat-coyote.jar:8.5.24]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat-coyote.jar:8.5.24]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:790) [tomcat-coyote.jar:8.5.24]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1459) [tomcat-coyote.jar:8.5.24]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-coyote.jar:8.5.24]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-util.jar:8.5.24]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
```
## Specs
### Version
```
Studio Version Number: 3.1.1-SNAPSHOT-2613ef
Build Number: 2613efd83d9d67e9d56c3e0ceaa458dabd20bf65
Build Date/Time: 08-05-2019 10:44:26 -0600
```
### OS
{{What OS did you use to produce the bug.}}
### Browser
{{What browser did you use to produce the bug.}}
## Additional context
{{Add any other context about the problem here.}}
|
non_process
|
get audit log of system studio root return project not found describe the bug try to get the audit log of site system studio root fails with the response apiresponse code message project not found remedialaction check if you sent in the right project id documentationurl this was working in to reproduce steps to reproduce the behavior login to authoring go to audit in the main menu filter by site system see that there are no logs expected behavior log entries displayed without sites at least login logout of admin should be displayed screenshots current snapshot released logs api endpoint failed with response apiresponse code message project not found remedialaction check if you sent in the right project id documentationurl org craftercms studio api exception sitenotfoundexception site studio root not found at org craftercms studio controller rest auditcontroller getauditlog auditcontroller java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org springframework web method support invocablehandlermethod doinvoke invocablehandlermethod java at org springframework web method support invocablehandlermethod invokeforrequest invocablehandlermethod java at org springframework web servlet mvc method annotation servletinvocablehandlermethod invokeandhandle servletinvocablehandlermethod java at org springframework web servlet mvc method annotation requestmappinghandleradapter invokehandlermethod requestmappinghandleradapter java at org springframework web servlet mvc method annotation requestmappinghandleradapter handleinternal requestmappinghandleradapter java at org springframework web servlet mvc method abstracthandlermethodadapter handle abstracthandlermethodadapter java at org springframework web servlet dispatcherservlet dodispatch dispatcherservlet java at org springframework web servlet dispatcherservlet doservice dispatcherservlet java at org springframework web servlet frameworkservlet processrequest frameworkservlet java at org springframework web servlet frameworkservlet doget frameworkservlet java at javax servlet http httpservlet service httpservlet java at org springframework web servlet frameworkservlet service frameworkservlet java at javax servlet http httpservlet service httpservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache tomcat websocket server wsfilter dofilter wsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org springframework web filter onceperrequestfilter dofilter onceperrequestfilter java at org springframework web filter delegatingfilterproxy invokedelegate delegatingfilterproxy java at org springframework web filter delegatingfilterproxy dofilter delegatingfilterproxy java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org tuckey web filters urlrewrite rulechain handlerewrite rulechain java at org tuckey web filters urlrewrite rulechain dorules rulechain java at org tuckey web filters urlrewrite urlrewriter processrequest urlrewriter java at org craftercms engine url rewrite urlrewritefilter dofilter urlrewritefilter java at org springframework web filter delegatingfilterproxy invokedelegate delegatingfilterproxy java at org springframework web filter delegatingfilterproxy dofilter delegatingfilterproxy java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org springframework security web csrf csrffilter dofilterinternal csrffilter java at org springframework web filter onceperrequestfilter dofilter onceperrequestfilter java at org springframework web filter delegatingfilterproxy invokedelegate delegatingfilterproxy java at org springframework web filter delegatingfilterproxy dofilter delegatingfilterproxy java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org springframework security web filterchainproxy virtualfilterchain dofilter filterchainproxy java at org springframework security web access intercept filtersecurityinterceptor invoke filtersecurityinterceptor java at org springframework security web access intercept filtersecurityinterceptor dofilter filtersecurityinterceptor java at org springframework security web filterchainproxy virtualfilterchain dofilter filterchainproxy java at org springframework security web access exceptiontranslationfilter dofilter exceptiontranslationfilter java at org springframework security web filterchainproxy virtualfilterchain dofilter filterchainproxy java at org springframework security web authentication anonymousauthenticationfilter dofilter anonymousauthenticationfilter java at org springframework security web filterchainproxy virtualfilterchain dofilter filterchainproxy java at org springframework security web servletapi securitycontextholderawarerequestfilter dofilter securitycontextholderawarerequestfilter java at org springframework security web filterchainproxy virtualfilterchain dofilter filterchainproxy java at org craftercms studio impl web security access studioauthenticationtokenprocessingfilter dofilter studioauthenticationtokenprocessingfilter java at org springframework security web filterchainproxy virtualfilterchain dofilter filterchainproxy java at org springframework security web header headerwriterfilter dofilterinternal headerwriterfilter java at org springframework web filter onceperrequestfilter dofilter onceperrequestfilter java at org springframework security web filterchainproxy virtualfilterchain dofilter filterchainproxy java at org springframework security web context request async webasyncmanagerintegrationfilter dofilterinternal webasyncmanagerintegrationfilter java at org springframework web filter onceperrequestfilter dofilter onceperrequestfilter java at org springframework security web filterchainproxy virtualfilterchain dofilter filterchainproxy java at org springframework security web context securitycontextpersistencefilter dofilter securitycontextpersistencefilter java at org springframework security web filterchainproxy virtualfilterchain dofilter filterchainproxy java at org springframework security web filterchainproxy dofilterinternal filterchainproxy java at org springframework security web filterchainproxy dofilter filterchainproxy java at org springframework web filter delegatingfilterproxy invokedelegate delegatingfilterproxy java at org springframework web filter delegatingfilterproxy dofilter delegatingfilterproxy java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org craftercms engine servlet filter exceptionhandlingfilter dofilter exceptionhandlingfilter java at org springframework web filter delegatingfilterproxy invokedelegate delegatingfilterproxy java at org springframework web filter delegatingfilterproxy dofilter delegatingfilterproxy java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org craftercms engine servlet filter sitecontextresolvingfilter dofilter sitecontextresolvingfilter java at org springframework web filter delegatingfilterproxy invokedelegate delegatingfilterproxy java at org springframework web filter delegatingfilterproxy dofilter delegatingfilterproxy java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org craftercms commons http requestcontextbindingfilter dofilter requestcontextbindingfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org craftercms studio impl web filter multireadhttpservletrequestwrapperfilter dofilter multireadhttpservletrequestwrapperfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache logging web dofilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina authenticator authenticatorbase invoke authenticatorbase java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina valves errorreportvalve invoke errorreportvalve java at org apache catalina valves abstractaccesslogvalve invoke abstractaccesslogvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote service java at org apache coyote abstractprocessorlight process abstractprocessorlight java at org apache coyote abstractprotocol connectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net socketprocessorbase run socketprocessorbase java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads taskthread wrappingrunnable run taskthread java at java lang thread run thread java specs version studio version number snapshot build number build date time os what os did you use to produce the bug browser what browser did you use to produce the bug additional context add any other context about the problem here
| 0
|
24,171
| 23,424,609,556
|
IssuesEvent
|
2022-08-14 07:38:35
|
Docile-Alligator/Infinity-For-Reddit
|
https://api.github.com/repos/Docile-Alligator/Infinity-For-Reddit
|
closed
|
Image captions missing
|
type: confirmed bug area: usability
|
### Checklist
- [X] I have used the search function for [open](https://github.com/Docile-Alligator/Infinity-For-Reddit/issues) **and** [closed](https://github.com/Docile-Alligator/Infinity-For-Reddit/issues?q=is%3Aissue+is%3Aclosed) issues to see if someone else has already submitted the same bug report.
- [x] I will describe the problem with as much detail as possible.
- [X] If the bug only occurs with a certain link, post, image..., I will include the URL.
### App version
5.2.1
### Where did you get the app from
F-Droid
### Android version
12
### Device model
Galaxy Note 10+
### First occurred
~2 weeks
### Steps to reproduce
See: https://www.reddit.com/r/spiders/comments/wfd8qk/anyone_know_what_kind_of_spider_this_is_brown/
Image has captions, but does not show in Infinity. This is regardless or clearing app cache, restarting app etc
However, see: https://www.reddit.com/r/flashlight/comments/wervag/modded_emisar_d4v1_with_9amp_driver_lighted/?utm_medium=android_app&utm_source=share
Image has captions and DOES show up in infinity.
The issue is consistent between posts, so the posts that it works fine on now, it will continue to work fine. And the posts it doesn't work on, it never works. I don't know why specific posts are having issues and others aren't.
### Expected behaviour
### **First post (has captions and displays captions) works as intended. Shown on reddit app and Infinity**

_As seen, reddit app shows post images has captions_

_As seen, infinity app shows post images has captions as intended_
### **Second post**(has captions but does not display captions) does not work as intended. Shown on reddit app (where captions always show as intended) and Infinity.

_As seen, reddit app shows post images has captions_

_As seen, infinity app does not correctly display caption_
|
True
|
Image captions missing - ### Checklist
- [X] I have used the search function for [open](https://github.com/Docile-Alligator/Infinity-For-Reddit/issues) **and** [closed](https://github.com/Docile-Alligator/Infinity-For-Reddit/issues?q=is%3Aissue+is%3Aclosed) issues to see if someone else has already submitted the same bug report.
- [x] I will describe the problem with as much detail as possible.
- [X] If the bug only occurs with a certain link, post, image..., I will include the URL.
### App version
5.2.1
### Where did you get the app from
F-Droid
### Android version
12
### Device model
Galaxy Note 10+
### First occurred
~2 weeks
### Steps to reproduce
See: https://www.reddit.com/r/spiders/comments/wfd8qk/anyone_know_what_kind_of_spider_this_is_brown/
Image has captions, but does not show in Infinity. This is regardless or clearing app cache, restarting app etc
However, see: https://www.reddit.com/r/flashlight/comments/wervag/modded_emisar_d4v1_with_9amp_driver_lighted/?utm_medium=android_app&utm_source=share
Image has captions and DOES show up in infinity.
The issue is consistent between posts, so the posts that it works fine on now, it will continue to work fine. And the posts it doesn't work on, it never works. I don't know why specific posts are having issues and others aren't.
### Expected behaviour
### **First post (has captions and displays captions) works as intended. Shown on reddit app and Infinity**

_As seen, reddit app shows post images has captions_

_As seen, infinity app shows post images has captions as intended_
### **Second post**(has captions but does not display captions) does not work as intended. Shown on reddit app (where captions always show as intended) and Infinity.

_As seen, reddit app shows post images has captions_

_As seen, infinity app does not correctly display caption_
|
non_process
|
image captions missing checklist i have used the search function for and issues to see if someone else has already submitted the same bug report i will describe the problem with as much detail as possible if the bug only occurs with a certain link post image i will include the url app version where did you get the app from f droid android version device model galaxy note first occurred weeks steps to reproduce see image has captions but does not show in infinity this is regardless or clearing app cache restarting app etc however see image has captions and does show up in infinity the issue is consistent between posts so the posts that it works fine on now it will continue to work fine and the posts it doesn t work on it never works i don t know why specific posts are having issues and others aren t expected behaviour first post has captions and displays captions works as intended shown on reddit app and infinity as seen reddit app shows post images has captions as seen infinity app shows post images has captions as intended second post has captions but does not display captions does not work as intended shown on reddit app where captions always show as intended and infinity as seen reddit app shows post images has captions as seen infinity app does not correctly display caption
| 0
|
2,197
| 5,039,355,331
|
IssuesEvent
|
2016-12-18 19:32:24
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
closed
|
[subtitles] [fr] PAS VU À LA TÉLÉ #6 - L'AUTISME - Invitée : OLIVIA CATTAN
|
Language: French Process: [6] Approved
|
# Video title
PAS VU À LA TÉLÉ # 6 - L'AUTISME - Invitée : OLIVIA CATTAN
# URL
https://www.youtube.com/watch?v=Xo1W8mnVyb8
# Youtube subtitles language
Français
# Duration
1:00:25
# Subtitles URL
https://www.youtube.com/timedtext_editor?ui=hd&v=Xo1W8mnVyb8&action_mde_edit_form=1&lang=fr&ref=player&bl=vmp&tab=captions
|
1.0
|
[subtitles] [fr] PAS VU À LA TÉLÉ #6 - L'AUTISME - Invitée : OLIVIA CATTAN - # Video title
PAS VU À LA TÉLÉ # 6 - L'AUTISME - Invitée : OLIVIA CATTAN
# URL
https://www.youtube.com/watch?v=Xo1W8mnVyb8
# Youtube subtitles language
Français
# Duration
1:00:25
# Subtitles URL
https://www.youtube.com/timedtext_editor?ui=hd&v=Xo1W8mnVyb8&action_mde_edit_form=1&lang=fr&ref=player&bl=vmp&tab=captions
|
process
|
pas vu à la télé l autisme invitée olivia cattan video title pas vu à la télé l autisme invitée olivia cattan url youtube subtitles language français duration subtitles url
| 1
|
60,619
| 12,128,970,366
|
IssuesEvent
|
2020-04-22 21:31:41
|
kwk/test-llvm-bz-import-4
|
https://api.github.com/repos/kwk/test-llvm-bz-import-4
|
opened
|
clang doesn't support pinning variables to registers with __asm__("eax")
|
BZ-BUG-STATUS: RESOLVED BZ-RESOLUTION: FIXED clang/LLVM Codegen dummy import from bugzilla
|
This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=3933.
|
1.0
|
clang doesn't support pinning variables to registers with __asm__("eax") - This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=3933.
|
non_process
|
clang doesn t support pinning variables to registers with asm eax this issue was imported from bugzilla
| 0
|
285,914
| 31,155,766,206
|
IssuesEvent
|
2023-08-16 13:02:38
|
nidhi7598/linux-4.1.15_CVE-2018-5873
|
https://api.github.com/repos/nidhi7598/linux-4.1.15_CVE-2018-5873
|
opened
|
CVE-2015-8552 (Medium) detected in linux-stable-rtv4.1.33
|
Mend: dependency security vulnerability
|
## CVE-2015-8552 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/xen/xen-pciback/pciback_ops.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/xen/xen-pciback/pciback_ops.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The PCI backend driver in Xen, when running on an x86 system and using Linux 3.1.x through 4.3.x as the driver domain, allows local guest administrators to generate a continuous stream of WARN messages and cause a denial of service (disk consumption) by leveraging a system with access to a passed-through MSI or MSI-X capable physical PCI device and XEN_PCI_OP_enable_msi operations, aka "Linux pciback missing sanity checks."
<p>Publish Date: 2016-04-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-8552>CVE-2015-8552</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8552">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8552</a></p>
<p>Release Date: 2016-04-13</p>
<p>Fix Resolution: v4.4-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2015-8552 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2015-8552 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/xen/xen-pciback/pciback_ops.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/xen/xen-pciback/pciback_ops.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The PCI backend driver in Xen, when running on an x86 system and using Linux 3.1.x through 4.3.x as the driver domain, allows local guest administrators to generate a continuous stream of WARN messages and cause a denial of service (disk consumption) by leveraging a system with access to a passed-through MSI or MSI-X capable physical PCI device and XEN_PCI_OP_enable_msi operations, aka "Linux pciback missing sanity checks."
<p>Publish Date: 2016-04-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-8552>CVE-2015-8552</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8552">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8552</a></p>
<p>Release Date: 2016-04-13</p>
<p>Fix Resolution: v4.4-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in base branch master vulnerable source files drivers xen xen pciback pciback ops c drivers xen xen pciback pciback ops c vulnerability details the pci backend driver in xen when running on an system and using linux x through x as the driver domain allows local guest administrators to generate a continuous stream of warn messages and cause a denial of service disk consumption by leveraging a system with access to a passed through msi or msi x capable physical pci device and xen pci op enable msi operations aka linux pciback missing sanity checks publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
8,610
| 3,000,184,971
|
IssuesEvent
|
2015-07-23 23:16:06
|
swift-nav/piksi_tools
|
https://api.github.com/repos/swift-nav/piksi_tools
|
closed
|
USRP testing script
|
3 - Testing enhancement tooling
|
Tracking issue for deploying a USRP HITL pipeline. Going to try and finish this EOD.
0. Bash script for running job
1. Testing on relevant machines
2. Setting up analysis pipeline.
/cc @cbeighley @henryhallam @mfine @fnoble @imh @denniszollo
|
1.0
|
USRP testing script - Tracking issue for deploying a USRP HITL pipeline. Going to try and finish this EOD.
0. Bash script for running job
1. Testing on relevant machines
2. Setting up analysis pipeline.
/cc @cbeighley @henryhallam @mfine @fnoble @imh @denniszollo
|
non_process
|
usrp testing script tracking issue for deploying a usrp hitl pipeline going to try and finish this eod bash script for running job testing on relevant machines setting up analysis pipeline cc cbeighley henryhallam mfine fnoble imh denniszollo
| 0
|
46,596
| 19,324,551,260
|
IssuesEvent
|
2021-12-14 09:56:57
|
microsoft/botframework-sdk
|
https://api.github.com/repos/microsoft/botframework-sdk
|
closed
|
Handling multi-turn Dialog or conversation for Alexa skill using Bot framework V4 NodeJS
|
customer-replied-to customer-reported Bot Services ExemptFromDailyDRIReport support
|
The waterfall steps of a dialog are not being visited instead the conversation is getting started from the beginning of the dialog(i.e; from the first waterfall step) for each request made to Alexa Skill. I tried using `return Dialog.EndOfTurn;` and `return {status: DialogTurnStatus.waiting};` so that it waits for user input and forwards to the next waterfall step. I referred to [Bot Community GitHub repo - Alexa adapter Library](https://github.com/BotBuilderCommunity/botbuilder-community-js/tree/master/libraries/botbuilder-adapter-alexa).
Unfortunately, I didn't find a solution but I know `return await step.next();` forwards to the next waterfall step without waiting for the input from the user.
I would like to wait for the user input and then forward it to the next waterfall step. Is there a way or any workaround to achieve this?
Here are the screengrabs of my code-


|
1.0
|
Handling multi-turn Dialog or conversation for Alexa skill using Bot framework V4 NodeJS - The waterfall steps of a dialog are not being visited instead the conversation is getting started from the beginning of the dialog(i.e; from the first waterfall step) for each request made to Alexa Skill. I tried using `return Dialog.EndOfTurn;` and `return {status: DialogTurnStatus.waiting};` so that it waits for user input and forwards to the next waterfall step. I referred to [Bot Community GitHub repo - Alexa adapter Library](https://github.com/BotBuilderCommunity/botbuilder-community-js/tree/master/libraries/botbuilder-adapter-alexa).
Unfortunately, I didn't find a solution but I know `return await step.next();` forwards to the next waterfall step without waiting for the input from the user.
I would like to wait for the user input and then forward it to the next waterfall step. Is there a way or any workaround to achieve this?
Here are the screengrabs of my code-


|
non_process
|
handling multi turn dialog or conversation for alexa skill using bot framework nodejs the waterfall steps of a dialog are not being visited instead the conversation is getting started from the beginning of the dialog i e from the first waterfall step for each request made to alexa skill i tried using return dialog endofturn and return status dialogturnstatus waiting so that it waits for user input and forwards to the next waterfall step i referred to unfortunately i didn t find a solution but i know return await step next forwards to the next waterfall step without waiting for the input from the user i would like to wait for the user input and then forward it to the next waterfall step is there a way or any workaround to achieve this here are the screengrabs of my code
| 0
|
32,933
| 13,957,740,648
|
IssuesEvent
|
2020-10-24 08:25:39
|
microsoft/MixedRealityToolkit-Unity
|
https://api.github.com/repos/microsoft/MixedRealityToolkit-Unity
|
closed
|
Add demo scene for using single-service manager(s)
|
Example/Test Scene Services Stale Tests
|
MRTK v2.0.0 shipped with experimental service manager components that enable customers to include individual services in their scenes (i.e. not use the MixedRealityToolkit object).
To help demonstrate using these services, there should be a version of at least one of the demo scenes that uses the single-service manager(s).
|
1.0
|
Add demo scene for using single-service manager(s) - MRTK v2.0.0 shipped with experimental service manager components that enable customers to include individual services in their scenes (i.e. not use the MixedRealityToolkit object).
To help demonstrate using these services, there should be a version of at least one of the demo scenes that uses the single-service manager(s).
|
non_process
|
add demo scene for using single service manager s mrtk shipped with experimental service manager components that enable customers to include individual services in their scenes i e not use the mixedrealitytoolkit object to help demonstrate using these services there should be a version of at least one of the demo scenes that uses the single service manager s
| 0
|
11,105
| 13,942,784,744
|
IssuesEvent
|
2020-10-22 21:39:09
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
0.37.0-rc3: Filtering a joined table column by "Is not" or "Does not contain" fails
|
.Query Language (MBQL) .Reproduced Priority:P1 Querying/GUI Querying/Processor Type:Bug
|
**Describe the bug**
Filtering a joined table column by "Is not" or "Does not contain" fails because it's referencing the original table name, when doing the `null` check, instead of the aliased table name.
**To Reproduce**
1. Custom question > Sample Dataset > Orders
2. Join table Products
3. Filter by Product.Category "Is not" (or "Does not contain") `Gizmo`

4. Fails with `Column "PUBLIC.PRODUCTS.CATEGORY" not found` because the table reference should be the aliased name
<details><summary>Full stacktrace</summary>
```
2020-10-20 13:56:39,264 ERROR middleware.catch-exceptions :: Error processing query: null
{:database_id 4,
:started_at #t "2020-10-20T13:56:38.884429+02:00[Europe/Copenhagen]",
:state "42S22",
:json_query
{:type "query",
:query
{:source-table 11,
:joins [{:fields "all", :source-table 10, :condition ["=" ["field-id" 82] ["joined-field" "Products" ["field-id" 105]]], :alias "Products"}],
:filter ["does-not-contain" ["joined-field" "Products" ["field-id" 107]] "Gizmo" {:case-sensitive false}]},
:database 4,
:parameters [],
:middleware {:js-int-to-string? true, :add-default-userland-constraints? true}},
:native
{:query
"SELECT \"PUBLIC\".\"ORDERS\".\"ID\" AS \"ID\", \"PUBLIC\".\"ORDERS\".\"USER_ID\" AS \"USER_ID\", \"PUBLIC\".\"ORDERS\".\"PRODUCT_ID\" AS \"PRODUCT_ID\", \"PUBLIC\".\"ORDERS\".\"SUBTOTAL\" AS \"SUBTOTAL\", \"PUBLIC\".\"ORDERS\".\"TAX\" AS \"TAX\", \"PUBLIC\".\"ORDERS\".\"TOTAL\" AS \"TOTAL\", \"PUBLIC\".\"ORDERS\".\"DISCOUNT\" AS \"DISCOUNT\", \"PUBLIC\".\"ORDERS\".\"CREATED_AT\" AS \"CREATED_AT\", \"PUBLIC\".\"ORDERS\".\"QUANTITY\" AS \"QUANTITY\", \"Products\".\"ID\" AS \"ID_2\", \"Products\".\"EAN\" AS \"EAN\", \"Products\".\"TITLE\" AS \"TITLE\", \"Products\".\"CATEGORY\" AS \"CATEGORY\", \"Products\".\"VENDOR\" AS \"VENDOR\", \"Products\".\"PRICE\" AS \"PRICE\", \"Products\".\"RATING\" AS \"RATING\", \"Products\".\"CREATED_AT\" AS \"CREATED_AT_2\" FROM \"PUBLIC\".\"ORDERS\" LEFT JOIN \"PUBLIC\".\"PRODUCTS\" \"Products\" ON \"PUBLIC\".\"ORDERS\".\"PRODUCT_ID\" = \"Products\".\"ID\" WHERE (NOT (lower(\"Products\".\"CATEGORY\") like ?) OR \"PUBLIC\".\"PRODUCTS\".\"CATEGORY\" IS NULL) LIMIT 2000",
:params ("%gizmo%")},
:status :failed,
:class org.h2.jdbc.JdbcSQLException,
:stacktrace
["org.h2.message.DbException.getJdbcSQLException(DbException.java:357)"
"org.h2.message.DbException.get(DbException.java:179)"
"org.h2.message.DbException.get(DbException.java:155)"
"org.h2.expression.ExpressionColumn.optimize(ExpressionColumn.java:150)"
"org.h2.expression.Comparison.optimize(Comparison.java:177)"
"org.h2.expression.ConditionAndOr.optimize(ConditionAndOr.java:131)"
"org.h2.command.dml.Select.prepare(Select.java:861)"
"org.h2.command.Parser.prepareCommand(Parser.java:283)"
"org.h2.engine.Session.prepareLocal(Session.java:611)"
"org.h2.engine.Session.prepareCommand(Session.java:549)"
"org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1247)"
"org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:76)"
"org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:1135)"
"com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:267)"
"--> driver.sql_jdbc.execute$fn__75975.invokeStatic(execute.clj:239)"
"driver.sql_jdbc.execute$fn__75975.invoke(execute.clj:237)"
"driver.sql_jdbc.execute$prepared_statement_STAR_.invokeStatic(execute.clj:257)"
"driver.sql_jdbc.execute$prepared_statement_STAR_.invoke(execute.clj:254)"
"driver.sql_jdbc.execute$execute_reducible_query.invokeStatic(execute.clj:387)"
"driver.sql_jdbc.execute$execute_reducible_query.invoke(execute.clj:377)"
"driver.sql_jdbc$fn__77370.invokeStatic(sql_jdbc.clj:49)"
"driver.sql_jdbc$fn__77370.invoke(sql_jdbc.clj:47)"
"driver.h2$fn__76184.invokeStatic(h2.clj:87)"
"driver.h2$fn__76184.invoke(h2.clj:84)"
"query_processor.context$executef.invokeStatic(context.clj:59)"
"query_processor.context$executef.invoke(context.clj:48)"
"query_processor.context.default$default_runf.invokeStatic(default.clj:69)"
"query_processor.context.default$default_runf.invoke(default.clj:67)"
"query_processor.context$runf.invokeStatic(context.clj:45)"
"query_processor.context$runf.invoke(context.clj:39)"
"query_processor.reducible$pivot.invokeStatic(reducible.clj:34)"
"query_processor.reducible$pivot.invoke(reducible.clj:31)"
"query_processor.middleware.mbql_to_native$mbql__GT_native$fn__45913.invoke(mbql_to_native.clj:26)"
"query_processor.middleware.check_features$check_features$fn__45189.invoke(check_features.clj:42)"
"query_processor.middleware.optimize_datetime_filters$optimize_datetime_filters$fn__46078.invoke(optimize_datetime_filters.clj:133)"
"query_processor.middleware.auto_parse_filter_values$auto_parse_filter_values$fn__43995.invoke(auto_parse_filter_values.clj:44)"
"query_processor.middleware.wrap_value_literals$wrap_value_literals$fn__47620.invoke(wrap_value_literals.clj:149)"
"query_processor.middleware.annotate$add_column_info$fn__43757.invoke(annotate.clj:574)"
"query_processor.middleware.permissions$check_query_permissions$fn__45064.invoke(permissions.clj:64)"
"query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__46596.invoke(pre_alias_aggregations.clj:40)"
"query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__45262.invoke(cumulative_aggregations.clj:61)"
"query_processor.middleware.resolve_joins$resolve_joins$fn__47128.invoke(resolve_joins.clj:183)"
"query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__39482.invoke(add_implicit_joins.clj:245)"
"query_processor.middleware.large_int_id$convert_id_to_string$fn__45874.invoke(large_int_id.clj:44)"
"query_processor.middleware.limit$limit$fn__45899.invoke(limit.clj:38)"
"query_processor.middleware.format_rows$format_rows$fn__45854.invoke(format_rows.clj:81)"
"query_processor.middleware.desugar$desugar$fn__45328.invoke(desugar.clj:22)"
"query_processor.middleware.binning$update_binning_strategy$fn__44354.invoke(binning.clj:229)"
"query_processor.middleware.resolve_fields$resolve_fields$fn__44870.invoke(resolve_fields.clj:24)"
"query_processor.middleware.add_dimension_projections$add_remapping$fn__39031.invoke(add_dimension_projections.clj:318)"
"query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__39238.invoke(add_implicit_clauses.clj:141)"
"query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__39631.invoke(add_source_metadata.clj:105)"
"query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__46793.invoke(reconcile_breakout_and_order_by_bucketing.clj:98)"
"query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__43942.invoke(auto_bucket_datetimes.clj:125)"
"query_processor.middleware.resolve_source_table$resolve_source_tables$fn__44917.invoke(resolve_source_table.clj:46)"
"query_processor.middleware.parameters$substitute_parameters$fn__46578.invoke(parameters.clj:114)"
"query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__44969.invoke(resolve_referenced.clj:80)"
"query_processor.middleware.expand_macros$expand_macros$fn__45584.invoke(expand_macros.clj:158)"
"query_processor.middleware.add_timezone_info$add_timezone_info$fn__39662.invoke(add_timezone_info.clj:15)"
"query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__47490.invoke(splice_params_in_response.clj:32)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__46804$fn__46808.invoke(resolve_database_and_driver.clj:33)"
"driver$do_with_driver.invokeStatic(driver.clj:61)"
"driver$do_with_driver.invoke(driver.clj:57)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__46804.invoke(resolve_database_and_driver.clj:27)"
"query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__45802.invoke(fetch_source_query.clj:267)"
"query_processor.middleware.store$initialize_store$fn__47499$fn__47500.invoke(store.clj:11)"
"query_processor.store$do_with_store.invokeStatic(store.clj:46)"
"query_processor.store$do_with_store.invoke(store.clj:40)"
"query_processor.middleware.store$initialize_store$fn__47499.invoke(store.clj:10)"
"query_processor.middleware.cache$maybe_return_cached_results$fn__44846.invoke(cache.clj:209)"
"query_processor.middleware.validate$validate_query$fn__47508.invoke(validate.clj:10)"
"query_processor.middleware.normalize_query$normalize$fn__45926.invoke(normalize_query.clj:22)"
"query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__39500.invoke(add_rows_truncated.clj:36)"
"query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__47475.invoke(results_metadata.clj:147)"
"query_processor.middleware.constraints$add_default_userland_constraints$fn__45205.invoke(constraints.clj:42)"
"query_processor.middleware.process_userland_query$process_userland_query$fn__46667.invoke(process_userland_query.clj:136)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__45148.invoke(catch_exceptions.clj:174)"
"query_processor.reducible$async_qp$qp_STAR___38294$thunk__38295.invoke(reducible.clj:101)"
"query_processor.reducible$async_qp$qp_STAR___38294.invoke(reducible.clj:107)"
"query_processor.reducible$sync_qp$qp_STAR___38303$fn__38306.invoke(reducible.clj:133)"
"query_processor.reducible$sync_qp$qp_STAR___38303.invoke(reducible.clj:132)"
"query_processor$process_userland_query.invokeStatic(query_processor.clj:217)"
"query_processor$process_userland_query.doInvoke(query_processor.clj:213)"
"query_processor$fn__47664$process_query_and_save_execution_BANG___47673$fn__47676.invoke(query_processor.clj:229)"
"query_processor$fn__47664$process_query_and_save_execution_BANG___47673.invoke(query_processor.clj:221)"
"query_processor$fn__47708$process_query_and_save_with_max_results_constraints_BANG___47717$fn__47720.invoke(query_processor.clj:241)"
"query_processor$fn__47708$process_query_and_save_with_max_results_constraints_BANG___47717.invoke(query_processor.clj:234)"
"api.dataset$fn__50999$fn__51002.invoke(dataset.clj:55)"
"query_processor.streaming$streaming_response_STAR_$fn__35706$fn__35707.invoke(streaming.clj:73)"
"query_processor.streaming$streaming_response_STAR_$fn__35706.invoke(streaming.clj:72)"
"async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:66)"
"async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:64)"
"async.streaming_response$do_f_async$fn__23303.invoke(streaming_response.clj:85)"],
:context :ad-hoc,
:error
"Column \"PUBLIC.PRODUCTS.CATEGORY\" not found; SQL statement:\n-- Metabase:: userID: 1 queryType: MBQL queryHash: 70c570549924907a07a133ec84dd1ba9c83bacae1c26cb4a06bb666b9cb7b7cc\nSELECT \"PUBLIC\".\"ORDERS\".\"ID\" AS \"ID\", \"PUBLIC\".\"ORDERS\".\"USER_ID\" AS \"USER_ID\", \"PUBLIC\".\"ORDERS\".\"PRODUCT_ID\" AS \"PRODUCT_ID\", \"PUBLIC\".\"ORDERS\".\"SUBTOTAL\" AS \"SUBTOTAL\", \"PUBLIC\".\"ORDERS\".\"TAX\" AS \"TAX\", \"PUBLIC\".\"ORDERS\".\"TOTAL\" AS \"TOTAL\", \"PUBLIC\".\"ORDERS\".\"DISCOUNT\" AS \"DISCOUNT\", \"PUBLIC\".\"ORDERS\".\"CREATED_AT\" AS \"CREATED_AT\", \"PUBLIC\".\"ORDERS\".\"QUANTITY\" AS \"QUANTITY\", \"Products\".\"ID\" AS \"ID_2\", \"Products\".\"EAN\" AS \"EAN\", \"Products\".\"TITLE\" AS \"TITLE\", \"Products\".\"CATEGORY\" AS \"CATEGORY\", \"Products\".\"VENDOR\" AS \"VENDOR\", \"Products\".\"PRICE\" AS \"PRICE\", \"Products\".\"RATING\" AS \"RATING\", \"Products\".\"CREATED_AT\" AS \"CREATED_AT_2\" FROM \"PUBLIC\".\"ORDERS\" LEFT JOIN \"PUBLIC\".\"PRODUCTS\" \"Products\" ON \"PUBLIC\".\"ORDERS\".\"PRODUCT_ID\" = \"Products\".\"ID\" WHERE (NOT (lower(\"Products\".\"CATEGORY\") like ?) OR \"PUBLIC\".\"PRODUCTS\".\"CATEGORY\" IS NULL) LIMIT 2000 [42122-197]",
:row_count 0,
:running_time 0,
:preprocessed
{:type :query,
:query
{:source-table 11,
:joins [{:strategy :left-join, :source-table 10, :condition [:= [:field-id 82] [:joined-field "Products" [:field-id 105]]], :alias "Products"}],
:filter
[:not
[:contains [:joined-field "Products" [:field-id 107]] [:value "Gizmo" {:base_type :type/Text, :special_type :type/Category, :database_type "VARCHAR", :name "CATEGORY"}] {:case-sensitive false}]],
:fields
[[:field-id 83]
[:field-id 80]
[:field-id 82]
[:field-id 84]
[:field-id 87]
[:field-id 88]
[:field-id 81]
[:datetime-field [:field-id 86] :default]
[:field-id 85]
[:joined-field "Products" [:field-id 105]]
[:joined-field "Products" [:field-id 102]]
[:joined-field "Products" [:field-id 106]]
[:joined-field "Products" [:field-id 107]]
[:joined-field "Products" [:field-id 109]]
[:joined-field "Products" [:field-id 104]]
[:joined-field "Products" [:field-id 103]]
[:datetime-field [:joined-field "Products" [:field-id 108]] :default]],
:limit 2000},
:database 4,
:middleware {:js-int-to-string? true, :add-default-userland-constraints? true},
:info
{:executed-by 1,
:context :ad-hoc,
:nested? false,
:query-hash [112, -59, 112, 84, -103, 36, -112, 122, 7, -95, 51, -20, -124, -35, 27, -87, -56, 59, -84, -82, 28, 38, -53, 74, 6, -69, 102, 107, -100, -73, -73, -52]},
:constraints {:max-results 10000, :max-results-bare-rows 2000}},
:data {:rows [], :cols []}}
2020-10-20 13:56:39,269 DEBUG middleware.log :: POST /api/dataset 202 [ASYNC: completed] 388.2 ms (22 DB calls) App DB connections: 0/7 Jetty threads: 3/50 (3 idle, 0 queued) (81 total active threads) Queries in flight: 0 (0 queued)
```
</details>
**Information about your Metabase Installation:**
Metabase 0.37.0-rc3
**Additional context**
Likely caused by #13477 (#13332)
|
1.0
|
0.37.0-rc3: Filtering a joined table column by "Is not" or "Does not contain" fails - **Describe the bug**
Filtering a joined table column by "Is not" or "Does not contain" fails because it's referencing the original table name, when doing the `null` check, instead of the aliased table name.
**To Reproduce**
1. Custom question > Sample Dataset > Orders
2. Join table Products
3. Filter by Product.Category "Is not" (or "Does not contain") `Gizmo`

4. Fails with `Column "PUBLIC.PRODUCTS.CATEGORY" not found` because the table reference should be the aliased name
<details><summary>Full stacktrace</summary>
```
2020-10-20 13:56:39,264 ERROR middleware.catch-exceptions :: Error processing query: null
{:database_id 4,
:started_at #t "2020-10-20T13:56:38.884429+02:00[Europe/Copenhagen]",
:state "42S22",
:json_query
{:type "query",
:query
{:source-table 11,
:joins [{:fields "all", :source-table 10, :condition ["=" ["field-id" 82] ["joined-field" "Products" ["field-id" 105]]], :alias "Products"}],
:filter ["does-not-contain" ["joined-field" "Products" ["field-id" 107]] "Gizmo" {:case-sensitive false}]},
:database 4,
:parameters [],
:middleware {:js-int-to-string? true, :add-default-userland-constraints? true}},
:native
{:query
"SELECT \"PUBLIC\".\"ORDERS\".\"ID\" AS \"ID\", \"PUBLIC\".\"ORDERS\".\"USER_ID\" AS \"USER_ID\", \"PUBLIC\".\"ORDERS\".\"PRODUCT_ID\" AS \"PRODUCT_ID\", \"PUBLIC\".\"ORDERS\".\"SUBTOTAL\" AS \"SUBTOTAL\", \"PUBLIC\".\"ORDERS\".\"TAX\" AS \"TAX\", \"PUBLIC\".\"ORDERS\".\"TOTAL\" AS \"TOTAL\", \"PUBLIC\".\"ORDERS\".\"DISCOUNT\" AS \"DISCOUNT\", \"PUBLIC\".\"ORDERS\".\"CREATED_AT\" AS \"CREATED_AT\", \"PUBLIC\".\"ORDERS\".\"QUANTITY\" AS \"QUANTITY\", \"Products\".\"ID\" AS \"ID_2\", \"Products\".\"EAN\" AS \"EAN\", \"Products\".\"TITLE\" AS \"TITLE\", \"Products\".\"CATEGORY\" AS \"CATEGORY\", \"Products\".\"VENDOR\" AS \"VENDOR\", \"Products\".\"PRICE\" AS \"PRICE\", \"Products\".\"RATING\" AS \"RATING\", \"Products\".\"CREATED_AT\" AS \"CREATED_AT_2\" FROM \"PUBLIC\".\"ORDERS\" LEFT JOIN \"PUBLIC\".\"PRODUCTS\" \"Products\" ON \"PUBLIC\".\"ORDERS\".\"PRODUCT_ID\" = \"Products\".\"ID\" WHERE (NOT (lower(\"Products\".\"CATEGORY\") like ?) OR \"PUBLIC\".\"PRODUCTS\".\"CATEGORY\" IS NULL) LIMIT 2000",
:params ("%gizmo%")},
:status :failed,
:class org.h2.jdbc.JdbcSQLException,
:stacktrace
["org.h2.message.DbException.getJdbcSQLException(DbException.java:357)"
"org.h2.message.DbException.get(DbException.java:179)"
"org.h2.message.DbException.get(DbException.java:155)"
"org.h2.expression.ExpressionColumn.optimize(ExpressionColumn.java:150)"
"org.h2.expression.Comparison.optimize(Comparison.java:177)"
"org.h2.expression.ConditionAndOr.optimize(ConditionAndOr.java:131)"
"org.h2.command.dml.Select.prepare(Select.java:861)"
"org.h2.command.Parser.prepareCommand(Parser.java:283)"
"org.h2.engine.Session.prepareLocal(Session.java:611)"
"org.h2.engine.Session.prepareCommand(Session.java:549)"
"org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1247)"
"org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:76)"
"org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:1135)"
"com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:267)"
"--> driver.sql_jdbc.execute$fn__75975.invokeStatic(execute.clj:239)"
"driver.sql_jdbc.execute$fn__75975.invoke(execute.clj:237)"
"driver.sql_jdbc.execute$prepared_statement_STAR_.invokeStatic(execute.clj:257)"
"driver.sql_jdbc.execute$prepared_statement_STAR_.invoke(execute.clj:254)"
"driver.sql_jdbc.execute$execute_reducible_query.invokeStatic(execute.clj:387)"
"driver.sql_jdbc.execute$execute_reducible_query.invoke(execute.clj:377)"
"driver.sql_jdbc$fn__77370.invokeStatic(sql_jdbc.clj:49)"
"driver.sql_jdbc$fn__77370.invoke(sql_jdbc.clj:47)"
"driver.h2$fn__76184.invokeStatic(h2.clj:87)"
"driver.h2$fn__76184.invoke(h2.clj:84)"
"query_processor.context$executef.invokeStatic(context.clj:59)"
"query_processor.context$executef.invoke(context.clj:48)"
"query_processor.context.default$default_runf.invokeStatic(default.clj:69)"
"query_processor.context.default$default_runf.invoke(default.clj:67)"
"query_processor.context$runf.invokeStatic(context.clj:45)"
"query_processor.context$runf.invoke(context.clj:39)"
"query_processor.reducible$pivot.invokeStatic(reducible.clj:34)"
"query_processor.reducible$pivot.invoke(reducible.clj:31)"
"query_processor.middleware.mbql_to_native$mbql__GT_native$fn__45913.invoke(mbql_to_native.clj:26)"
"query_processor.middleware.check_features$check_features$fn__45189.invoke(check_features.clj:42)"
"query_processor.middleware.optimize_datetime_filters$optimize_datetime_filters$fn__46078.invoke(optimize_datetime_filters.clj:133)"
"query_processor.middleware.auto_parse_filter_values$auto_parse_filter_values$fn__43995.invoke(auto_parse_filter_values.clj:44)"
"query_processor.middleware.wrap_value_literals$wrap_value_literals$fn__47620.invoke(wrap_value_literals.clj:149)"
"query_processor.middleware.annotate$add_column_info$fn__43757.invoke(annotate.clj:574)"
"query_processor.middleware.permissions$check_query_permissions$fn__45064.invoke(permissions.clj:64)"
"query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__46596.invoke(pre_alias_aggregations.clj:40)"
"query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__45262.invoke(cumulative_aggregations.clj:61)"
"query_processor.middleware.resolve_joins$resolve_joins$fn__47128.invoke(resolve_joins.clj:183)"
"query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__39482.invoke(add_implicit_joins.clj:245)"
"query_processor.middleware.large_int_id$convert_id_to_string$fn__45874.invoke(large_int_id.clj:44)"
"query_processor.middleware.limit$limit$fn__45899.invoke(limit.clj:38)"
"query_processor.middleware.format_rows$format_rows$fn__45854.invoke(format_rows.clj:81)"
"query_processor.middleware.desugar$desugar$fn__45328.invoke(desugar.clj:22)"
"query_processor.middleware.binning$update_binning_strategy$fn__44354.invoke(binning.clj:229)"
"query_processor.middleware.resolve_fields$resolve_fields$fn__44870.invoke(resolve_fields.clj:24)"
"query_processor.middleware.add_dimension_projections$add_remapping$fn__39031.invoke(add_dimension_projections.clj:318)"
"query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__39238.invoke(add_implicit_clauses.clj:141)"
"query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__39631.invoke(add_source_metadata.clj:105)"
"query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__46793.invoke(reconcile_breakout_and_order_by_bucketing.clj:98)"
"query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__43942.invoke(auto_bucket_datetimes.clj:125)"
"query_processor.middleware.resolve_source_table$resolve_source_tables$fn__44917.invoke(resolve_source_table.clj:46)"
"query_processor.middleware.parameters$substitute_parameters$fn__46578.invoke(parameters.clj:114)"
"query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__44969.invoke(resolve_referenced.clj:80)"
"query_processor.middleware.expand_macros$expand_macros$fn__45584.invoke(expand_macros.clj:158)"
"query_processor.middleware.add_timezone_info$add_timezone_info$fn__39662.invoke(add_timezone_info.clj:15)"
"query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__47490.invoke(splice_params_in_response.clj:32)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__46804$fn__46808.invoke(resolve_database_and_driver.clj:33)"
"driver$do_with_driver.invokeStatic(driver.clj:61)"
"driver$do_with_driver.invoke(driver.clj:57)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__46804.invoke(resolve_database_and_driver.clj:27)"
"query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__45802.invoke(fetch_source_query.clj:267)"
"query_processor.middleware.store$initialize_store$fn__47499$fn__47500.invoke(store.clj:11)"
"query_processor.store$do_with_store.invokeStatic(store.clj:46)"
"query_processor.store$do_with_store.invoke(store.clj:40)"
"query_processor.middleware.store$initialize_store$fn__47499.invoke(store.clj:10)"
"query_processor.middleware.cache$maybe_return_cached_results$fn__44846.invoke(cache.clj:209)"
"query_processor.middleware.validate$validate_query$fn__47508.invoke(validate.clj:10)"
"query_processor.middleware.normalize_query$normalize$fn__45926.invoke(normalize_query.clj:22)"
"query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__39500.invoke(add_rows_truncated.clj:36)"
"query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__47475.invoke(results_metadata.clj:147)"
"query_processor.middleware.constraints$add_default_userland_constraints$fn__45205.invoke(constraints.clj:42)"
"query_processor.middleware.process_userland_query$process_userland_query$fn__46667.invoke(process_userland_query.clj:136)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__45148.invoke(catch_exceptions.clj:174)"
"query_processor.reducible$async_qp$qp_STAR___38294$thunk__38295.invoke(reducible.clj:101)"
"query_processor.reducible$async_qp$qp_STAR___38294.invoke(reducible.clj:107)"
"query_processor.reducible$sync_qp$qp_STAR___38303$fn__38306.invoke(reducible.clj:133)"
"query_processor.reducible$sync_qp$qp_STAR___38303.invoke(reducible.clj:132)"
"query_processor$process_userland_query.invokeStatic(query_processor.clj:217)"
"query_processor$process_userland_query.doInvoke(query_processor.clj:213)"
"query_processor$fn__47664$process_query_and_save_execution_BANG___47673$fn__47676.invoke(query_processor.clj:229)"
"query_processor$fn__47664$process_query_and_save_execution_BANG___47673.invoke(query_processor.clj:221)"
"query_processor$fn__47708$process_query_and_save_with_max_results_constraints_BANG___47717$fn__47720.invoke(query_processor.clj:241)"
"query_processor$fn__47708$process_query_and_save_with_max_results_constraints_BANG___47717.invoke(query_processor.clj:234)"
"api.dataset$fn__50999$fn__51002.invoke(dataset.clj:55)"
"query_processor.streaming$streaming_response_STAR_$fn__35706$fn__35707.invoke(streaming.clj:73)"
"query_processor.streaming$streaming_response_STAR_$fn__35706.invoke(streaming.clj:72)"
"async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:66)"
"async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:64)"
"async.streaming_response$do_f_async$fn__23303.invoke(streaming_response.clj:85)"],
:context :ad-hoc,
:error
"Column \"PUBLIC.PRODUCTS.CATEGORY\" not found; SQL statement:\n-- Metabase:: userID: 1 queryType: MBQL queryHash: 70c570549924907a07a133ec84dd1ba9c83bacae1c26cb4a06bb666b9cb7b7cc\nSELECT \"PUBLIC\".\"ORDERS\".\"ID\" AS \"ID\", \"PUBLIC\".\"ORDERS\".\"USER_ID\" AS \"USER_ID\", \"PUBLIC\".\"ORDERS\".\"PRODUCT_ID\" AS \"PRODUCT_ID\", \"PUBLIC\".\"ORDERS\".\"SUBTOTAL\" AS \"SUBTOTAL\", \"PUBLIC\".\"ORDERS\".\"TAX\" AS \"TAX\", \"PUBLIC\".\"ORDERS\".\"TOTAL\" AS \"TOTAL\", \"PUBLIC\".\"ORDERS\".\"DISCOUNT\" AS \"DISCOUNT\", \"PUBLIC\".\"ORDERS\".\"CREATED_AT\" AS \"CREATED_AT\", \"PUBLIC\".\"ORDERS\".\"QUANTITY\" AS \"QUANTITY\", \"Products\".\"ID\" AS \"ID_2\", \"Products\".\"EAN\" AS \"EAN\", \"Products\".\"TITLE\" AS \"TITLE\", \"Products\".\"CATEGORY\" AS \"CATEGORY\", \"Products\".\"VENDOR\" AS \"VENDOR\", \"Products\".\"PRICE\" AS \"PRICE\", \"Products\".\"RATING\" AS \"RATING\", \"Products\".\"CREATED_AT\" AS \"CREATED_AT_2\" FROM \"PUBLIC\".\"ORDERS\" LEFT JOIN \"PUBLIC\".\"PRODUCTS\" \"Products\" ON \"PUBLIC\".\"ORDERS\".\"PRODUCT_ID\" = \"Products\".\"ID\" WHERE (NOT (lower(\"Products\".\"CATEGORY\") like ?) OR \"PUBLIC\".\"PRODUCTS\".\"CATEGORY\" IS NULL) LIMIT 2000 [42122-197]",
:row_count 0,
:running_time 0,
:preprocessed
{:type :query,
:query
{:source-table 11,
:joins [{:strategy :left-join, :source-table 10, :condition [:= [:field-id 82] [:joined-field "Products" [:field-id 105]]], :alias "Products"}],
:filter
[:not
[:contains [:joined-field "Products" [:field-id 107]] [:value "Gizmo" {:base_type :type/Text, :special_type :type/Category, :database_type "VARCHAR", :name "CATEGORY"}] {:case-sensitive false}]],
:fields
[[:field-id 83]
[:field-id 80]
[:field-id 82]
[:field-id 84]
[:field-id 87]
[:field-id 88]
[:field-id 81]
[:datetime-field [:field-id 86] :default]
[:field-id 85]
[:joined-field "Products" [:field-id 105]]
[:joined-field "Products" [:field-id 102]]
[:joined-field "Products" [:field-id 106]]
[:joined-field "Products" [:field-id 107]]
[:joined-field "Products" [:field-id 109]]
[:joined-field "Products" [:field-id 104]]
[:joined-field "Products" [:field-id 103]]
[:datetime-field [:joined-field "Products" [:field-id 108]] :default]],
:limit 2000},
:database 4,
:middleware {:js-int-to-string? true, :add-default-userland-constraints? true},
:info
{:executed-by 1,
:context :ad-hoc,
:nested? false,
:query-hash [112, -59, 112, 84, -103, 36, -112, 122, 7, -95, 51, -20, -124, -35, 27, -87, -56, 59, -84, -82, 28, 38, -53, 74, 6, -69, 102, 107, -100, -73, -73, -52]},
:constraints {:max-results 10000, :max-results-bare-rows 2000}},
:data {:rows [], :cols []}}
2020-10-20 13:56:39,269 DEBUG middleware.log :: POST /api/dataset 202 [ASYNC: completed] 388.2 ms (22 DB calls) App DB connections: 0/7 Jetty threads: 3/50 (3 idle, 0 queued) (81 total active threads) Queries in flight: 0 (0 queued)
```
</details>
**Information about your Metabase Installation:**
Metabase 0.37.0-rc3
**Additional context**
Likely caused by #13477 (#13332)
|
process
|
filtering a joined table column by is not or does not contain fails describe the bug filtering a joined table column by is not or does not contain fails because it s referencing the original table name when doing the null check instead of the aliased table name to reproduce custom question sample dataset orders join table products filter by product category is not or does not contain gizmo fails with column public products category not found because the table reference should be the aliased name full stacktrace error middleware catch exceptions error processing query null database id started at t state json query type query query source table joins alias products filter gizmo case sensitive false database parameters middleware js int to string true add default userland constraints true native query select public orders id as id public orders user id as user id public orders product id as product id public orders subtotal as subtotal public orders tax as tax public orders total as total public orders discount as discount public orders created at as created at public orders quantity as quantity products id as id products ean as ean products title as title products category as category products vendor as vendor products price as price products rating as rating products created at as created at from public orders left join public products products on public orders product id products id where not lower products category like or public products category is null limit params gizmo status failed class org jdbc jdbcsqlexception stacktrace org message dbexception getjdbcsqlexception dbexception java org message dbexception get dbexception java org message dbexception get dbexception java org expression expressioncolumn optimize expressioncolumn java org expression comparison optimize comparison java org expression conditionandor optimize conditionandor java org command dml select prepare select java org command parser preparecommand parser java org engine session preparelocal session java org engine session preparecommand session java org jdbc jdbcconnection preparecommand jdbcconnection java org jdbc jdbcpreparedstatement jdbcpreparedstatement java org jdbc jdbcconnection preparestatement jdbcconnection java com mchange impl newproxyconnection preparestatement newproxyconnection java driver sql jdbc execute fn invokestatic execute clj driver sql jdbc execute fn invoke execute clj driver sql jdbc execute prepared statement star invokestatic execute clj driver sql jdbc execute prepared statement star invoke execute clj driver sql jdbc execute execute reducible query invokestatic execute clj driver sql jdbc execute execute reducible query invoke execute clj driver sql jdbc fn invokestatic sql jdbc clj driver sql jdbc fn invoke sql jdbc clj driver fn invokestatic clj driver fn invoke clj query processor context executef invokestatic context clj query processor context executef invoke context clj query processor context default default runf invokestatic default clj query processor context default default runf invoke default clj query processor context runf invokestatic context clj query processor context runf invoke context clj query processor reducible pivot invokestatic reducible clj query processor reducible pivot invoke reducible clj query processor middleware mbql to native mbql gt native fn invoke mbql to native clj query processor middleware check features check features fn invoke check features clj query processor middleware optimize datetime filters optimize datetime filters fn invoke optimize datetime filters clj query processor middleware auto parse filter values auto parse filter values fn invoke auto parse filter values clj query processor middleware wrap value literals wrap value literals fn invoke wrap value literals clj query processor middleware annotate add column info fn invoke annotate clj query processor middleware permissions check query permissions fn invoke permissions clj query processor middleware pre alias aggregations pre alias aggregations fn invoke pre alias aggregations clj query processor middleware cumulative aggregations handle cumulative aggregations fn invoke cumulative aggregations clj query processor middleware resolve joins resolve joins fn invoke resolve joins clj query processor middleware add implicit joins add implicit joins fn invoke add implicit joins clj query processor middleware large int id convert id to string fn invoke large int id clj query processor middleware limit limit fn invoke limit clj query processor middleware format rows format rows fn invoke format rows clj query processor middleware desugar desugar fn invoke desugar clj query processor middleware binning update binning strategy fn invoke binning clj query processor middleware resolve fields resolve fields fn invoke resolve fields clj query processor middleware add dimension projections add remapping fn invoke add dimension projections clj query processor middleware add implicit clauses add implicit clauses fn invoke add implicit clauses clj query processor middleware add source metadata add source metadata for source queries fn invoke add source metadata clj query processor middleware reconcile breakout and order by bucketing reconcile breakout and order by bucketing fn invoke reconcile breakout and order by bucketing clj query processor middleware auto bucket datetimes auto bucket datetimes fn invoke auto bucket datetimes clj query processor middleware resolve source table resolve source tables fn invoke resolve source table clj query processor middleware parameters substitute parameters fn invoke parameters clj query processor middleware resolve referenced resolve referenced card resources fn invoke resolve referenced clj query processor middleware expand macros expand macros fn invoke expand macros clj query processor middleware add timezone info add timezone info fn invoke add timezone info clj query processor middleware splice params in response splice params in response fn invoke splice params in response clj query processor middleware resolve database and driver resolve database and driver fn fn invoke resolve database and driver clj driver do with driver invokestatic driver clj driver do with driver invoke driver clj query processor middleware resolve database and driver resolve database and driver fn invoke resolve database and driver clj query processor middleware fetch source query resolve card id source tables fn invoke fetch source query clj query processor middleware store initialize store fn fn invoke store clj query processor store do with store invokestatic store clj query processor store do with store invoke store clj query processor middleware store initialize store fn invoke store clj query processor middleware cache maybe return cached results fn invoke cache clj query processor middleware validate validate query fn invoke validate clj query processor middleware normalize query normalize fn invoke normalize query clj query processor middleware add rows truncated add rows truncated fn invoke add rows truncated clj query processor middleware results metadata record and return metadata bang fn invoke results metadata clj query processor middleware constraints add default userland constraints fn invoke constraints clj query processor middleware process userland query process userland query fn invoke process userland query clj query processor middleware catch exceptions catch exceptions fn invoke catch exceptions clj query processor reducible async qp qp star thunk invoke reducible clj query processor reducible async qp qp star invoke reducible clj query processor reducible sync qp qp star fn invoke reducible clj query processor reducible sync qp qp star invoke reducible clj query processor process userland query invokestatic query processor clj query processor process userland query doinvoke query processor clj query processor fn process query and save execution bang fn invoke query processor clj query processor fn process query and save execution bang invoke query processor clj query processor fn process query and save with max results constraints bang fn invoke query processor clj query processor fn process query and save with max results constraints bang invoke query processor clj api dataset fn fn invoke dataset clj query processor streaming streaming response star fn fn invoke streaming clj query processor streaming streaming response star fn invoke streaming clj async streaming response do f star invokestatic streaming response clj async streaming response do f star invoke streaming response clj async streaming response do f async fn invoke streaming response clj context ad hoc error column public products category not found sql statement n metabase userid querytype mbql queryhash nselect public orders id as id public orders user id as user id public orders product id as product id public orders subtotal as subtotal public orders tax as tax public orders total as total public orders discount as discount public orders created at as created at public orders quantity as quantity products id as id products ean as ean products title as title products category as category products vendor as vendor products price as price products rating as rating products created at as created at from public orders left join public products products on public orders product id products id where not lower products category like or public products category is null limit row count running time preprocessed type query query source table joins alias products filter not case sensitive false fields default default limit database middleware js int to string true add default userland constraints true info executed by context ad hoc nested false query hash constraints max results max results bare rows data rows cols debug middleware log post api dataset ms db calls app db connections jetty threads idle queued total active threads queries in flight queued information about your metabase installation metabase additional context likely caused by
| 1
|
54,870
| 6,415,282,742
|
IssuesEvent
|
2017-08-08 12:26:07
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
opened
|
Tool for reproduce errors
|
!IMPORTANT! AREA: testing
|
Often customer report issue with error and cann't provide the public url for reproduce.
We need to create a way (separate tool or build-in feature) to record request and responses and play it on our side.
|
1.0
|
Tool for reproduce errors - Often customer report issue with error and cann't provide the public url for reproduce.
We need to create a way (separate tool or build-in feature) to record request and responses and play it on our side.
|
non_process
|
tool for reproduce errors often customer report issue with error and cann t provide the public url for reproduce we need to create a way separate tool or build in feature to record request and responses and play it on our side
| 0
|
18,662
| 24,581,748,544
|
IssuesEvent
|
2022-10-13 16:07:43
|
gitpod-io/gitpod
|
https://api.github.com/repos/gitpod-io/gitpod
|
opened
|
Using dotfiles to Append to ~/.bashrc Causes Workspace to Stop Immediately After Starting When VS Code Desktop Is Selected
|
type: bug meta: never-stale feature: dotfiles aspect: desktop IDE aspect: browser IDE aspect: error-handling aspect: gitpod loading process
|
### Bug description
When using the dotfiles feature AND attempting to add a new file into `$HOME/.bashrc.d/` OR appending to `$HOME/.bashrc` AND having your IDE preference selected to VS Code Desktop, the workspace is unable to start up. The workspace seems to go through the Initializing Content step, but then stops.
This video shows me trying to start a workspace with VS Code Desktop selected under Settings > Preferences:
https://www.loom.com/share/f45019e94b2a48cab4c063e2a96563a1
This video shows me trying to start a workspace with VS Code Browser selected:
https://www.loom.com/share/f2ca8c85d0d04162958ebb111e3fa95d
As you can see in the former video the workspace just stops. The later video shows the workspace coming up in my web browser.
### Steps to reproduce
1. Create a new repository on GitHub.
2. Within your new repository create a file `install.sh` with the following content:
```
#!/bin/bash
brew install starship
echo 'eval "$(starship init bash)"' >> $HOME/.bashrc
```
3. Make sure to make your `install.sh` file executable (ex. `chmod +x ./install.sh`).
4. Go to https://gitpod.io/preferences and select VS Code as your IDE.
5. Scroll down and input the URL of your new dotfiles repo into the 'Repository URL' under the Dotfiles section.
6. Click on the 'Save Changes' button.
7. Now navigate to https://gitpod.io/workspaces and open a new workspace.
8. Observe the workspace starting up and then stopping.
### Workspace affected
_No response_
### Expected behavior
I would expect that the workspace start up and prompt me to open in VS Code and if it fails too tell me why it's not and instead stopping.
### Example repository
You can use my dotfiles repo if you'd like - https://github.com/jimmybrancaccio/starship-dotfiles/blob/main/install.sh
This is the test repo I am using to create my workspace from - https://github.com/jimmybrancaccio/gitpod-test
### Anything else?
_No response_
|
1.0
|
Using dotfiles to Append to ~/.bashrc Causes Workspace to Stop Immediately After Starting When VS Code Desktop Is Selected - ### Bug description
When using the dotfiles feature AND attempting to add a new file into `$HOME/.bashrc.d/` OR appending to `$HOME/.bashrc` AND having your IDE preference selected to VS Code Desktop, the workspace is unable to start up. The workspace seems to go through the Initializing Content step, but then stops.
This video shows me trying to start a workspace with VS Code Desktop selected under Settings > Preferences:
https://www.loom.com/share/f45019e94b2a48cab4c063e2a96563a1
This video shows me trying to start a workspace with VS Code Browser selected:
https://www.loom.com/share/f2ca8c85d0d04162958ebb111e3fa95d
As you can see in the former video the workspace just stops. The later video shows the workspace coming up in my web browser.
### Steps to reproduce
1. Create a new repository on GitHub.
2. Within your new repository create a file `install.sh` with the following content:
```
#!/bin/bash
brew install starship
echo 'eval "$(starship init bash)"' >> $HOME/.bashrc
```
3. Make sure to make your `install.sh` file executable (ex. `chmod +x ./install.sh`).
4. Go to https://gitpod.io/preferences and select VS Code as your IDE.
5. Scroll down and input the URL of your new dotfiles repo into the 'Repository URL' under the Dotfiles section.
6. Click on the 'Save Changes' button.
7. Now navigate to https://gitpod.io/workspaces and open a new workspace.
8. Observe the workspace starting up and then stopping.
### Workspace affected
_No response_
### Expected behavior
I would expect that the workspace start up and prompt me to open in VS Code and if it fails too tell me why it's not and instead stopping.
### Example repository
You can use my dotfiles repo if you'd like - https://github.com/jimmybrancaccio/starship-dotfiles/blob/main/install.sh
This is the test repo I am using to create my workspace from - https://github.com/jimmybrancaccio/gitpod-test
### Anything else?
_No response_
|
process
|
using dotfiles to append to bashrc causes workspace to stop immediately after starting when vs code desktop is selected bug description when using the dotfiles feature and attempting to add a new file into home bashrc d or appending to home bashrc and having your ide preference selected to vs code desktop the workspace is unable to start up the workspace seems to go through the initializing content step but then stops this video shows me trying to start a workspace with vs code desktop selected under settings preferences this video shows me trying to start a workspace with vs code browser selected as you can see in the former video the workspace just stops the later video shows the workspace coming up in my web browser steps to reproduce create a new repository on github within your new repository create a file install sh with the following content bin bash brew install starship echo eval starship init bash home bashrc make sure to make your install sh file executable ex chmod x install sh go to and select vs code as your ide scroll down and input the url of your new dotfiles repo into the repository url under the dotfiles section click on the save changes button now navigate to and open a new workspace observe the workspace starting up and then stopping workspace affected no response expected behavior i would expect that the workspace start up and prompt me to open in vs code and if it fails too tell me why it s not and instead stopping example repository you can use my dotfiles repo if you d like this is the test repo i am using to create my workspace from anything else no response
| 1
|
732,936
| 25,281,420,617
|
IssuesEvent
|
2022-11-16 16:02:15
|
mebjas/html5-qrcode
|
https://api.github.com/repos/mebjas/html5-qrcode
|
closed
|
UPC isn’t detected
|
Priority1
|
**Describe the bug**
UPC-A and UPC-E codes doesn't seem to be detected even when configured to
I've tried both my own implementation of Html5Qrcode and with you example page. Tried your example UPC pictures, several codes on actual product boxes, and picture from Google search. None are detected.
Other barcodes such as EAN work.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Smartphone (please complete the following information):**
- Device: iPhone Mini 12 and iPhone 7
- OS: iOS 15 and iOS 14
- Browser: Safari
**Additional context**



|
1.0
|
UPC isn’t detected - **Describe the bug**
UPC-A and UPC-E codes doesn't seem to be detected even when configured to
I've tried both my own implementation of Html5Qrcode and with you example page. Tried your example UPC pictures, several codes on actual product boxes, and picture from Google search. None are detected.
Other barcodes such as EAN work.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Smartphone (please complete the following information):**
- Device: iPhone Mini 12 and iPhone 7
- OS: iOS 15 and iOS 14
- Browser: Safari
**Additional context**



|
non_process
|
upc isn’t detected describe the bug upc a and upc e codes doesn t seem to be detected even when configured to i ve tried both my own implementation of and with you example page tried your example upc pictures several codes on actual product boxes and picture from google search none are detected other barcodes such as ean work screenshots if applicable add screenshots to help explain your problem smartphone please complete the following information device iphone mini and iphone os ios and ios browser safari additional context
| 0
|
10,021
| 13,043,925,562
|
IssuesEvent
|
2020-07-29 03:04:15
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `JsonKeysSig` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `JsonKeysSig` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `JsonKeysSig` from TiDB -
## Description
Port the scalar function `JsonKeysSig` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function jsonkeyssig from tidb description port the scalar function jsonkeyssig from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
15,894
| 20,092,746,685
|
IssuesEvent
|
2022-02-06 02:21:25
|
plazi/community
|
https://api.github.com/repos/plazi/community
|
closed
|
to be processed: 10.1111/syen.12521
|
process request
|
one more tonight that is in the news...
[systEnt.47.94-112.pdf](https://github.com/plazi/community/files/7935846/systEnt.47.94-112.pdf)
low level
tx
|
1.0
|
to be processed: 10.1111/syen.12521 - one more tonight that is in the news...
[systEnt.47.94-112.pdf](https://github.com/plazi/community/files/7935846/systEnt.47.94-112.pdf)
low level
tx
|
process
|
to be processed syen one more tonight that is in the news low level tx
| 1
|
11,222
| 14,003,700,501
|
IssuesEvent
|
2020-10-28 16:11:32
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
use share_memory_, Segmentation fault (core dumped)
|
high priority module: crash module: multiprocessing triaged
|
when I use share_memory_(), when append 65080, core dumped
```
import torch
if __name__ == '__main__':
buffers = []
for _ in range(65535):
print (_)
buffers.append(torch.empty(1, 1).share_memory_())
```
cc @ezyang @gchanan @zou3519 @bdhirsh @heitorschueroff
|
1.0
|
use share_memory_, Segmentation fault (core dumped) - when I use share_memory_(), when append 65080, core dumped
```
import torch
if __name__ == '__main__':
buffers = []
for _ in range(65535):
print (_)
buffers.append(torch.empty(1, 1).share_memory_())
```
cc @ezyang @gchanan @zou3519 @bdhirsh @heitorschueroff
|
process
|
use share memory segmentation fault core dumped when i use share memory when append core dumped import torch if name main buffers for in range print buffers append torch empty share memory cc ezyang gchanan bdhirsh heitorschueroff
| 1
|
164,456
| 20,364,484,121
|
IssuesEvent
|
2022-02-21 02:53:27
|
directoryxx/Laravel-UUID-CRUD-Login
|
https://api.github.com/repos/directoryxx/Laravel-UUID-CRUD-Login
|
opened
|
CVE-2022-0639 (Medium) detected in url-parse-1.4.7.tgz
|
security vulnerability
|
## CVE-2022-0639 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /Laravel-UUID-CRUD-Login/package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-4.1.2.tgz (Root Library)
- webpack-dev-server-3.7.2.tgz
- sockjs-client-1.3.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.7.
<p>Publish Date: 2022-02-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0639>CVE-2022-0639</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639</a></p>
<p>Release Date: 2022-02-17</p>
<p>Fix Resolution: url-parse - 1.5.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-0639 (Medium) detected in url-parse-1.4.7.tgz - ## CVE-2022-0639 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /Laravel-UUID-CRUD-Login/package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-4.1.2.tgz (Root Library)
- webpack-dev-server-3.7.2.tgz
- sockjs-client-1.3.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.7.
<p>Publish Date: 2022-02-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0639>CVE-2022-0639</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639</a></p>
<p>Release Date: 2022-02-17</p>
<p>Fix Resolution: url-parse - 1.5.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in url parse tgz cve medium severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file laravel uuid crud login package json path to vulnerable library node modules url parse package json dependency hierarchy laravel mix tgz root library webpack dev server tgz sockjs client tgz x url parse tgz vulnerable library vulnerability details authorization bypass through user controlled key in npm url parse prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse step up your open source security game with whitesource
| 0
|
2,364
| 5,166,548,613
|
IssuesEvent
|
2017-01-17 16:29:10
|
Alfresco/alfresco-ng2-components
|
https://api.github.com/repos/Alfresco/alfresco-ng2-components
|
opened
|
When there are not process definitions user should be notified when starting a new process.
|
browser: all bug comp: activiti-processList
|
1. Go to Processes.
2. Start a Process.
**Expected Results**
User should be notified that is unable to start a process when there are not available process definitions.
<img width="470" alt="screen shot 2017-01-17 at 16 07 55" src="https://cloud.githubusercontent.com/assets/24432311/22029104/67977590-dcd1-11e6-9ceb-91c79935168d.png">
**Actual Results**
User doesn't get notified and the "Start Process" window is available.
<img width="764" alt="screen shot 2017-01-17 at 16 07 38" src="https://cloud.githubusercontent.com/assets/24432311/22029198/b6040d10-dcd1-11e6-94bd-00d5cb32870f.png">
|
1.0
|
When there are not process definitions user should be notified when starting a new process. - 1. Go to Processes.
2. Start a Process.
**Expected Results**
User should be notified that is unable to start a process when there are not available process definitions.
<img width="470" alt="screen shot 2017-01-17 at 16 07 55" src="https://cloud.githubusercontent.com/assets/24432311/22029104/67977590-dcd1-11e6-9ceb-91c79935168d.png">
**Actual Results**
User doesn't get notified and the "Start Process" window is available.
<img width="764" alt="screen shot 2017-01-17 at 16 07 38" src="https://cloud.githubusercontent.com/assets/24432311/22029198/b6040d10-dcd1-11e6-94bd-00d5cb32870f.png">
|
process
|
when there are not process definitions user should be notified when starting a new process go to processes start a process expected results user should be notified that is unable to start a process when there are not available process definitions img width alt screen shot at src actual results user doesn t get notified and the start process window is available img width alt screen shot at src
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.