Unnamed: 0 (int64, 0 to 832k)
| id (float64, 2.49B to 32.1B)
| type (string, 1 class)
| created_at (string, length 19)
| repo (string, length 7 to 112)
| repo_url (string, length 36 to 141)
| action (string, 3 classes)
| title (string, length 1 to 744)
| labels (string, length 4 to 574)
| body (string, length 9 to 211k)
| index (string, 10 classes)
| text_combine (string, length 96 to 211k)
| label (string, 2 classes)
| text (string, length 96 to 188k)
| binary_label (int64, 0 to 1)
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8,824
| 11,938,361,421
|
IssuesEvent
|
2020-04-02 13:40:43
|
Altinn/altinn-studio
|
https://api.github.com/repos/Altinn/altinn-studio
|
closed
|
Implement support of feedback task in BPMN process
|
area/process kind/analysis kind/user-story org/skd/sirius
|
## Description
The current hypothesis is that we will have a feedback task in the BPMN process for the Sirius scenario.
We need to implement support for this.
@Febakke has created a [prototype](https://www.figma.com/proto/kq3yTAS4vifQgrNeTjL5d9/Sprintleveranser-2020?node-id=15%3A3&viewport=1357%2C-432%2C0.17018935084342957&scaling=min-zoom). This shows the first draft of the feedback view.

What happens here is that the view listens for changes on the instance; when feedback is given, the service owner changes the state on the instance so that a new view is loaded. (There should not be any binding to what that view is; in the Sirius case it would be the end state.)
## Requirements
- The org should be able to upload feedback documentation to the instance
- The feedback documents should be listed differently from regular documents (TODO: Verify this requirement)
- It should be possible to have separate authorization rules for the feedback step.
- The agency should be able to move the process on to the next step
## Consideration
- We would probably need to present some text to inform the users. This text should be configurable
- What other changes are needed for the data type? Should we support hidden feedback?
## Task:
- [x] Implement support for the feedback task in the backend. (Need to define the next action; I am not sure whether more is needed.)
- [x] Implement the feedback view. This should refresh state for some time. (Define the interval and timeout.)
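The refresh behaviour in the last task (poll the instance until its state changes, with an interval and timeout still to be defined) could be sketched roughly like this; `fetch_state`, `interval`, and `timeout` are placeholder names for illustration, not part of any existing API:

```python
import time

def wait_for_state_change(fetch_state, initial_state, interval=2.0, timeout=60.0):
    """Poll fetch_state() until it differs from initial_state or the timeout expires.

    Returns the new state, or None if the timeout was reached.
    Both interval and timeout are placeholders for values still to be defined.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = fetch_state()
        if state != initial_state:
            return state
        time.sleep(interval)
    return None
```

In the Sirius case, the caller would load the next view once a non-None state (e.g. the end state) is returned, and show the configurable waiting text otherwise.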
|
1.0
|
Implement support of feedback task in BPMN process - ## Description
The current hypothesis is that we will have a feedback task in the BPMN process for the Sirius scenario.
We need to implement support for this.
@Febakke has created a [prototype](https://www.figma.com/proto/kq3yTAS4vifQgrNeTjL5d9/Sprintleveranser-2020?node-id=15%3A3&viewport=1357%2C-432%2C0.17018935084342957&scaling=min-zoom). This shows the first draft of the feedback view.

What happens here is that the view listens for changes on the instance; when feedback is given, the service owner changes the state on the instance so that a new view is loaded. (There should not be any binding to what that view is; in the Sirius case it would be the end state.)
## Requirements
- The org should be able to upload feedback documentation to the instance
- The feedback documents should be listed differently from regular documents (TODO: Verify this requirement)
- It should be possible to have separate authorization rules for the feedback step.
- The agency should be able to move the process on to the next step
## Consideration
- We would probably need to present some text to inform the users. This text should be configurable
- What other changes are needed for the data type? Should we support hidden feedback?
## Task:
- [x] Implement support for the feedback task in the backend. (Need to define the next action; I am not sure whether more is needed.)
- [x] Implement the feedback view. This should refresh state for some time. (Define the interval and timeout.)
|
process
|
implement support of feedback task in bpmn process description the hypothesis now is that we will have a feedback task in the bpmn process for the sirius scenario we would need to implement support for this febakke have created a this show the first draft of the feedback view what happens here is that the view listens to changes on the instance and when feedback is given the service owner would change state on instance so that a new view is loaded there should not be any binding to what that view is in the sirius case this would be end state requirements feedback documentation should be able to be uploaded to the instance by the org the feedback documents should be listed different than regular documents todo verify this requirement it should be possible to have separate authorization rules for the feedback step agency should be able to send this process further in the process consideration we would probably need to present som text to inform the users this text should be configurable what other changes is needed for datatype should we support hidden feedback task implement support of feedback task in backend need to define the next action i am not sure if there is need for more implement view of feedback this should refresh state for some time define interval and timeout
| 1
|
17,204
| 22,779,893,569
|
IssuesEvent
|
2022-07-08 18:25:35
|
microsoft/react-native-windows
|
https://api.github.com/repos/microsoft/react-native-windows
|
closed
|
0.69 Release Status
|
enhancement Area: Release Process
|
## Checklist
**Before Preview**
- [x] Draft GitHub release notes from commit log (chiaramooney)
- [x] Promote canary build to preview using [wiki instructions](https://github.com/microsoft/react-native-windows/wiki/How-to-promote-a-release) (chiaramooney)
- [x] Push build to stable branch (chiaramooney)
- [x] Enable CI schedule for new branch of [CI pipeline](https://dev.azure.com/ms/react-native-windows/_apps/hub/ms.vss-ciworkflow.build-ci-hub?_a=edit-build-definition&id=468&view=Tab_Triggers)
- [x] Update [dashboard @ms](https://dev.azure.com/ms/react-native-windows/_dashboards/dashboard/28deb05d-f5bb-43e6-8aa9-36ad5e5476fb) with an entry for `CI ${version}`
- [x] Add release schedule for the new stable branch of [publish pipeline](https://dev.azure.com/microsoft/ReactNative/_apps/hub/ms.vss-ciworkflow.build-ci-hub?_a=edit-build-definition&id=63081&view=Tab_Triggers)
- [x] Update [dashboard @microsoft](https://dev.azure.com/microsoft/ReactNative/_dashboards/dashboard/8ea77a11-83e2-493a-873f-9cd4562f213d) with an entry for `Publish ${version}`
- [x] Update GitHub release notes to use manually curated notes instead of a changelog (chiaramooney)
- [x] Post release notes internally (chiaramooney)
-----
**After Preview**
- [x] Move most issues targeting current release (chrisglein)
- [x] Test updated gallery app using [wiki instructions](https://github.com/microsoft/react-native-gallery/wiki/Manual-Validation-Steps-for-RNW-Release) (@TatianaKapos)
- [x] Check [CI Runs](https://github.com/microsoft/react-native-windows-samples/actions?query=workflow:*(Upgrade)) for Upgrading Sample Apps (@jonthysell )
- [x] Snap Hermes-Windows release (@mganandraj)
- [x] Do a pass on API Docs using [wiki instructions](https://github.com/microsoft/react-native-windows/wiki/API-documentation#validating-api-docs-for-a-release) (@chiaramooney)
- [x] Integrate any applicable patch/prerelease releases for React Native (@chiaramooney )
-----
**Before Release**
- [x] Ensure doc issues are addressed (chiaramooney) - Checked waiting on https://github.com/microsoft/react-native-windows-samples/pull/709
- [x] Promote `latest` build to `legacy` using [wiki instructions](https://github.com/microsoft/react-native-windows/wiki/How-to-promote-a-release) (chiaramooney)
-----
**Release**
- [x] Update preview release notes with any changes from cherry-picked PRs (chiaramooney)
- [x] Update samples (@TatianaKapos)
- [x] Update React Native Gallery and Publish (chiaramooney)
- [x] Promote `preview` build to `latest` using [wiki instructions](https://github.com/microsoft/react-native-windows/wiki/How-to-promote-a-release) (chiaramooney)
- [x] Update GitHub release notes to use manually curated notes instead of a changelog (chiaramooney)
- [x] Update website (chiaramooney)
- [x] Send out internal release announcement (chiaramooney)
- [x] Update CI to use /apiVersion 0.XX (chiaramooney) -- After Website Updated
|
1.0
|
0.69 Release Status - ## Checklist
**Before Preview**
- [x] Draft GitHub release notes from commit log (chiaramooney)
- [x] Promote canary build to preview using [wiki instructions](https://github.com/microsoft/react-native-windows/wiki/How-to-promote-a-release) (chiaramooney)
- [x] Push build to stable branch (chiaramooney)
- [x] Enable CI schedule for new branch of [CI pipeline](https://dev.azure.com/ms/react-native-windows/_apps/hub/ms.vss-ciworkflow.build-ci-hub?_a=edit-build-definition&id=468&view=Tab_Triggers)
- [x] Update [dashboard @ms](https://dev.azure.com/ms/react-native-windows/_dashboards/dashboard/28deb05d-f5bb-43e6-8aa9-36ad5e5476fb) with an entry for `CI ${version}`
- [x] Add release schedule for the new stable branch of [publish pipeline](https://dev.azure.com/microsoft/ReactNative/_apps/hub/ms.vss-ciworkflow.build-ci-hub?_a=edit-build-definition&id=63081&view=Tab_Triggers)
- [x] Update [dashboard @microsoft](https://dev.azure.com/microsoft/ReactNative/_dashboards/dashboard/8ea77a11-83e2-493a-873f-9cd4562f213d) with an entry for `Publish ${version}`
- [x] Update GitHub release notes to use manually curated notes instead of a changelog (chiaramooney)
- [x] Post release notes internally (chiaramooney)
-----
**After Preview**
- [x] Move most issues targeting current release (chrisglein)
- [x] Test updated gallery app using [wiki instructions](https://github.com/microsoft/react-native-gallery/wiki/Manual-Validation-Steps-for-RNW-Release) (@TatianaKapos)
- [x] Check [CI Runs](https://github.com/microsoft/react-native-windows-samples/actions?query=workflow:*(Upgrade)) for Upgrading Sample Apps (@jonthysell )
- [x] Snap Hermes-Windows release (@mganandraj)
- [x] Do a pass on API Docs using [wiki instructions](https://github.com/microsoft/react-native-windows/wiki/API-documentation#validating-api-docs-for-a-release) (@chiaramooney)
- [x] Integrate any applicable patch/prerelease releases for React Native (@chiaramooney )
-----
**Before Release**
- [x] Ensure doc issues are addressed (chiaramooney) - Checked waiting on https://github.com/microsoft/react-native-windows-samples/pull/709
- [x] Promote `latest` build to `legacy` using [wiki instructions](https://github.com/microsoft/react-native-windows/wiki/How-to-promote-a-release) (chiaramooney)
-----
**Release**
- [x] Update preview release notes with any changes from cherry-picked PRs (chiaramooney)
- [x] Update samples (@TatianaKapos)
- [x] Update React Native Gallery and Publish (chiaramooney)
- [x] Promote `preview` build to `latest` using [wiki instructions](https://github.com/microsoft/react-native-windows/wiki/How-to-promote-a-release) (chiaramooney)
- [x] Update GitHub release notes to use manually curated notes instead of a changelog (chiaramooney)
- [x] Update website (chiaramooney)
- [x] Send out internal release announcement (chiaramooney)
- [x] Update CI to use /apiVersion 0.XX (chiaramooney) -- After Website Updated
|
process
|
release status checklist before preview draft github release notes from commit log chiaramooney promote canary build to preview using chiaramooney push build to stable branch chiaramooney enable ci schedule for new branch of update with an entry for ci version add release schedule for the new stable branch of update with an entry for publish version update github release notes to use manually curated notes instead of a changelog chiaramooney post release notes internally chiaramooney after preview move most issues targeting current release chrisglein test updated gallery app using tatianakapos check for upgrading sample apps jonthysell snap hermes windows release mganandraj do a pass on api docs using chiaramooney integrate any applicable patch prerelease releases for react native chiaramooney before release ensure doc issues are addressed chiaramooney checked waiting on promote latest build to legacy using chiaramooney release update preview release notes with any changes from cherry picked prs chiaramooney update samples tatianakapos update react native gallery and publish chiaramooney promote preview build to latest using chiaramooney update github release notes to use manually curated notes instead of a changelog chiaramooney update website chiaramooney send out internal release announcement chiaramooney update ci to use apiversion xx chiaramooney after website updated
| 1
|
33,227
| 7,680,852,989
|
IssuesEvent
|
2018-05-16 04:14:50
|
SkygearIO/features
|
https://api.github.com/repos/SkygearIO/features
|
opened
|
Improve syntax of relational query in client SDK
|
area/Cloud DB p/2 require/code require/guides require/spec section/Core
|
# Description
Currently it is a bit tricky to use relational queries in the client-side SDKs, for example:
* JS: Need to wrap the SkyRecord with skygear.Reference skygear-SDK-JS#318
* Android: Need to query with the Record ID instead of the Record Object skygear-SDK-Android#162
We should improve this.
# API Design
**Remove this section if the feature has no API**
# Open Questions
**Put a list of open questions here before a complete design / specification is decided**
# Related Issues
- Server Issues
- Client Issues
- Guides Issues
|
1.0
|
Improve syntax of relational query in client SDK - # Description
Currently it is a bit tricky to use relational queries in the client-side SDKs, for example:
* JS: Need to wrap the SkyRecord with skygear.Reference skygear-SDK-JS#318
* Android: Need to query with the Record ID instead of the Record Object skygear-SDK-Android#162
We should improve this.
# API Design
**Remove this section if the feature has no API**
# Open Questions
**Put a list of open questions here before a complete design / specification is decided**
# Related Issues
- Server Issues
- Client Issues
- Guides Issues
|
non_process
|
improve syntax of relational query in client sdk description currently it is a bit tricky to use relational query in client side sdk for example js need to wrap the skyrecord with skygear reference skygear sdk js android need to query with record id instead of record object skygear sdk android we should improve it api design remove this section if the feature have no api open questions put a list of open questions here before a complete design specification is decided related issues server issues client issues guides issues
| 0
|
21,081
| 3,672,330,882
|
IssuesEvent
|
2016-02-22 12:06:38
|
cgeo/cgeo
|
https://api.github.com/repos/cgeo/cgeo
|
closed
|
Visual feedback for logging activity
|
Field test Frontend Design
|
Some people do not know how to drop a TB because they do not see the list under the log text input field. Some sort of visual hint (aka loading circle) might help in this situation.
|
1.0
|
Visual feedback for logging activity - Some people do not know how to drop a TB because they do not see the list under the log text input field. Some sort of visual hint (aka loading circle) might help in this situation.
|
non_process
|
visual feedback for logging activity some people do not know how to drop a tb because they do not see the list under the log text input field some sort of visual hint aka loading circle might help in this situation
| 0
|
64,401
| 8,729,086,533
|
IssuesEvent
|
2018-12-10 19:14:17
|
PegaSysEng/pantheon
|
https://api.github.com/repos/PegaSysEng/pantheon
|
closed
|
Test new zip/repo install on Windows for onboarding webinar
|
documentation
|
## Requirements
- Test installing prerequisites on Windows
- Test all steps of onboarding Webinar
|
1.0
|
Test new zip/repo install on Windows for onboarding webinar - ## Requirements
- Test installing prerequisites on Windows
- Test all steps of onboarding Webinar
|
non_process
|
test new zip repo install on windows for onboarding webinar requirements test installing prequisites on windows test all steps of onboarding webinar
| 0
|
15,629
| 19,782,790,721
|
IssuesEvent
|
2022-01-18 00:15:30
|
emily-writes-poems/emily-writes-poems-processing
|
https://api.github.com/repos/emily-writes-poems/emily-writes-poems-processing
|
closed
|
display current feature
|
processing
|
show on app load? -> completed with #21
- [ ] refresh display if new feature is created
|
1.0
|
display current feature - show on app load? -> completed with #21
- [ ] refresh display if new feature is created
|
process
|
display current feature show on app load completed with refresh display if new feature is created
| 1
|
20,220
| 26,811,329,651
|
IssuesEvent
|
2023-02-01 22:41:09
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Flaky test: Timed out retrying after 250ms: Target cannot be null or undefined.
|
process: flaky test topic: cross-origin ⤭ topic: flake ❄️ routed-to-e2e
|
### Link to dashboard or CircleCI failure
https://dashboard.cypress.io/projects/ypt4pf/analytics/flaky-tests/56ff5c54-3c68-affc-d950-07a1afacd807-feea373b-9a0f-0ccd-2b49-8863d3c54097?branches=%5B%7B%22label%22%3A%22skip-or-fix-flaky-tests-2%22%2C%22suggested%22%3Afalse%2C%22value%22%3A%22skip-or-fix-flaky-tests-2%22%7D%5D&browsers=%5B%5D&chartRangeMostCommonErrors=%5B%5D&chartRangeSlowestTests=%5B%5D&chartRangeTopFailures=%5B%5D&committers=%5B%5D&cypressVersions=%5B%5D&flaky=%5B%5D&operatingSystems=%5B%5D&runGroups=%5B%5D&specFiles=%5B%5D&status=%5B%7B%22label%22%3A%22Passed%22%2C%22value%22%3A%22PASSED%22%7D%2C%7B%22label%22%3A%22Failed%22%2C%22value%22%3A%22FAILED%22%7D%5D&tags=%5B%5D&timeInterval=WEEK&timeRange=%7B%22startDate%22%3A%222022-08-18%22%2C%22endDate%22%3A%222022-08-18%22%7D&viewBy=TEST_CASE
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/e2e/origin/snapshots.cy.ts#L51
### Analysis
<img width="439" alt="Screen Shot 2022-08-18 at 10 26 43 AM" src="https://user-images.githubusercontent.com/26726429/185457491-1c84bfbd-42be-4f1d-a075-0e4f07d232f6.png">
Very rare flake
<img width="427" alt="Screen Shot 2022-08-18 at 10 26 02 AM" src="https://user-images.githubusercontent.com/26726429/185457303-ad6d8034-ef42-402d-80cd-cb4e7ea5dbce.png">
### Cypress Version
10.6.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
|
1.0
|
Flaky test: Timed out retrying after 250ms: Target cannot be null or undefined. - ### Link to dashboard or CircleCI failure
https://dashboard.cypress.io/projects/ypt4pf/analytics/flaky-tests/56ff5c54-3c68-affc-d950-07a1afacd807-feea373b-9a0f-0ccd-2b49-8863d3c54097?branches=%5B%7B%22label%22%3A%22skip-or-fix-flaky-tests-2%22%2C%22suggested%22%3Afalse%2C%22value%22%3A%22skip-or-fix-flaky-tests-2%22%7D%5D&browsers=%5B%5D&chartRangeMostCommonErrors=%5B%5D&chartRangeSlowestTests=%5B%5D&chartRangeTopFailures=%5B%5D&committers=%5B%5D&cypressVersions=%5B%5D&flaky=%5B%5D&operatingSystems=%5B%5D&runGroups=%5B%5D&specFiles=%5B%5D&status=%5B%7B%22label%22%3A%22Passed%22%2C%22value%22%3A%22PASSED%22%7D%2C%7B%22label%22%3A%22Failed%22%2C%22value%22%3A%22FAILED%22%7D%5D&tags=%5B%5D&timeInterval=WEEK&timeRange=%7B%22startDate%22%3A%222022-08-18%22%2C%22endDate%22%3A%222022-08-18%22%7D&viewBy=TEST_CASE
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/e2e/origin/snapshots.cy.ts#L51
### Analysis
<img width="439" alt="Screen Shot 2022-08-18 at 10 26 43 AM" src="https://user-images.githubusercontent.com/26726429/185457491-1c84bfbd-42be-4f1d-a075-0e4f07d232f6.png">
Very rare flake
<img width="427" alt="Screen Shot 2022-08-18 at 10 26 02 AM" src="https://user-images.githubusercontent.com/26726429/185457303-ad6d8034-ef42-402d-80cd-cb4e7ea5dbce.png">
### Cypress Version
10.6.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
|
process
|
flaky test timed out retrying after target cannot be null or undefined link to dashboard or circleci failure link to failing test in github analysis img width alt screen shot at am src very rare flake img width alt screen shot at am src cypress version other search for this issue number in the codebase to find the test s skipped until this issue is fixed
| 1
|
1,284
| 3,822,440,474
|
IssuesEvent
|
2016-03-30 01:09:01
|
mapbox/mapbox-gl-js
|
https://api.github.com/repos/mapbox/mapbox-gl-js
|
closed
|
Replace drone.mp4/webm in the video test with something smaller
|
testing & release process
|
[This test](https://github.com/mapbox/mapbox-gl-js/blob/cb305e11d474fb1d30644eacd024def46156da1b/test/js/ui/map.test.js#L705) loads drone video from https://www.mapbox.com/drone/video/drone.mp4 and https://www.mapbox.com/drone/video/drone.webm, which takes around 22 seconds (slow!) and often times out (at least locally). We should find/make a simpler video for the test.
cc @bhousel @lucaswoj
|
1.0
|
Replace drone.mp4/webm in the video test with something smaller - [This test](https://github.com/mapbox/mapbox-gl-js/blob/cb305e11d474fb1d30644eacd024def46156da1b/test/js/ui/map.test.js#L705) loads drone video from https://www.mapbox.com/drone/video/drone.mp4 and https://www.mapbox.com/drone/video/drone.webm, which takes around 22 seconds (slow!) and often times out (at least locally). We should find/make a simpler video for the test.
cc @bhousel @lucaswoj
|
process
|
replace drone webm in the video test with something smaller loads drone video from and which takes around seconds slow and often times out at least locally we should find make a simpler video for the test cc bhousel lucaswoj
| 1
|
18,274
| 24,352,519,514
|
IssuesEvent
|
2022-10-03 02:31:33
|
python/cpython
|
https://api.github.com/repos/python/cpython
|
closed
|
Multiprocessing resource tracker incorrectly checks pipe write length
|
type-bug stdlib expert-multiprocessing
|
https://github.com/python/cpython/blob/4781535a5796838fc4ce88e6e669e8907e426685/Lib/multiprocessing/resource_tracker.py#L164
I think this line should be:
```python
if len(msg) > 512:
```
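For context, the 512-byte threshold matters because POSIX guarantees that writes to a pipe are atomic only up to PIPE_BUF (at least 512 bytes), so the guard should measure the full encoded message rather than the bare resource name. A minimal sketch of that check (a hypothetical helper, not the actual resource tracker code):

```python
# POSIX guarantees atomic pipe writes up to at least this many bytes.
PIPE_BUF_MIN = 512

def build_msg(cmd: str, name: str, rtype: str) -> bytes:
    # Build the message exactly as it would be written to the pipe.
    msg = f"{cmd}:{name}:{rtype}\n".encode("ascii")
    # Check the encoded message length, not len(name): the verb,
    # separators, and newline all count toward the atomic-write limit.
    if len(msg) > PIPE_BUF_MIN:
        raise ValueError("resource name too long for an atomic pipe write")
    return msg
```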
|
1.0
|
Multiprocessing resource tracker incorrectly checks pipe write length - https://github.com/python/cpython/blob/4781535a5796838fc4ce88e6e669e8907e426685/Lib/multiprocessing/resource_tracker.py#L164
I think this line should be:
```python
if len(msg) > 512:
```
|
process
|
multiprocessing resource tracker incorrectly checks pipe write length i think this line should be python if len msg
| 1
|
7,520
| 10,596,934,177
|
IssuesEvent
|
2019-10-09 22:40:20
|
Shothogun/Hypothetical-assembly-Assembler
|
https://api.github.com/repos/Shothogun/Hypothetical-assembly-Assembler
|
closed
|
Data Structures
|
Assembler Preprocess Module
|
## Member
Danilo
## Implementation
Implementation of the instruction and symbol tables and of an error-log structure
## Description
The instruction and symbol tables will be implemented as classes that internally hold maps from the instructions/symbols (string) to vectors of integers with the remaining table data. The error log will be implemented as a base class error and an error_log entity that holds the list of errors found.
## Details
- The instruction table contains the opcode and the number of operands of each instruction
- The symbol table contains the value, a definition flag (bool), and the address of a list of undefined symbols.
- The error class contains an error code, the error type, and the number and content of the line where the error occurs
- The error_log class contains a list of objects of type error
## Pending
- Implementation of the error class
- Implementation of the error_log class
|
1.0
|
Data Structures - ## Member
Danilo
## Implementation
Implementation of the instruction and symbol tables and of an error-log structure
## Description
The instruction and symbol tables will be implemented as classes that internally hold maps from the instructions/symbols (string) to vectors of integers with the remaining table data. The error log will be implemented as a base class error and an error_log entity that holds the list of errors found.
## Details
- The instruction table contains the opcode and the number of operands of each instruction
- The symbol table contains the value, a definition flag (bool), and the address of a list of undefined symbols.
- The error class contains an error code, the error type, and the number and content of the line where the error occurs
- The error_log class contains a list of objects of type error
## Pending
- Implementation of the error class
- Implementation of the error_log class
|
process
|
data structures member danilo implementation implementation of the instruction and symbol tables and of an error log structure description the instruction and symbol tables will be implemented as classes that internally hold maps from the instructions symbols string to vectors of integers with the remaining table data the error log will be implemented as a base class error and an error log entity that holds the list of errors found details the instruction table contains the opcode and the number of operands of each instruction the symbol table contains the value a definition flag bool and the address of a list of undefined symbols the error class contains an error code the error type and the number and content of the line where the error occurs the error log class contains a list of objects of type error pending implementation of the error class implementation of the error log class
| 1
|
508,818
| 14,706,296,615
|
IssuesEvent
|
2021-01-04 19:36:27
|
psu-stewardship/scholarsphere
|
https://api.github.com/repos/psu-stewardship/scholarsphere
|
closed
|
Delete associated legacy identifiers when deleting works
|
bug low priority
|
If a work has any legacy identifiers associated with it, the legacy id for the work is deleted when the work is deleted, but not any legacy ids for associated FileResource objects. The FileResource model has `dependent: :destroy` so it ought to work, but doesn't.
This only affects re-migrating previous works if they've been deleted. For now, it can be done manually if we delete the identifiers ourselves.
|
1.0
|
Delete associated legacy identifiers when deleting works - If a work has any legacy identifiers associated with it, the legacy id for the work is deleted when the work is deleted, but not any legacy ids for associated FileResource objects. The FileResource model has `dependent: :destroy` so it ought to work, but doesn't.
This only affects re-migrating previous works if they've been deleted. For now, it can be done manually if we delete the identifiers ourselves.
|
non_process
|
delete associated legacy identifiers when deleting works if a work has any legacy identifiers associated with it the legacy id for the work is deleted when the work is deleted but not any legacy ids for associated fileresource objects the fileresource model has dependent destroy so it ought to work but doesn t this only affect re migrating previous works if they ve been deleted for now it can be done manually if we delete the identifiers ourselves
| 0
|
694,239
| 23,807,250,337
|
IssuesEvent
|
2022-09-04 08:15:15
|
EduMIPS64/edumips64
|
https://api.github.com/repos/EduMIPS64/edumips64
|
opened
|
Clean up RegisterFP
|
type:internal-cleanup component:core priority:1
|
There is code duplication in `RegisterFP`, as it doesn't inherit from `Register`.
We should move the BitSet64FP features to BitSet and make RegisterFP extend Register. This will let us avoid duplicating code for the upcoming delayed unlock feature necessary for #702.
|
1.0
|
Clean up RegisterFP - There is code duplication in `RegisterFP`, as it doesn't inherit from `Register`.
We should move the BitSet64FP features to BitSet and make RegisterFP extend Register. This will let us avoid duplicating code for the upcoming delayed unlock feature necessary for #702.
|
non_process
|
clean up registerfp there is code duplication in registerfp as it doesn t inherit from register we should move features to bitset and make registerfp extend register this will allow us not to duplicate code for the upcoming delayed unlock feature necessary for
| 0
|
22,207
| 30,761,285,968
|
IssuesEvent
|
2023-07-29 18:35:27
|
AlphaTiles/AlphaTiles
|
https://api.github.com/repos/AlphaTiles/AlphaTiles
|
opened
|
Set font programmatically, rather than manually, for each different font
|
process
|
**Is your developer improvement related to a problem? Please describe.**
If a product flavor uses a font other than charissil, a developer must manually change the fontFamily item in styles.xml to match what is in the product flavor's font folder.
**Describe the solution you'd like**
It would be great if we could run something on the files prior to building that would modify styles.xml for us if building a non-Charis app. Changes to styles.xml, of course, should not be committed since we want the default to be Charis.
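A pre-build step like the one described could be sketched as follows; the `fontFamily` item name and the `@font/...` value format are assumptions about the project's styles.xml layout, not verified against the actual repository:

```python
import re

def set_font_family(styles_xml: str, font_name: str) -> str:
    """Replace the value of the fontFamily <item> in a styles.xml string.

    A rough sketch: real Android styles files may declare fontFamily in
    several places, so a production script would parse the XML properly
    rather than rely on a regex.
    """
    return re.sub(
        r'(<item name="(?:android:)?fontFamily">)[^<]*(</item>)',
        rf"\g<1>@font/{font_name}\g<2>",
        styles_xml,
    )
```

Run over a copy of styles.xml before building, the change never needs to be committed, so the default stays Charis.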
|
1.0
|
Set font programmatically, rather than manually, for each different font - **Is your developer improvement related to a problem? Please describe.**
If a product flavor uses a font other than charissil, a developer must manually change the fontFamily item in styles.xml to match what is in the product flavor's font folder.
**Describe the solution you'd like**
It would be great if we could run something on the files prior to building that would modify styles.xml for us if building a non-Charis app. Changes to styles.xml, of course, should not be committed since we want the default to be Charis.
|
process
|
set font programmatically rather than manually for each different font is your developer improvement related to a problem please describe if a product flavor uses a different font than charissil a developer must manually change the fontfamily item in styles xml to match what is in the product flavor s font folder describe the solution you d like it would be great if we could run something on the files prior to building that would modify styles xml for us if building a non charis app changes to styles xml of course should not be committed since we want the default to be charis
| 1
|
508,101
| 14,689,949,261
|
IssuesEvent
|
2021-01-02 12:46:24
|
eknoes/covid-bot
|
https://api.github.com/repos/eknoes/covid-bot
|
closed
|
Sorting of the locations in the daily report
|
enhancement high priority
|
Currently the locations seem to be sorted by subscription time (newest first). Alphabetical sorting would make more sense, I'd say. What do you think?
|
1.0
|
Sorting of the locations in the daily report - Currently the locations seem to be sorted by subscription time (newest first). Alphabetical sorting would make more sense, I'd say. What do you think?
|
non_process
|
sorting of the locations in the daily report currently the locations seem to be sorted by subscription time newest first alphabetical sorting would make more sense i d say what do you think
| 0
|
758
| 3,237,399,821
|
IssuesEvent
|
2015-10-14 11:41:05
|
superroma/testcafe-hammerhead
|
https://api.github.com/repos/superroma/testcafe-hammerhead
|
opened
|
DomProcessor.processPage refactoring
|
!IMPORTANT! AREA: server COMPLEXITY: easy SYSTEM: resource processing TYPE: enhancement
|
Try to avoid multiple element processing
Code fragment:
```javascript
var $all = $('*');
for (var i = 0; i < this.elementProcessorPatterns.length; i++) {
var pattern = this.elementProcessorPatterns[i];
/*eslint-disable no-loop-func*/
$all.filter(function () {
return pattern.selector(this);
}).each(function () {
if (!this[CONST.ELEMENT_PROCESSED_FLAG]) {
for (var j = 0; j < pattern.elementProcessors.length; j++)
pattern.elementProcessors[j].call(domProc, this, replacer, pattern);
}
});
/*eslint-enable no-loop-func*/
}
```
Need to provide performance check results
|
1.0
|
DomProcessor.processPage refactoring - Try to avoid multiple element processing
Code fragment:
```javascript
var $all = $('*');
for (var i = 0; i < this.elementProcessorPatterns.length; i++) {
var pattern = this.elementProcessorPatterns[i];
/*eslint-disable no-loop-func*/
$all.filter(function () {
return pattern.selector(this);
}).each(function () {
if (!this[CONST.ELEMENT_PROCESSED_FLAG]) {
for (var j = 0; j < pattern.elementProcessors.length; j++)
pattern.elementProcessors[j].call(domProc, this, replacer, pattern);
}
});
/*eslint-enable no-loop-func*/
}
```
Need to provide performance check results
|
process
|
domprocessor processpage refactoring try to avoid multiple element processing code fragment javascript var all for var i i this elementprocessorpatterns length i var pattern this elementprocessorpatterns eslint disable no loop func all filter function return pattern selector this each function if this for var j j pattern elementprocessors length j pattern elementprocessors call domproc this replacer pattern eslint enable no loop func need to provide performance check results
| 1
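The refactoring proposed in the DomProcessor record above can be illustrated with a single-pass variant (an illustrative sketch only, not testcafe-hammerhead's actual code; the pattern shape and the processed flag name are assumptions inferred from the fragment):

```javascript
// Sketch: visit each element once and try every pattern against it,
// instead of filtering the full element list once per pattern.
const PROCESSED_FLAG = 'elementProcessed'; // stand-in for CONST.ELEMENT_PROCESSED_FLAG

function processAll(elements, patterns, domProc, replacer) {
    for (const el of elements) {
        if (el[PROCESSED_FLAG])
            continue;

        for (const pattern of patterns) {
            if (!pattern.selector(el))
                continue;

            for (const processor of pattern.elementProcessors)
                processor.call(domProc, el, replacer, pattern);
        }

        el[PROCESSED_FLAG] = true;
    }
}
```

This trades the per-pattern jQuery `filter` passes for one loop over the elements, which is exactly the kind of change the performance check requested in the issue would need to validate.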
|
801,683
| 28,498,380,956
|
IssuesEvent
|
2023-04-18 15:34:01
|
pepkit/pephub
|
https://api.github.com/repos/pepkit/pephub
|
opened
|
Assign schemas to PEPs
|
enhancement priority low
|
This is related to #94. Wouldn't it be cool if when creating a PEP (or uploading a PEP), one could assign it to a specific schema? Something like this:

This would require updates to a few packages, but I think could be super powerful. One could then filter/search for PEPs on those compatible with the given schema. It would also really enhance the metadata builder UI.
|
1.0
|
Assign schemas to PEPs - This is related to #94. Wouldn't it be cool if when creating a PEP (or uploading a PEP), one could assign it to a specific schema? Something like this:

This would require updates to a few packages, but I think could be super powerful. One could then filter/search for PEPs on those compatible with the given schema. It would also really enhance the metadata builder UI.
|
non_process
|
assign schemas to peps this is related to wouldn t it be cool if when creating a pep or uploading a pep one could assign it to a specific schema something like this this would require updates to a few packages but i think could be super powerful one could then filter search for peps on those compatible with the given schema it would also really enhance the metadata builder ui
| 0
|
64,290
| 6,898,872,598
|
IssuesEvent
|
2017-11-24 11:15:55
|
edenlabllc/ehealth.api
|
https://api.github.com/repos/edenlabllc/ehealth.api
|
closed
|
WS: Create Token (Modification)
|
BE epic/Auth kind/task priority/high project/2factor_auth status/test
|
WS: Create Token (Modification)
- [x] Create WS [spec](https://edenlab.atlassian.net/wiki/x/aAAZCQ)
- [x] Implement
- [x] Deploy
- [x] Improve test scenarios
|
1.0
|
WS: Create Token (Modification) - WS: Create Token (Modification)
- [x] Create WS [spec](https://edenlab.atlassian.net/wiki/x/aAAZCQ)
- [x] Implement
- [x] Deploy
- [x] Improve test scenarios
|
non_process
|
ws create token modification ws create token modification create ws implement deploy improve test scenarios
| 0
|
12,310
| 14,859,802,603
|
IssuesEvent
|
2021-01-18 19:12:44
|
neuropoly/ukbiobank-spinalcord-csa
|
https://api.github.com/repos/neuropoly/ukbiobank-spinalcord-csa
|
closed
|
Change file _seg_labeled_discs.nii.gz in FILES_TO_CHECK list
|
process_data
|
In process_data.sh, the existence of the following files is verified:
https://github.com/sandrinebedard/Projet3/blob/634efcba267163ce53b1b820e64cf4ef90e69d78/process_data.sh#L155-L162
After applying manual corrections for disc labeling, the file `${SUBJECT}_T1w_RPI_r_gradcorr_seg_labeled_discs.nii.gz` will not exist; it is only used if the label does not already exist: https://github.com/sandrinebedard/Projet3/blob/634efcba267163ce53b1b820e64cf4ef90e69d78/process_data.sh#L47
Should be `${SUBJECT}_T1w_RPI_r_gradcorr_labels.nii.gz` in `FILES_TO_CHECK` instead.
The same goes for `PAM50_levels2${SUBJECT}_T2w_RPI_r_gradcorr.nii.gz`, which will not exist if the manual disc label for T2w is used instead.
|
1.0
|
Change file _seg_labeled_discs.nii.gz in FILES_TO_CHECK list - In process_data.sh, the existence of the following files is verified:
https://github.com/sandrinebedard/Projet3/blob/634efcba267163ce53b1b820e64cf4ef90e69d78/process_data.sh#L155-L162
After applying manual corrections for disc labeling, the file `${SUBJECT}_T1w_RPI_r_gradcorr_seg_labeled_discs.nii.gz` will not exist; it is only used if the label does not already exist: https://github.com/sandrinebedard/Projet3/blob/634efcba267163ce53b1b820e64cf4ef90e69d78/process_data.sh#L47
Should be `${SUBJECT}_T1w_RPI_r_gradcorr_labels.nii.gz` in `FILES_TO_CHECK` instead.
The same goes for `PAM50_levels2${SUBJECT}_T2w_RPI_r_gradcorr.nii.gz`, which will not exist if the manual disc label for T2w is used instead.
|
process
|
change file seg labeled discs nii gz in files to check list in process data sh existence of next files is verified after applying manual corrections for disc labeling the file subject rpi r gradcorr seg labeled discs nii gz will not exist it is only used if label does not already exists should be subject rpi r gradcorr labels nii gz in files to check instead same thing for subject rpi r gradcorr nii gz will not exist if manual disc label for is used instead
| 1
|
16,391
| 21,158,594,531
|
IssuesEvent
|
2022-04-07 07:15:09
|
goblint/analyzer
|
https://api.github.com/repos/goblint/analyzer
|
closed
|
Compilation Database: Support for entries with multiple `.c` files in one command
|
benchmarking preprocessing
|
@stilscher discovered this issue with older versions of `zstd` (e.g. `5c5c47633826426a3fcc717000a352d52bd0ac22`).
The compilation database in this case contains entries where the command specifies several `.c` files at the same time.
~~~JSON
{
"directory": "/home/michael/Documents/bench-repos/zstd/programs",
"arguments": [
"cc",
"-I../lib",
"-I../lib/common",
"-I../lib/compress",
"-I../lib/dictBuilder",
"-DXXH_NAMESPACE=ZSTD_",
"-I../lib/legacy",
"-DZSTD_MULTITHREAD",
// more defines
"-DZSTD_NO_INTRINSICS",
"-O3",
"-Wall",
"-Wextra",
// more warnings
"-pthread",
"-lz",
"-llzma",
"../lib/common/debug.c",
"../lib/common/entropy_common.c",
// more files
"../lib/dictBuilder/zdict.c",
"zstdcli.o",
"util.o",
"fileio.o",
"bench.o",
"datagen.o",
"dibio.o",
"-o",
"zstd",
"-pthread",
"-lz",
"-llzma"
],
"file": "../lib/dictBuilder/zdict.c"
}
~~~
Goblint in this case adds an `-E` flag and `-o` to get preprocessor output. The preprocessor does not like this, and fails with
```
cc: fatal error: cannot specify -o with -c, -S or -E with multiple files
compilation terminated.
```
|
1.0
|
Compilation Database: Support for entries with multiple `.c` files in one command - @stilscher discovered this issue with older versions of `zstd` (e.g. `5c5c47633826426a3fcc717000a352d52bd0ac22`).
The compilation database in this case contains entries where the command specifies several `.c` files at the same time.
~~~JSON
{
"directory": "/home/michael/Documents/bench-repos/zstd/programs",
"arguments": [
"cc",
"-I../lib",
"-I../lib/common",
"-I../lib/compress",
"-I../lib/dictBuilder",
"-DXXH_NAMESPACE=ZSTD_",
"-I../lib/legacy",
"-DZSTD_MULTITHREAD",
// more defines
"-DZSTD_NO_INTRINSICS",
"-O3",
"-Wall",
"-Wextra",
// more warnings
"-pthread",
"-lz",
"-llzma",
"../lib/common/debug.c",
"../lib/common/entropy_common.c",
// more files
"../lib/dictBuilder/zdict.c",
"zstdcli.o",
"util.o",
"fileio.o",
"bench.o",
"datagen.o",
"dibio.o",
"-o",
"zstd",
"-pthread",
"-lz",
"-llzma"
],
"file": "../lib/dictBuilder/zdict.c"
}
~~~
Goblint in this case adds an `-E` flag and `-o` to get preprocessor output. The preprocessor does not like this, and fails with
```
cc: fatal error: cannot specify -o with -c, -S or -E with multiple files
compilation terminated.
```
|
process
|
compilation database support for entries with multiple c files in one command stilscher discovered this issue with older versions of zstd e g the compilation database in this case contains entries where the command specifies several c files at the same time json directory home michael documents bench repos zstd programs arguments cc i lib i lib common i lib compress i lib dictbuilder dxxh namespace zstd i lib legacy dzstd multithread more defines dzstd no intrinsics wall wextra more warnings pthread lz llzma lib common debug c lib common entropy common c more files lib dictbuilder zdict c zstdcli o util o fileio o bench o datagen o dibio o o zstd pthread lz llzma file lib dictbuilder zdict c goblint in this case adds an e flag and o to get preprocessor output the preprocessor does not like this and fails with cc fatal error cannot specify o with c s or e with multiple files compilation terminated
| 1
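One conceivable preprocessing workaround for entries like the zstd one above (a sketch only, not Goblint's actual handling; the argument classification is a simple heuristic) is to split a multi-source entry into one entry per `.c` file, dropping link-only arguments so that `-E` and a per-file `-o` can be added safely:

```javascript
// Sketch: turn one compilation-database entry that lists several .c files
// into separate entries, one per source file, dropping linker-only
// arguments (-o <target>, object files) that confuse a preprocessor run.
function splitEntry(entry) {
    const sources = entry.arguments.filter(a => a.endsWith('.c'));

    // Keep everything that is neither a source file, an object file,
    // nor the "-o <target>" pair. (Heuristic: real flag handling would
    // need to know which options take operands.)
    const common = [];
    const args = entry.arguments;
    for (let i = 0; i < args.length; i++) {
        const a = args[i];
        if (a === '-o') { i++; continue; } // skip -o and its operand
        if (a.endsWith('.c') || a.endsWith('.o')) continue;
        common.push(a);
    }

    return sources.map(src => ({
        directory: entry.directory,
        arguments: [...common, src],
        file: src,
    }));
}
```

Each resulting single-file entry can then get its own `-E`/`-o` pair without triggering `cannot specify -o with -c, -S or -E with multiple files`.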
|
6,433
| 9,533,490,008
|
IssuesEvent
|
2019-04-29 21:26:58
|
google/gvisor
|
https://api.github.com/repos/google/gvisor
|
closed
|
Update copyright notices to "The gVisor Authors"
|
area: docs priority: p2 process
|
Per https://opensource.google.com/docs/releasing/authors/, we may consider changing copyright notices from "Google LLC" to "The gVisor Authors" once we have significant non-Google contributors. We've been receiving lots of such contributions recently, so I think it is time to make this change to acknowledge all of these authors.
|
1.0
|
Update copyright notices to "The gVisor Authors" - Per https://opensource.google.com/docs/releasing/authors/, we may consider changing copyright notices from "Google LLC" to "The gVisor Authors" once we have significant non-Google contributors. We've been receiving lots of such contributions recently, so I think it is time to make this change to acknowledge all of these authors.
|
process
|
update copyright notices to the gvisor authors per we may consider changing copyright notices from google llc to the gvisor authors once we have significant non google contributors we ve been receiving lots of such contributions recently so i think it is time to make this change to acknowledge all of these authors
| 1
|
18,173
| 24,216,954,036
|
IssuesEvent
|
2022-09-26 07:40:11
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
opened
|
[FALSE-POSITIVE?] ccsu.edu
|
whitelisting process
|
**Domains or links**
```
web.ccsu.edu
www.ccsu.edu
www1.ccsu.edu
www2.ccsu.edu
chortle.ccsu.edu
```
**More Information**
domain blocked by UHB dns.
**Have you requested removal from other sources?**
No, as does not belong to any blacklist
|
1.0
|
[FALSE-POSITIVE?] ccsu.edu - **Domains or links**
```
web.ccsu.edu
www.ccsu.edu
www1.ccsu.edu
www2.ccsu.edu
chortle.ccsu.edu
```
**More Information**
domain blocked by UHB dns.
**Have you requested removal from other sources?**
No, as does not belong to any blacklist
|
process
|
ccsu edu domains or links web ccsu edu ccsu edu ccsu edu chortle ccsu edu more information domain blocked by uhb dns have you requested removal from other sources no as does not belong to any blacklist
| 1
|
339,181
| 30,349,939,989
|
IssuesEvent
|
2023-07-11 18:08:53
|
harvester/harvester
|
https://api.github.com/repos/harvester/harvester
|
closed
|
[BUG] Embedded Rancher Crashes v1.2-head, "All Namespaces" -> Apps -> Charts: `t.version is undefined`
|
kind/bug area/ui severity/1 regression reproduce/always not-require/test-plan
|
**Describe the bug**
Web UI Crashes in Embedded Rancher's Apps -> Charts (w/ "All Namespaces" selected)
https://github.com/harvester/harvester/assets/5370752/7a944607-a7d3-4269-9635-6b89a650b7af
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Support -> Access Embedded Rancher -> Select "All Namespaces" -> Apps -> Charts
1. Web UI will crash `t.version is undefined`
**Expected behavior**
Apps->Charts, not to crash the entire Web UI
**Environment**
- Harvester ISO Version: v1.2-56e9844d-head
- Underlying Infrastructure: R720
**Additional context**
Error Displayed:
```
t.version is undefined
```
|
1.0
|
[BUG] Embedded Rancher Crashes v1.2-head, "All Namespaces" -> Apps -> Charts: `t.version is undefined` - **Describe the bug**
Web UI Crashes in Embedded Rancher's Apps -> Charts (w/ "All Namespaces" selected)
https://github.com/harvester/harvester/assets/5370752/7a944607-a7d3-4269-9635-6b89a650b7af
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Support -> Access Embedded Rancher -> Select "All Namespaces" -> Apps -> Charts
1. Web UI will crash `t.version is undefined`
**Expected behavior**
Apps->Charts, not to crash the entire Web UI
**Environment**
- Harvester ISO Version: v1.2-56e9844d-head
- Underlying Infrastructure: R720
**Additional context**
Error Displayed:
```
t.version is undefined
```
|
non_process
|
embedded rancher crashes head all namespaces apps charts t version is undefined describe the bug web ui crashes in embedded rancher s apps charts w all namespaces selected to reproduce steps to reproduce the behavior go to support access embedded rancher select all namespaces apps charts web ui will crash t version is undefined expected behavior apps charts not to crash the entire web ui environment harvester iso version head underlying infrastructure additional context error displayed t version is undefined
| 0
|
7,749
| 10,864,322,431
|
IssuesEvent
|
2019-11-14 16:39:27
|
threefoldtech/jumpscaleX_core
|
https://api.github.com/repos/threefoldtech/jumpscaleX_core
|
closed
|
Adding multiple packages results in conflicts
|
priority_major process_wontfix type_bug
|
When trying to install tfgrid_directory and farmmanagement on the same system, only the first one installed works. Is it possible that the nginx routing is broken when installing multiple packages?
## Install tfgrid first
cl = j.servers.threebot.local_start_default(web=True)
cl.actors.package_manager.package_add(path='/sandbox/code/github/threefoldtech/jumpscaleX_threebot/ThreebotPackages/tfgrid/tfgrid_directory/')
cl.actors.package_manager.package_add(path='/sandbox/code/github/threebotapps/farmmanagement')
POST https://172.17.0.2/web/gedis/http/farms/owned_by -> Works
GET https://172.17.0.2/farmmanagement/store.js -> loads the index.html from webinterface
## Install farmmanagement first
cl = j.servers.threebot.local_start_default(web=True)
cl.actors.package_manager.package_add(path='/sandbox/code/github/threebotapps/farmmanagement')
cl.actors.package_manager.package_add(path='/sandbox/code/github/threefoldtech/jumpscaleX_threebot/ThreebotPackages/tfgrid/tfgrid_directory/')
POST https://172.17.0.2/web/gedis/http/farms/owned_by -> Actor farms does not exist
GET https://172.17.0.2/farmmanagement/store.js -> correctly loads the farmmanagement store.js
Farmmanagement: https://github.com/threebotapps/farmmanagement
TFGrid: https://github.com/threefoldtech/jumpscaleX_threebot/tree/development/ThreeBotPackages/tfgrid
|
1.0
|
Adding multiple packages results in conflicts - When trying to install tfgrid_directory and farmmanagement on the same system, only the first one installed works. Is it possible that the nginx routing is broken when installing multiple packages?
## Install tfgrid first
cl = j.servers.threebot.local_start_default(web=True)
cl.actors.package_manager.package_add(path='/sandbox/code/github/threefoldtech/jumpscaleX_threebot/ThreebotPackages/tfgrid/tfgrid_directory/')
cl.actors.package_manager.package_add(path='/sandbox/code/github/threebotapps/farmmanagement')
POST https://172.17.0.2/web/gedis/http/farms/owned_by -> Works
GET https://172.17.0.2/farmmanagement/store.js -> loads the index.html from webinterface
## Install farmmanagement first
cl = j.servers.threebot.local_start_default(web=True)
cl.actors.package_manager.package_add(path='/sandbox/code/github/threebotapps/farmmanagement')
cl.actors.package_manager.package_add(path='/sandbox/code/github/threefoldtech/jumpscaleX_threebot/ThreebotPackages/tfgrid/tfgrid_directory/')
POST https://172.17.0.2/web/gedis/http/farms/owned_by -> Actor farms does not exist
GET https://172.17.0.2/farmmanagement/store.js -> correctly loads the farmmanagement store.js
Farmmanagement: https://github.com/threebotapps/farmmanagement
TFGrid: https://github.com/threefoldtech/jumpscaleX_threebot/tree/development/ThreeBotPackages/tfgrid
|
process
|
adding multiple packages results in conflicts when trying to install tfgrid directory and farmmanagement on the same system only the first one installed works is it possible that the nginx routing is broken when installing multiple packages install tfgrid first cl j servers threebot local start default web true cl actors package manager package add path sandbox code github threefoldtech jumpscalex threebot threebotpackages tfgrid tfgrid directory cl actors package manager package add path sandbox code github threebotapps farmmanagement post works get loads the index html from webinterface install farmmanagement first cl j servers threebot local start default web true cl actors package manager package add path sandbox code github threebotapps farmmanagement cl actors package manager package add path sandbox code github threefoldtech jumpscalex threebot threebotpackages tfgrid tfgrid directory post actor farms does not exist get correctly loads the farmmanagement store js farmmanagement tfgrid
| 1
|
7,037
| 10,197,012,743
|
IssuesEvent
|
2019-08-12 22:31:09
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
closed
|
Code Inspection some comment text resolves as variable
|
antlr bug edge-case feature-annotations parse-tree-processing
|
Open a new MS Word document, put this code into ThisDocument:
```
' PURPOSE:
' RD False Positive
Private Sub SomeTest()
'@[RD] NotA Variable
Debug.Print vbNullString
End Sub
```
See image below:

Word 2010, 64 Bit App
Win 10, 64 Bit
RD ver: v2.1.2047
|
1.0
|
Code Inspection some comment text resolves as variable - Open a new MS Word document, put this code into ThisDocument:
```
' PURPOSE:
' RD False Positive
Private Sub SomeTest()
'@[RD] NotA Variable
Debug.Print vbNullString
End Sub
```
See image below:

Word 2010, 64 Bit App
Win 10, 64 Bit
RD ver: v2.1.2047
|
process
|
code inspection some comment text resolves as variable open a new ms word document put this code into thisdocument purpose rd false positive private sub sometest nota variable debug print vbnullstring end sub see image below word bit app win bit rd ver
| 1
|
18,019
| 24,032,776,049
|
IssuesEvent
|
2022-09-15 16:18:35
|
googleapis/java-beyondcorp-clientgateways
|
https://api.github.com/repos/googleapis/java-beyondcorp-clientgateways
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'beyondcorp-clientgateways' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'beyondcorp-clientgateways' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname beyondcorp clientgateways invalid in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
| 1
|
404,041
| 27,447,558,908
|
IssuesEvent
|
2023-03-02 15:14:14
|
pharmaverse/admiralophtha
|
https://api.github.com/repos/pharmaverse/admiralophtha
|
closed
|
Documentation: add all authors to DESCRIPTION file
|
documentation Release 1
|
### Please select a category the issue is focused on?
_No response_
### Let us know where something needs a refresh or put your idea here!
add all authors to DESCRIPTION file
|
1.0
|
Documentation: add all authors to DESCRIPTION file - ### Please select a category the issue is focused on?
_No response_
### Let us know where something needs a refresh or put your idea here!
add all authors to DESCRIPTION file
|
non_process
|
documentation add all authors to description file please select a category the issue is focused on no response let us know where something needs a refresh or put your idea here add all authors to description file
| 0
|
325,188
| 24,038,728,496
|
IssuesEvent
|
2022-09-15 22:02:27
|
matplotlib/matplotlib
|
https://api.github.com/repos/matplotlib/matplotlib
|
opened
|
[Doc]: add old sphinx tutorial to our devdocs
|
Documentation
|
### Documentation Link
_No response_
### Problem
We currently have an interesting tutorial at https://matplotlib.org/sampledoc/index.html. However, it is really old and wrong in places (https://github.com/matplotlib/sampledoc/issues/26) and we think some of this should be in the main developer docs. The todo here is to maybe condense and move this sphinx how-to to the main developer documentation. Probably not too hard, but whoever does it should have some sphinx familiarity and check that it works.
### Suggested improvement
_No response_
|
1.0
|
[Doc]: add old sphinx tutorial to our devdocs - ### Documentation Link
_No response_
### Problem
We currently have an interesting tutorial at https://matplotlib.org/sampledoc/index.html. However, it is really old and wrong in places (https://github.com/matplotlib/sampledoc/issues/26) and we think some of this should be in the main developer docs. The todo here is to maybe condense and move this sphinx how-to to the main developer documentation. Probably not too hard, but whoever does it should have some sphinx familiarity and check that it works.
### Suggested improvement
_No response_
|
non_process
|
add old sphinx tutorial to our devdocs documentation link no response problem we currently have an interesting tutorial at however it is really old and wrong in places and we think some of this should be in the main developer docs the todo here is to maybe condense and move this sphinx how to to the main developer documentation probably not too hard but whoever does it should have some sphinx familiarity and check that it works suggested improvement no response
| 0
|
2,570
| 5,325,652,468
|
IssuesEvent
|
2017-02-15 00:30:22
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
opened
|
[subtitles] [german] Il faut faire tomber le mur de l'argent
|
Process: [0] Awaiting subtitles
|
# Video title
Il faut faire tomber le mur de l'argent
# URL
https://www.youtube.com/watch?v=zXWCA8dV8os&t=4s
# Youtube subtitles language
German
# Duration
3:55
# Subtitles URL
https://www.youtube.com/timedtext_editor?v=zXWCA8dV8os&ui=hd&action_mde_edit_form=1&ref=player&lang=de&bl=vmp&tab=captions
|
1.0
|
[subtitles] [german] Il faut faire tomber le mur de l'argent - # Video title
Il faut faire tomber le mur de l'argent
# URL
https://www.youtube.com/watch?v=zXWCA8dV8os&t=4s
# Youtube subtitles language
German
# Duration
3:55
# Subtitles URL
https://www.youtube.com/timedtext_editor?v=zXWCA8dV8os&ui=hd&action_mde_edit_form=1&ref=player&lang=de&bl=vmp&tab=captions
|
process
|
il faut faire tomber le mur de l argent video title il faut faire tomber le mur de l argent url youtube subtitles language german duration subtitles url
| 1
|
438,137
| 30,627,028,089
|
IssuesEvent
|
2023-07-24 12:15:28
|
Redocly/redocly-cli
|
https://api.github.com/repos/Redocly/redocly-cli
|
closed
|
[Docs] Inconsistent tables in OpenAPI CLI command docs
|
documentation good first issue Type: Docs
|
**Is your feature request related to a problem? Please describe.**
We've recently simplified the **Options** table for [the bundle command](https://redoc.ly/docs/cli/commands/bundle/) by reducing the amount of columns and moving the relevant information from deleted columns into the **Description** column.
To make our documentation consistent, we need to do this for all the other OpenAPI CLI commands; namely:
- [join](https://redoc.ly/docs/cli/commands/join/)
- [lint](https://redoc.ly/docs/cli/commands/lint/)
- [login](https://redoc.ly/docs/cli/commands/login/)
- [logout](https://redoc.ly/docs/cli/commands/logout/)
- [preview-docs](https://redoc.ly/docs/cli/commands/preview-docs/)
- [push](https://redoc.ly/docs/cli/commands/push/)
- [split](https://redoc.ly/docs/cli/commands/split/)
- [stats](https://redoc.ly/docs/cli/commands/stats/)
**Describe the solution you'd like**
Every command page should have the **Options** table formatted the same way as the `bundle` page currently has.
**Additional context**
**This task is reserved for our OSS tech writer.** Please do not (re)assign it to anyone else.
|
1.0
|
[Docs] Inconsistent tables in OpenAPI CLI command docs - **Is your feature request related to a problem? Please describe.**
We've recently simplified the **Options** table for [the bundle command](https://redoc.ly/docs/cli/commands/bundle/) by reducing the amount of columns and moving the relevant information from deleted columns into the **Description** column.
To make our documentation consistent, we need to do this for all the other OpenAPI CLI commands; namely:
- [join](https://redoc.ly/docs/cli/commands/join/)
- [lint](https://redoc.ly/docs/cli/commands/lint/)
- [login](https://redoc.ly/docs/cli/commands/login/)
- [logout](https://redoc.ly/docs/cli/commands/logout/)
- [preview-docs](https://redoc.ly/docs/cli/commands/preview-docs/)
- [push](https://redoc.ly/docs/cli/commands/push/)
- [split](https://redoc.ly/docs/cli/commands/split/)
- [stats](https://redoc.ly/docs/cli/commands/stats/)
**Describe the solution you'd like**
Every command page should have the **Options** table formatted the same way as the `bundle` page currently has.
**Additional context**
**This task is reserved for our OSS tech writer.** Please do not (re)assign it to anyone else.
|
non_process
|
inconsistent tables in openapi cli command docs is your feature request related to a problem please describe we ve recently simplified the options table for by reducing the amount of columns and moving the relevant information from deleted columns into the description column to make our documentation consistent we need to do this for all the other openapi cli commands namely describe the solution you d like every command page should have the options table formatted the same way as the bundle page currently has additional context this task is reserved for our oss tech writer please do not re assign it to anyone else
| 0
|
110,529
| 23,951,667,965
|
IssuesEvent
|
2022-09-12 12:03:43
|
Regalis11/Barotrauma
|
https://api.github.com/repos/Regalis11/Barotrauma
|
closed
|
Bots can't repair a wall near Typhon's stowage compartment
|
Bug Code Design
|
### Disclaimers
- [ ] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
If there's damage in the room "Stowage compartment" (by the way, it's the curve in the room), the AI can't fix it; they just try to run in place. If you try to fix it you just need to crouch. 
Where my mouse is at, is where the AI can't fix the damage.
(Sorry if this don't help :p)
### Reproduction steps
1. wait for damage in that room it needs to be in a certain spot.
2. Tell your AI to fix it.
3. and if they don't crouch (I don't know if they do crouch), they should just try to fix it but fail and walk in place
### Bug prevalence
Just once
### Version
0.18.15.0
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
_No response_
|
1.0
|
Bots can't repair a wall near Typhon's stowage compartment - ### Disclaimers
- [ ] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
If there's damage in the room "Stowage compartment" (by the way, it's the curve in the room), the AI can't fix it; they just try to run in place. If you try to fix it you just need to crouch. 
Where my mouse is at, is where the AI can't fix the damage.
(Sorry if this don't help :p)
### Reproduction steps
1. wait for damage in that room it needs to be in a certain spot.
2. Tell your AI to fix it.
3. and if they don't crouch (I don't know if they do crouch), they should just try to fix it but fail and walk in place
### Bug prevalence
Just once
### Version
0.18.15.0
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
_No response_
|
non_process
|
bots can t repair a wall near typhon s stowage compartment disclaimers i have searched the issue tracker to check if the issue has already been reported my issue happened while using mods what happened if there s damage in the room stowage compartment by the way it s the curve in the room the ai can t fix it they just try to run in place if you try to fix it you just need to crouch where my mouse is at is where the ai can t fix the damage sorry if this don t help p reproduction steps wait for damage in that room it needs to be in a certain spot tell your ai to fix it and if they don t crouch i don t know if they do crouch they just should try to fix it but just fail and walk in place bug prevalence just once version no response which operating system did you encounter this bug on windows relevant error messages and crash reports no response
| 0
|
140,960
| 12,952,643,204
|
IssuesEvent
|
2020-07-19 21:20:03
|
chessdb/api
|
https://api.github.com/repos/chessdb/api
|
closed
|
Fix Readme
|
documentation good first issue
|
We could use some of the stuff from the discord server:
**Chess DB API**
https://github.com/chessdb/api
*Everything you need to get it up and running should be available in the `README.md`*
**Docker**
```
$ git clone git@github.com:chessdb/api.git
$ cd api
$ cp .example.env .env
$ docker-compose up -d
```
*For setups without docker*
**GNU/Linux **
```
$ yay -S postgresql pgcli
$ pgcli -U postgres
# \PASSWORD // set your password to 'postgres'
# CREATE DATABASE chessdb_api;
# \q
$ git clone git@github.com:chessdb/api.git
$ cd api
$ cp .example.env .env
$ make run
```
|
1.0
|
Fix Readme - We could use some of the stuff from the discord server:
**Chess DB API**
https://github.com/chessdb/api
*Everything you need to get it up and running should be available in the `README.md`*
**Docker**
```
$ git clone git@github.com:chessdb/api.git
$ cd api
$ cp .example.env .env
$ docker-compose up -d
```
*For setups without docker*
**GNU/Linux **
```
$ yay -S postgresql pgcli
$ pgcli -U postgres
# \PASSWORD // set your password to 'postgres'
# CREATE DATABASE chessdb_api;
# \q
$ git clone git@github.com:chessdb/api.git
$ cd api
$ cp .example.env .env
$ make run
```
|
non_process
|
fix readme we could use some of the stuff from the discord server chess db api everything you need to get it up and running should be available in the readme md docker git clone git github com chessdb api git cd api cp example env env docker compose up d for setups without docker gnu linux yay s postgresql pgcli pgcli u postgres password set your password to postgres create database chessdb api q git clone git github com chessdb api git cd api cp example env env make run
| 0
|
616,184
| 19,295,760,115
|
IssuesEvent
|
2021-12-12 15:06:51
|
Zettlr/Zettlr
|
https://api.github.com/repos/Zettlr/Zettlr
|
closed
|
Update to Vue 3 and use composition interface
|
pinned priority:low develop
|
For (better) typescript support, I would propose to upgrade to Vue 3 and use the new [Composition Interface](https://v3.vuejs.org/api/composition-api.html). (There is also a npm package to use the composition interface with vue 2 if upgrading is not possible for some reason.)
|
1.0
|
Update to Vue 3 and use composition interface - For (better) typescript support, I would propose to upgrade to Vue 3 and use the new [Composition Interface](https://v3.vuejs.org/api/composition-api.html). (There is also a npm package to use the composition interface with vue 2 if upgrading is not possible for some reason.)
|
non_process
|
update to vue and use composition interface for better typescript support i would propose to upgrade to vue and use the new there is also a npm package to use the composition interface with vue if upgrading is not possible for some reason
| 0
|
16,140
| 9,276,428,367
|
IssuesEvent
|
2019-03-20 02:53:18
|
tripal/tripal
|
https://api.github.com/repos/tripal/tripal
|
closed
|
Bulk loader to featureloc can be very slow
|
performance tripal-7.x-3.x
|
Copying from Drupal issue queue: https://www.drupal.org/node/1816142
When loading records to the featureloc table the loading can be extremely slow. This is the case when the srcfeature_id is for a large chromosome already exists, is being selected for by the loader and has the 'residues' field fully populated with DNA sequence. It takes a while to retrieve and transfer that large of a record so it slows the loader to a crawl. In, the case of a select, perhaps we should only return the field requested and the PK and FK ids for later "referral" joins. The bulk loader already supports joining of tables on non-FK fields so, in this event the non-FK field is automatically added when needed so limiting the fields returned to just the requested field, the PK and FK should be fine (I believe).
|
True
|
Bulk loader to featureloc can be very slow - Copying from Drupal issue queue: https://www.drupal.org/node/1816142
When loading records to the featureloc table the loading can be extremely slow. This is the case when the srcfeature_id is for a large chromosome already exists, is being selected for by the loader and has the 'residues' field fully populated with DNA sequence. It takes a while to retrieve and transfer that large of a record so it slows the loader to a crawl. In, the case of a select, perhaps we should only return the field requested and the PK and FK ids for later "referral" joins. The bulk loader already supports joining of tables on non-FK fields so, in this event the non-FK field is automatically added when needed so limiting the fields returned to just the requested field, the PK and FK should be fine (I believe).
|
non_process
|
bulk loader to featureloc can be very slow copying from drupal issue queue when loading records to the featureloc table the loading can be extremely slow this is the case when the srcfeature id is for a large chromosome already exists is being selected for by the loader and has the residues field fully populated with dna sequence it takes a while to retrieve and transfer that large of a record so it slows the loader to a crawl in the case of a select perhaps we should only return the field requested and the pk and fk ids for later referral joins the bulk loader already supports joining of tables on non fk fields so in this event the non fk field is automatically added when needed so limiting the fields returned to just the requested field the pk and fk should be fine i believe
| 0
|
20,132
| 26,673,428,020
|
IssuesEvent
|
2023-01-26 12:23:08
|
bitfocus/companion-module-requests
|
https://api.github.com/repos/bitfocus/companion-module-requests
|
closed
|
RLM w12 - generic http request
|
NOT YET PROCESSED
|
Hi . I have a good old Barco RLM-W12.
and would like to control power+shutter from companion.
I found the #2 and also info in the #187 , but still fairly stuck.
The command: http://projectorip/tgi/remote.tgi?onky and offky work fine from a browser, but I'm getting nowhere from within companion.
I'm supposed to connect it with generic http request? using POST-command? txt-plain? Tried them all to no effect so far.
Any help appreciated here..
Would you also know the commands for picture mute?
the command I used previously via serial is op picture.mute = 1 , is there another syntax for http?
thanks.
|
1.0
|
RLM w12 - generic http request - Hi . I have a good old Barco RLM-W12.
and would like to control power+shutter from companion.
I found the #2 and also info in the #187 , but still fairly stuck.
The command: http://projectorip/tgi/remote.tgi?onky and offky work fine from a browser, but I'm getting nowhere from within companion.
I'm supposed to connect it with generic http request? using POST-command? txt-plain? Tried them all to no effect so far.
Any help appreciated here..
Would you also know the commands for picture mute?
the command I used previously via serial is op picture.mute = 1 , is there another syntax for http?
thanks.
|
process
|
rlm generic http request hi i have a good old barco rlm and would like to control power shutter from companion i found the and also info in the but still fairly stuck the command and offky work fine from a browser but i m getting nowhere from within companion i m supposed to connect it with generic http request using post command txt plain tried them all to no effect so far any help appreciated here would you also know the commands for picture mute the command i used previously via serial is op picture mute is there another syntax for http thanks
| 1
|
9,980
| 13,022,976,123
|
IssuesEvent
|
2020-07-27 09:13:30
|
keep-network/keep-ecdsa
|
https://api.github.com/repos/keep-network/keep-ecdsa
|
closed
|
Bonded sortition pool re-joining after running out of Eth
|
process & client team 🐛 bug 📟 client
|
Over the past couple of days I've observed that once a node gets kicked from the bonded sortition pool, because it ran out of Eth for bonding, it has a hard time rejoining. Calls to `RegisterAsMemberCandidate` in `pkg/chain/ethereum/ethereum.go` are queued, I think, but it seems like they're never executed even if more Eth are added for bonding.
1. Run out of Eth for bonding
2. Operator gets kicked from bonded sortition pool
3. Node logs `Operator not eligible` errors
4. Add more Eth for bonding
5. Node continues to log errors
A restart of the node remedies this, which is not perfect.
I was also able to fix it by just calling `registerMemberCandidate` on chain via an external script. This returns the node to normal operation but also requires outside intervention
|
1.0
|
Bonded sortition pool re-joining after running out of Eth - Over the past couple of days I've observed that once a node gets kicked from the bonded sortition pool, because it ran out of Eth for bonding, it has a hard time rejoining. Calls to `RegisterAsMemberCandidate` in `pkg/chain/ethereum/ethereum.go` are queued, I think, but it seems like they're never executed even if more Eth are added for bonding.
1. Run out of Eth for bonding
2. Operator gets kicked from bonded sortition pool
3. Node logs `Operator not eligible` errors
4. Add more Eth for bonding
5. Node continues to log errors
A restart of the node remedies this, which is not perfect.
I was also able to fix it by just calling `registerMemberCandidate` on chain via an external script. This returns the node to normal operation but also requires outside intervention
|
process
|
bonded sortition pool re joining after running out of eth over the past couple of days i ve observed that once a node gets kicked from the bonded sortition pool because it ran out of eth for bonding it has a hard time rejoining calls to registerasmembercandidate in pkg chain ethereum ethereum go are queued i think but it seems like they re never executed even if more eth are added for bonding run out of eth for bonding operator gets kicked from bonded sortition pool node logs operator not eligible errors add more eth for bonding node continues to log errors a restart of the node remedies this which is not perfect i was also able to fix it by just calling registermembercandidate on chain via an external script this returns the node to normal operation but also requires outside intervention
| 1
|
17,284
| 23,093,112,056
|
IssuesEvent
|
2022-07-26 16:50:15
|
googleapis/google-cloud-go
|
https://api.github.com/repos/googleapis/google-cloud-go
|
closed
|
chore(repo): revert mergeCommitAllowed sync-repo-setting
|
type: process
|
Once the work with `storage-refactor` branch is complete and the branch is merged into `main`, we must again disable `mergeCommitAllowed` and remove the branch settings for `storage-refactor`. These changes were introduced in https://github.com/googleapis/google-cloud-go/pull/6357.
|
1.0
|
chore(repo): revert mergeCommitAllowed sync-repo-setting - Once the work with `storage-refactor` branch is complete and the branch is merged into `main`, we must again disable `mergeCommitAllowed` and remove the branch settings for `storage-refactor`. These changes were introduced in https://github.com/googleapis/google-cloud-go/pull/6357.
|
process
|
chore repo revert mergecommitallowed sync repo setting once the work with storage refactor branch is complete and the branch is merged into main we must again disable mergecommitallowed and remove the branch settings for storage refactor these changes were introduced in
| 1
|
7,987
| 11,182,898,803
|
IssuesEvent
|
2019-12-31 10:41:43
|
arunkumar9t2/scabbard
|
https://api.github.com/repos/arunkumar9t2/scabbard
|
closed
|
Support incremental annotation processing
|
module:processor question
|
I remember seeing something somewhere that scabbard disables it (though didn't see anything obvious in code about this). Would be good to have the reasons documented and filing an issue for tracking it. Feel free to close if it's a technical limitation, but would be good to add this in the FAQ
|
1.0
|
Support incremental annotation processing - I remember seeing something somewhere that scabbard disables it (though didn't see anything obvious in code about this). Would be good to have the reasons documented and filing an issue for tracking it. Feel free to close if it's a technical limitation, but would be good to add this in the FAQ
|
process
|
support incremental annotation processing i remember seeing something somewhere that scabbard disables it though didn t see anything obvious in code about this would be good to have the reasons documented and filing an issue for tracking it feel free to close if it s a technical limitation but would be good to add this in the faq
| 1
|
2,632
| 5,411,355,474
|
IssuesEvent
|
2017-03-01 11:24:59
|
DynareTeam/dynare
|
https://api.github.com/repos/DynareTeam/dynare
|
closed
|
preprocessor: static params derivatives don't use temporary terms
|
bug preprocessor
|
see `kim2_static_params_derivs.m` produced by `identification/kim/kim2.mod`
|
1.0
|
preprocessor: static params derivatives don't use temporary terms - see `kim2_static_params_derivs.m` produced by `identification/kim/kim2.mod`
|
process
|
preprocessor static params derivatives don t use temporary terms see static params derivs m produced by identification kim mod
| 1
|
46,998
| 7,298,726,545
|
IssuesEvent
|
2018-02-26 17:49:51
|
gravitational/teleport
|
https://api.github.com/repos/gravitational/teleport
|
closed
|
Add docs on backwards and forwards compatibility
|
documentation
|
## Description
We need to nicely explain this to users, so leaving up to @kontsevoy. Here is what we guarantee and test:
### Versioning
Teleport uses and follows sem ver notation, and follows the convention:
https://semver.org/
### Compatibility
**Major versions**
Major versions of teleport components are never compatible, which means that 4.0.0 is not compatible with 5.0.0. There may be exceptions to this rule, but we will explicitly state it for individual verisons.
**Upgrades within the cluster**
To understand backwards/ and forwards compatiblity, one needs to understand how teleport is expected to be updated:
* First auth servers have to be updated, because they serve API requests and run migrations. This also means that during upgrades auth servers have to be scaled down to 1 so migrations will not confuse other auth servers running requests.
* Then proxies have to be upgraded (old proxies will work with newer auth servers, but not vice-versa)
* Then nodes will have to be upgraded (old nodes will work with newer auth servers and proxies, but not vice versa)
* Then tsh clients have to be upgraded (old tsh clients will work with newer servers, but not vice versa)
**Upgrades in trusted clusters**
Pretty much the same logic applies to the trusted clusters:
* First main cluster has to be upgraded to the minor version (usually that's also easier to do because org usually controls the main cluster, but not always remote clusters)
* Then all trusted clusters can be upgraded
**Compatibility**
Minor versions are backwards compatible up to one version. Patch versions are interchangeable - no api changes are introduced in patches. Gravitational team makes sure this holds true. No other guarantees are provided. It does not mean that necessarily things will go terribly wrong - it just means that gravitational team does not test for these use cases (as there will be too many combinations and it will be too expensive)
**In Cluster: OK**
| Auth | Proxy | Node | Tsh | Description|
| ------------- |-------------| -----|----|------|
| 2.5.0 | 2.4.0 | 2.4.0 | 2.4.0 |Auth server is always newer version|
| 2.5.0 | 2.5.0 | 2.4.0 | 2.4.0 | |
| 2.5.0 | 2.5.0 | 2.5.0 | 2.4.0 | |
| 2.5.3 | 2.5.4 | 2.5.8 | 2.5.1 |Patches of the same version are interchangeable|
**In Cluster: NOT OK**
Pretty much anything else than described above, but here are some common examples:
| Auth | Proxy | Node | Tsh | Description|
| ------------- |-------------| -----|----|------|
| 2.4.0 | 2.5.0 | 2.4.0 | 2.4.0 |Proxy should not be newer version|
| 2.4.0 | 2.5.0 | 2.4.0 | 2.5.0 | Tsh should not be newer version than old auth server|
| 2.5.0 | 2.3.0 | 2.3.0 | 2.3.0 | More than one minor version diff is not ok|
**Trusted clusters: OK**
| Main Cluster | Remote Cluster | Description|
| ------------- |-------------| -----|
| 2.5.0 | 2.4.0 |Main cluster is newer|
| 2.5.5 | 2.5.6 |Patch versions are interchangeable|
**Trusted clusters: OK**
| Main Cluster | Remote Cluster | Description|
| ------------- |-------------| -----|
| 2.4.0 | 2.5.0 |Remote cluster can not be newer|
| 2.4.0 | 2.3.0 |More than one version diff|
|
1.0
|
Add docs on backwards and forwards compatibility - ## Description
We need to nicely explain this to users, so leaving up to @kontsevoy. Here is what we guarantee and test:
### Versioning
Teleport uses and follows sem ver notation, and follows the convention:
https://semver.org/
### Compatibility
**Major versions**
Major versions of teleport components are never compatible, which means that 4.0.0 is not compatible with 5.0.0. There may be exceptions to this rule, but we will explicitly state it for individual verisons.
**Upgrades within the cluster**
To understand backwards/ and forwards compatiblity, one needs to understand how teleport is expected to be updated:
* First auth servers have to be updated, because they serve API requests and run migrations. This also means that during upgrades auth servers have to be scaled down to 1 so migrations will not confuse other auth servers running requests.
* Then proxies have to be upgraded (old proxies will work with newer auth servers, but not vice-versa)
* Then nodes will have to be upgraded (old nodes will work with newer auth servers and proxies, but not vice versa)
* Then tsh clients have to be upgraded (old tsh clients will work with newer servers, but not vice versa)
**Upgrades in trusted clusters**
Pretty much the same logic applies to the trusted clusters:
* First main cluster has to be upgraded to the minor version (usually that's also easier to do because org usually controls the main cluster, but not always remote clusters)
* Then all trusted clusters can be upgraded
**Compatibility**
Minor versions are backwards compatible up to one version. Patch versions are interchangeable - no api changes are introduced in patches. Gravitational team makes sure this holds true. No other guarantees are provided. It does not mean that necessarily things will go terribly wrong - it just means that gravitational team does not test for these use cases (as there will be too many combinations and it will be too expensive)
**In Cluster: OK**
| Auth | Proxy | Node | Tsh | Description|
| ------------- |-------------| -----|----|------|
| 2.5.0 | 2.4.0 | 2.4.0 | 2.4.0 |Auth server is always newer version|
| 2.5.0 | 2.5.0 | 2.4.0 | 2.4.0 | |
| 2.5.0 | 2.5.0 | 2.5.0 | 2.4.0 | |
| 2.5.3 | 2.5.4 | 2.5.8 | 2.5.1 |Patches of the same version are interchangeable|
**In Cluster: NOT OK**
Pretty much anything else than described above, but here are some common examples:
| Auth | Proxy | Node | Tsh | Description|
| ------------- |-------------| -----|----|------|
| 2.4.0 | 2.5.0 | 2.4.0 | 2.4.0 |Proxy should not be newer version|
| 2.4.0 | 2.5.0 | 2.4.0 | 2.5.0 | Tsh should not be newer version than old auth server|
| 2.5.0 | 2.3.0 | 2.3.0 | 2.3.0 | More than one minor version diff is not ok|
**Trusted clusters: OK**
| Main Cluster | Remote Cluster | Description|
| ------------- |-------------| -----|
| 2.5.0 | 2.4.0 |Main cluster is newer|
| 2.5.5 | 2.5.6 |Patch versions are interchangeable|
**Trusted clusters: OK**
| Main Cluster | Remote Cluster | Description|
| ------------- |-------------| -----|
| 2.4.0 | 2.5.0 |Remote cluster can not be newer|
| 2.4.0 | 2.3.0 |More than one version diff|
|
non_process
|
add docs on backwards and forwards compatibility description we need to nicely explain this to users so leaving up to kontsevoy here is what we guarantee and test versioning teleport uses and follows sem ver notation and follows the convention compatibility major versions major versions of teleport components are never compatible which means that is not compatible with there may be exceptions to this rule but we will explicitly state it for individual verisons upgrades within the cluster to understand backwards and forwards compatiblity one needs to understand how teleport is expected to be updated first auth servers have to be updated because they serve api requests and run migrations this also means that during upgrades auth servers have to be scaled down to so migrations will not confuse other auth servers running requests then proxies have to be upgraded old proxies will work with newer auth servers but not vice versa then nodes will have to be upgraded old nodes will work with newer auth servers and proxies but not vice versa then tsh clients have to be upgraded old tsh clients will work with newer servers but not vice versa upgrades in trusted clusters pretty much the same logic applies to the trusted clusters first main cluster has to be upgraded to the minor version usually that s also easier to do because org usually controls the main cluster but not always remote clusters then all trusted clusters can be upgraded compatibility minor versions are backwards compatible up to one version patch versions are interchangeable no api changes are introduced in patches gravitational team makes sure this holds true no other guarantees are provided it does not mean that necessarily things will go terribly wrong it just means that gravitational team does not test for these use cases as there will be too many combinations and it will be too expensive in cluster ok auth proxy node tsh description auth server is always newer version patches of the same version are interchangeable in cluster not ok pretty much anything else than described above but here are some common examples auth proxy node tsh description proxy should not be newer version tsh should not be newer version than old auth server more than one minor version diff is not ok trusted clusters ok main cluster remote cluster description main cluster is newer patch versions are interchangeable trusted clusters ok main cluster remote cluster description remote cluster can not be newer more than one version diff
| 0
|
19,630
| 25,986,531,586
|
IssuesEvent
|
2022-12-20 01:09:37
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
Analista de Qualidade e Processos na [SOLUTIS]
|
SALVADOR CMMI Certificação ITIL PROCESSOS ITIL HELP WANTED Stale
|
<!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS!
Use: "Desenvolvedor Front-end" ao invés de
"Front-End Developer" \o/
Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Descrição da vaga
- Engajamento, busca intensa por conhecimento, empatia e criatividade são a nossa receita para cultivar e colher, sempre, o melhor resultado.
- Buscamos profissional para atuar na área de Qualidade e Processos e que curtam trabalhar em um ambiente colaborativo e voltado para soluções.
## Atribuições
- Gestão e implantação de processos ITIL / ISO 20000 / ISO 27000;
- Gestão de processos ISO 9001:2015;
- Elaboração de planos de ação para eliminação de não conformidades;
- Revisão e melhoria de processos;
- Realização de treinamentos;
- Modelar e definir processos;
- Participar do processo de auditoria interna e externa;
## Local
- Salvador
## Requisitos
**Obrigatórios:**
- Experiência em processos ITIL / ISO 20000, ISO 27000 e ISO 9001:2015 elaboração, implantação e gestão;
- Experiência em auditorias internas e externas normas ISO 20000, 9001:2015;
- Experiência e /ou conhecimento em implantação, auditoria e melhoria de processos de governança de serviços como: Gerenciamento de Mudanças, Riscos, Problemas, Capacidade, Disponibilidade, Configuração etc;
- Conhecimento e ou experiência em CMMI;
- Superior completo;
- Formação Auditor Interno ISO 9001:2015;
**Diferenciais certificações em :**
- Certificação ITIL V3 ou superior;
- Certificação ITSM ISO/IEC 20000;
- HDI - Support Center Team Lead (SCTL)
- HDI - Support Center Manager (SCM)
- HDI - Knowledge-Centered Support (KCS)
- HDI - Support Center Director (SCD)
## Contratação
- a combinar
## Nossa empresa
- Somos apaixonados por tecnologia e esse valor está presente em nossas ações e proposta de trabalho. Ambiente descontraído e criativo, horário flexível, possibilidade de trabalho home office, eventos e programas internos com games... Isso faz parte do nosso dia-a-dia.
## Como se candidatar
- [Clique aqui para se candidatar](https://solutis.gupy.io/jobs/135584?jobBoardSource=gupy_public_page)
|
1.0
|
Analista de Qualidade e Processos na [SOLUTIS] - <!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS!
Use: "Desenvolvedor Front-end" ao invés de
"Front-End Developer" \o/
Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Descrição da vaga
- Engajamento, busca intensa por conhecimento, empatia e criatividade são a nossa receita para cultivar e colher, sempre, o melhor resultado.
- Buscamos profissional para atuar na área de Qualidade e Processos e que curtam trabalhar em um ambiente colaborativo e voltado para soluções.
## Atribuições
- Gestão e implantação de processos ITIL / ISO 20000 / ISO 27000;
- Gestão de processos ISO 9001:2015;
- Elaboração de planos de ação para eliminação de não conformidades;
- Revisão e melhoria de processos;
- Realização de treinamentos;
- Modelar e definir processos;
- Participar do processo de auditoria interna e externa;
## Local
- Salvador
## Requisitos
**Obrigatórios:**
- Experiência em processos ITIL / ISO 20000, ISO 27000 e ISO 9001:2015 elaboração, implantação e gestão;
- Experiência em auditorias internas e externas normas ISO 20000, 9001:2015;
- Experiência e /ou conhecimento em implantação, auditoria e melhoria de processos de governança de serviços como: Gerenciamento de Mudanças, Riscos, Problemas, Capacidade, Disponibilidade, Configuração etc;
- Conhecimento e ou experiência em CMMI;
- Superior completo;
- Formação Auditor Interno ISO 9001:2015;
**Diferenciais certificações em :**
- Certificação ITIL V3 ou superior;
- Certificação ITSM ISO/IEC 20000;
- HDI - Support Center Team Lead (SCTL)
- HDI - Support Center Manager (SCM)
- HDI - Knowledge-Centered Support (KCS)
- HDI - Support Center Director (SCD)
## Contratação
- a combinar
## Nossa empresa
- Somos apaixonados por tecnologia e esse valor está presente em nossas ações e proposta de trabalho. Ambiente descontraído e criativo, horário flexível, possibilidade de trabalho home office, eventos e programas internos com games... Isso faz parte do nosso dia-a-dia.
## Como se candidatar
- [Clique aqui para se candidatar](https://solutis.gupy.io/jobs/135584?jobBoardSource=gupy_public_page)
|
process
|
analista de qualidade e processos na por favor só poste se a vaga for para salvador e cidades vizinhas use desenvolvedor front end ao invés de front end developer o exemplo desenvolvedor front end na descrição da vaga engajamento busca intensa por conhecimento empatia e criatividade são a nossa receita para cultivar e colher sempre o melhor resultado buscamos profissional para atuar na área de qualidade e processos e que curtam trabalhar em um ambiente colaborativo e voltado para soluções atribuições gestão e implantação de processos itil iso iso gestão de processos iso elaboração de planos de ação para eliminação de não conformidades revisão e melhoria de processos realização de treinamentos modelar e definir processos participar do processo de auditoria interna e externa local salvador requisitos obrigatórios experiência em processos itil iso iso e iso elaboração implantação e gestão experiência em auditorias internas e externas normas iso experiência e ou conhecimento em implantação auditoria e melhoria de processos de governança de serviços como gerenciamento de mudanças riscos problemas capacidade disponibilidade configuração etc conhecimento e ou experiência em cmmi superior completo formação auditor interno iso diferenciais certificações em certificação itil ou superior certificação itsm iso iec hdi support center team lead sctl hdi support center manager scm hdi knowledge centered support kcs hdi support center director scd contratação a combinar nossa empresa somos apaixonados por tecnologia e esse valor está presente em nossas ações e proposta de trabalho ambiente descontraído e criativo horário flexível possibilidade de trabalho home office eventos e programas internos com games isso faz parte do nosso dia a dia como se candidatar
| 1
|
320,408
| 23,810,153,305
|
IssuesEvent
|
2022-09-04 16:36:25
|
MEGA65/mega65-core
|
https://api.github.com/repos/MEGA65/mega65-core
|
closed
|
VIC-IV FNRSTCMP register description is wrong
|
documentation
|
**Describe where we can find the problematic topic**
This is a documentation only issue, but since the code to generate the manual tables is placed in mega65-core repo, we need to fix this here.
Section `VIC-IV / MEGA65 SPECIFIC REGISTERS` says `FNRSTCMP Raster compare is in physical rasters if set, or VIC-II raster if clear`. This is wrong, the bit is working the other way round.
**Describe the solution you'd like**
Fix the description, and also add some clarification that
- `FNRST` is a read-only copy of `FNRSTCMP`
- `RASCMP` is only used if physical rasters are enabled via `FNRSTCMP`
|
1.0
|
VIC-IV FNRSTCMP register description is wrong - **Describe where we can find the problematic topic**
This is a documentation only issue, but since the code to generate the manual tables is placed in mega65-core repo, we need to fix this here.
Section `VIC-IV / MEGA65 SPECIFIC REGISTERS` says `FNRSTCMP Raster compare is in physical rasters if set, or VIC-II raster if clear`. This is wrong, the bit is working the other way round.
**Describe the solution you'd like**
Fix the description, and also add some clarification that
- `FNRST` is a read-only copy of `FNRSTCMP`
- `RASCMP` is only used if physical rasters are enabled via `FNRSTCMP`
|
non_process
|
vic iv fnrstcmp register description is wrong describe where we can find the problematic topic this is a documentation only issue but since the code to generate the manual tables is placed in core repo we need to fix this here section vic iv specific registers says fnrstcmp raster compare is in physical rasters if set or vic ii raster if clear this is wrong the bit is working the other way round describe the solution you d like fix the description and also add some clarification that fnrst is a read only copy of fnrstcmp rascmp is only used if physical rasters are enabled via fnrstcmp
| 0
|
19,915
| 26,378,605,262
|
IssuesEvent
|
2023-01-12 06:16:18
|
taikoxyz/taiko-mono
|
https://api.github.com/repos/taikoxyz/taiko-mono
|
closed
|
fix(bridge-ui): bridge error, transaction still pending
|
bridge feedback-processed
|
### Describe the bug
My bridge from taiko to A1 ethereum is still pending from yesterday, but my eth its already in my A1 eth I don't know how to explain but yesterday im try to bridge 5 eth from taiko to A1 eth and that's it's happen you can see in my pic



https://l2explorer.a1.taiko.xyz/tx/0xfceed506f97a3636487710e277905be83f15bf3908bda972280d8186c91ef4da
### Steps to reproduce
Steps to reproduce here.
### Additional context
Additional context here.
|
1.0
|
fix(bridge-ui): bridge error, transaction still pending - ### Describe the bug
My bridge from taiko to A1 ethereum is still pending from yesterday, but my eth its already in my A1 eth I don't know how to explain but yesterday im try to bridge 5 eth from taiko to A1 eth and that's it's happen you can see in my pic



https://l2explorer.a1.taiko.xyz/tx/0xfceed506f97a3636487710e277905be83f15bf3908bda972280d8186c91ef4da
### Steps to reproduce
Steps to reproduce here.
### Additional context
Additional context here.
|
process
|
fix bridge ui bridge error transaction still pending describe the bug my bridge from taiko to ethereum is still pending from yesterday but my eth its already in my eth i don t know how to explain but yesterday im try to bridge eth from taiko to eth and that s it s happen you can see in my pic steps to reproduce steps to reproduce here additional context additional context here
| 1
|
21,931
| 30,446,559,422
|
IssuesEvent
|
2023-07-15 18:48:32
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
pyutils 0.0.1b17 has 2 GuardDog issues
|
guarddog typosquatting silent-process-execution
|
https://pypi.org/project/pyutils
https://inspector.pypi.io/project/pyutils
```{
"dependency": "pyutils",
"version": "0.0.1b17",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pytils, python-utils",
"silent-process-execution": [
{
"location": "pyutils/exec_utils.py/pyutils/exec_utils.py:205",
"code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmp15hukcr_/pyutils"
}
}```
|
1.0
|
pyutils 0.0.1b17 has 2 GuardDog issues - https://pypi.org/project/pyutils
https://inspector.pypi.io/project/pyutils
```{
"dependency": "pyutils",
"version": "0.0.1b17",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pytils, python-utils",
"silent-process-execution": [
{
"location": "pyutils/exec_utils.py/pyutils/exec_utils.py:205",
"code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmp15hukcr_/pyutils"
}
}```
|
process
|
pyutils has guarddog issues dependency pyutils version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt pytils python utils silent process execution location pyutils exec utils py pyutils exec utils py code subproc subprocess popen n args n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp pyutils
| 1
|
186,161
| 21,920,061,112
|
IssuesEvent
|
2022-05-22 12:38:49
|
turkdevops/sourcegraph
|
https://api.github.com/repos/turkdevops/sourcegraph
|
closed
|
CVE-2021-23341 (High) detected in prismjs-1.16.0.tgz - autoclosed
|
security vulnerability
|
## CVE-2021-23341 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>prismjs-1.16.0.tgz</b></p></summary>
<p>Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.</p>
<p>Library home page: <a href="https://registry.npmjs.org/prismjs/-/prismjs-1.16.0.tgz">https://registry.npmjs.org/prismjs/-/prismjs-1.16.0.tgz</a></p>
<p>
Dependency Hierarchy:
- components-5.3.18.tgz (Root Library)
- react-syntax-highlighter-11.0.2.tgz
- :x: **prismjs-1.16.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/sourcegraph/commit/5a4a7def9ddff6354e22069c494feb0f30196e36">5a4a7def9ddff6354e22069c494feb0f30196e36</a></p>
<p>Found in base branch: <b>dev/seed-tool</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package prismjs before 1.23.0 are vulnerable to Regular Expression Denial of Service (ReDoS) via the prism-asciidoc, prism-rest, prism-tap and prism-eiffel components.
<p>Publish Date: 2021-02-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23341>CVE-2021-23341</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341</a></p>
<p>Release Date: 2021-02-18</p>
<p>Fix Resolution (prismjs): 1.23.0</p>
<p>Direct dependency fix Resolution (@storybook/components): 6.4.22</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23341 (High) detected in prismjs-1.16.0.tgz - autoclosed - ## CVE-2021-23341 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>prismjs-1.16.0.tgz</b></p></summary>
<p>Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.</p>
<p>Library home page: <a href="https://registry.npmjs.org/prismjs/-/prismjs-1.16.0.tgz">https://registry.npmjs.org/prismjs/-/prismjs-1.16.0.tgz</a></p>
<p>
Dependency Hierarchy:
- components-5.3.18.tgz (Root Library)
- react-syntax-highlighter-11.0.2.tgz
- :x: **prismjs-1.16.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/sourcegraph/commit/5a4a7def9ddff6354e22069c494feb0f30196e36">5a4a7def9ddff6354e22069c494feb0f30196e36</a></p>
<p>Found in base branch: <b>dev/seed-tool</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package prismjs before 1.23.0 are vulnerable to Regular Expression Denial of Service (ReDoS) via the prism-asciidoc, prism-rest, prism-tap and prism-eiffel components.
<p>Publish Date: 2021-02-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23341>CVE-2021-23341</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341</a></p>
<p>Release Date: 2021-02-18</p>
<p>Fix Resolution (prismjs): 1.23.0</p>
<p>Direct dependency fix Resolution (@storybook/components): 6.4.22</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in prismjs tgz autoclosed cve high severity vulnerability vulnerable library prismjs tgz lightweight robust elegant syntax highlighting a spin off project from dabblet library home page a href dependency hierarchy components tgz root library react syntax highlighter tgz x prismjs tgz vulnerable library found in head commit a href found in base branch dev seed tool vulnerability details the package prismjs before are vulnerable to regular expression denial of service redos via the prism asciidoc prism rest prism tap and prism eiffel components publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution prismjs direct dependency fix resolution storybook components step up your open source security game with whitesource
| 0
|
90,935
| 11,452,992,090
|
IssuesEvent
|
2020-02-06 14:40:06
|
creativecommons/vocabulary
|
https://api.github.com/repos/creativecommons/vocabulary
|
closed
|
Move Cards from Figma LT to Figma DL
|
aspect:design
|
**Description**
We need to move the Cards page to the Figma Design Library. Card names have been discussed and approved.
|
1.0
|
Move Cards from Figma LT to Figma DL - **Description**
We need to move the Cards page to the Figma Design Library. Card names have been discussed and approved.
|
non_process
|
move cards from figma lt to figma dl description we need to move the cards page to the figma design library card names have been discussed and approved
| 0
|
305,594
| 23,122,233,864
|
IssuesEvent
|
2022-07-27 23:12:53
|
Tenpi/Moebooru.moe
|
https://api.github.com/repos/Tenpi/Moebooru.moe
|
closed
|
Installation/Running Directions Missing from Readme
|
documentation
|
It looks like the readme has no information about how to host this site, some documentation would be tremendously useful!
|
1.0
|
Installation/Running Directions Missing from Readme - It looks like the readme has no information about how to host this site, some documentation would be tremendously useful!
|
non_process
|
installation running directions missing from readme it looks like the readme has no information about how to host this site some documentation would be tremendously useful
| 0
|
13,660
| 16,375,983,512
|
IssuesEvent
|
2021-05-16 04:59:13
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
Request in QGIS (Add undo/redo support to model designer)
|
3.14 Graphical modeler Processing
|
### Request for documentation
From pull request QGIS/qgis#34938
Author: @nyalldawson
QGIS version: 3.14
**Add undo/redo support to model designer**
### PR Description:
Makes QGIS more forgiving for users!
Refs NRCan Contract#3000707093
Works just like you'd expect:

Note that we save the whole model definition in the undo stack, not just the affect component changes. This was a lesson I learnt after the composer redesign -- it's just safest to save and restore the WHOLE document then try to deal with interconnected changes.
### Commits tagged with [need-docs] or [FEATURE]
"[FEATURE][processing] Add undo/redo support to model designer\n\nMakes QGIS more forgiving for users!\n\nSponsored by NRCan"
|
1.0
|
Request in QGIS (Add undo/redo support to model designer) - ### Request for documentation
From pull request QGIS/qgis#34938
Author: @nyalldawson
QGIS version: 3.14
**Add undo/redo support to model designer**
### PR Description:
Makes QGIS more forgiving for users!
Refs NRCan Contract#3000707093
Works just like you'd expect:

Note that we save the whole model definition in the undo stack, not just the affect component changes. This was a lesson I learnt after the composer redesign -- it's just safest to save and restore the WHOLE document then try to deal with interconnected changes.
### Commits tagged with [need-docs] or [FEATURE]
"[FEATURE][processing] Add undo/redo support to model designer\n\nMakes QGIS more forgiving for users!\n\nSponsored by NRCan"
|
process
|
request in qgis add undo redo support to model designer request for documentation from pull request qgis qgis author nyalldawson qgis version add undo redo support to model designer pr description makes qgis more forgiving for users refs nrcan contract works just like you d expect note that we save the whole model definition in the undo stack not just the affect component changes this was a lesson i learnt after the composer redesign it s just safest to save and restore the whole document then try to deal with interconnected changes commits tagged with or add undo redo support to model designer n nmakes qgis more forgiving for users n nsponsored by nrcan
| 1
|
26,045
| 4,559,469,637
|
IssuesEvent
|
2016-09-14 02:26:22
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
postLink blackholed after update to 2.8.7
|
Defect
|
This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [x] feature-discussion (RFC)
* CakePHP Version: 2.8.7.
* Platform and Target: MAMP, PHP 5.6, MYSQL.
### What you did
After update to 2.8.7, postLinks are blackholed.
Tested in a clean installation with Security component, cake bake, delete postLink is blackholed.
Reverting this commit 5253f0b to 2.8.6 version, it works again.
|
1.0
|
postLink blackholed after update to 2.8.7 - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [x] feature-discussion (RFC)
* CakePHP Version: 2.8.7.
* Platform and Target: MAMP, PHP 5.6, MYSQL.
### What you did
After update to 2.8.7, postLinks are blackholed.
Tested in a clean installation with Security component, cake bake, delete postLink is blackholed.
Reverting this commit 5253f0b to 2.8.6 version, it works again.
|
non_process
|
postlink blackholed after update to this is a multiple allowed bug enhancement feature discussion rfc cakephp version platform and target mamp php mysql what you did after update to postlinks are blackholed tested in a clean installation with security component cake bake delete postlink is blackholed reverting this commit to version it works again
| 0
|
155,340
| 19,785,114,301
|
IssuesEvent
|
2022-01-18 05:15:32
|
bithyve/hexa
|
https://api.github.com/repos/bithyve/hexa
|
closed
|
App generated password:Unable to reconfirm the encryption password.
|
fixed Wallet Security 2.0.68
|

- If I chose App generated password at the time of wallet creation and I Go to confirm that password on Level 1.I put incorrect password it shows me correct password after entering the passcode and when I enter that password instaed of pasting it,it shows me the error with 'Try again' button.
- It is giving me the same error even if I copy paste the same app generated password to confirm the health check pwd.So unable to reconfirm the encryption password.

|
True
|
App generated password:Unable to reconfirm the encryption password. - 
- If I chose App generated password at the time of wallet creation and I Go to confirm that password on Level 1.I put incorrect password it shows me correct password after entering the passcode and when I enter that password instaed of pasting it,it shows me the error with 'Try again' button.
- It is giving me the same error even if I copy paste the same app generated password to confirm the health check pwd.So unable to reconfirm the encryption password.

|
non_process
|
app generated password unable to reconfirm the encryption password if i chose app generated password at the time of wallet creation and i go to confirm that password on level i put incorrect password it shows me correct password after entering the passcode and when i enter that password instaed of pasting it it shows me the error with try again button it is giving me the same error even if i copy paste the same app generated password to confirm the health check pwd so unable to reconfirm the encryption password
| 0
|
459,731
| 13,198,020,177
|
IssuesEvent
|
2020-08-14 01:02:50
|
COMP3350-Group5/meal-buddy
|
https://api.github.com/repos/COMP3350-Group5/meal-buddy
|
closed
|
Acceptance Test View Stats and Analytics
|
High Priority Hours: 3
|
As a user, I would like to be able to see analytics for the food I've documented. (total calories, macronutrients, etc...)
|
1.0
|
Acceptance Test View Stats and Analytics - As a user, I would like to be able to see analytics for the food I've documented. (total calories, macronutrients, etc...)
|
non_process
|
acceptance test view stats and analytics as a user i would like to be able to see analytics for the food i ve documented total calories macronutrients etc
| 0
|
248,053
| 7,926,712,573
|
IssuesEvent
|
2018-07-06 03:58:46
|
ilmtest/search-engine
|
https://api.github.com/repos/ilmtest/search-engine
|
closed
|
If no page is selected and "Source" button is hit in Library then open book's profile page
|
feature fixed priority/low
|
Should be useful.
|
1.0
|
If no page is selected and "Source" button is hit in Library then open book's profile page - Should be useful.
|
non_process
|
if no page is selected and source button is hit in library then open book s profile page should be useful
| 0
|
18,904
| 24,843,572,340
|
IssuesEvent
|
2022-10-26 14:22:12
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Azure Automation Powershell 7 Runbook Fails Using Example Code for Webhooks
|
automation/svc triaged assigned-to-author doc-bug process-automation/subsvc Pri2
|
The code for the runbook which is listed in the documentation doesn't work properly on a Powershell 7 runbook. The webhook data is sent to the runbook in a format which cannot be converted to Json. The error you'll see is ```Conversion from JSON failed with error: Unexpected character encountered while parsing value```. This document should be updated to reflect the proper way to handle the Json however I'm unsure what the proper method is or if this is a product defect in Azure Automation.
It would appear the key/value pairs coming from Azure Automation are missing quotation marks (specifically the values) so the Json isn't valid.
```powershell
param
(
[Parameter(Mandatory=$false)]
[object] $WebhookData
)
write-output "start"
write-output ("object type: {0}" -f $WebhookData.gettype())
write-output $WebhookData
#write-warning (Test-Json -Json $WebhookData)
$Payload = $WebhookData | ConvertFrom-Json
write-output "`n`n"
write-output $Payload.WebhookName
write-output $Payload.RequestBody
write-output $Payload.RequestHeader
write-output "end"
if ($Payload.RequestBody) {
$names = (ConvertFrom-Json -InputObject $Payload.RequestBody)
foreach ($x in $names)
{
$name = $x.Name
Write-Output "Hello $name"
}
}
else {
Write-Output "Hello World!"
}
```
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7a6394c7-9bef-b8f8-ffd6-9d9d8e2daa07
* Version Independent ID: 5ffa20a2-436c-2726-dc57-9d3b49f9ca39
* Content: [Start an Azure Automation runbook from a webhook](https://docs.microsoft.com/en-us/azure/automation/automation-webhooks?tabs=portal)
* Content Source: [articles/automation/automation-webhooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-webhooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
1.0
|
Azure Automation Powershell 7 Runbook Fails Using Example Code for Webhooks - The code for the runbook which is listed in the documentation doesn't work properly on a Powershell 7 runbook. The webhook data is sent to the runbook in a format which cannot be converted to Json. The error you'll see is ```Conversion from JSON failed with error: Unexpected character encountered while parsing value```. This document should be updated to reflect the proper way to handle the Json however I'm unsure what the proper method is or if this is a product defect in Azure Automation.
It would appear the key/value pairs coming from Azure Automation are missing quotation marks (specifically the values) so the Json isn't valid.
```powershell
param
(
[Parameter(Mandatory=$false)]
[object] $WebhookData
)
write-output "start"
write-output ("object type: {0}" -f $WebhookData.gettype())
write-output $WebhookData
#write-warning (Test-Json -Json $WebhookData)
$Payload = $WebhookData | ConvertFrom-Json
write-output "`n`n"
write-output $Payload.WebhookName
write-output $Payload.RequestBody
write-output $Payload.RequestHeader
write-output "end"
if ($Payload.RequestBody) {
$names = (ConvertFrom-Json -InputObject $Payload.RequestBody)
foreach ($x in $names)
{
$name = $x.Name
Write-Output "Hello $name"
}
}
else {
Write-Output "Hello World!"
}
```
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7a6394c7-9bef-b8f8-ffd6-9d9d8e2daa07
* Version Independent ID: 5ffa20a2-436c-2726-dc57-9d3b49f9ca39
* Content: [Start an Azure Automation runbook from a webhook](https://docs.microsoft.com/en-us/azure/automation/automation-webhooks?tabs=portal)
* Content Source: [articles/automation/automation-webhooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-webhooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
process
|
azure automation powershell runbook fails using example code for webhooks the code for the runbook which is listed in the documentation doesn t work properly on a powershell runbook the webhook data is sent to the runbook in a format which cannot be converted to json the error you ll see is conversion from json failed with error unexpected character encountered while parsing value this document should be updated to reflect the proper way to handle the json however i m unsure what the proper method is or if this is a product defect in azure automation it would appear the key value pairs coming from azure automation are missing quotation marks specifically the values so the json isn t valid powershell param webhookdata write output start write output object type f webhookdata gettype write output webhookdata write warning test json json webhookdata payload webhookdata convertfrom json write output n n write output payload webhookname write output payload requestbody write output payload requestheader write output end if payload requestbody names convertfrom json inputobject payload requestbody foreach x in names name x name write output hello name else write output hello world document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login snehasudhirg microsoft alias sudhirsneha
| 1
|
21,055
| 28,003,526,024
|
IssuesEvent
|
2023-03-27 14:02:42
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
extra terminal processes hanging around on restart
|
bug upstream freeze-slow-crash-leak macos confirmed terminal-persistence terminal-process terminal-shell-integration
|
Testing #174837
Regardless of `window.experimental.sharedProcessUseUtilityProcess`, for every `N` terminals in a window, and `R` restarts, there are `N * R` processes shown in the activity monitor.
https://user-images.githubusercontent.com/29464607/220416103-e771334a-5df1-44d9-bbf4-630ce5413fa0.mov
|
1.0
|
extra terminal processes hanging around on restart - Testing #174837
Regardless of `window.experimental.sharedProcessUseUtilityProcess`, for every `N` terminals in a window, and `R` restarts, there are `N * R` processes shown in the activity monitor.
https://user-images.githubusercontent.com/29464607/220416103-e771334a-5df1-44d9-bbf4-630ce5413fa0.mov
|
process
|
extra terminal processes hanging around on restart testing regardless of window experimental sharedprocessuseutilityprocess for every n terminals in a window and r restarts there are n r processes shown in the activity monitor
| 1
|
431,552
| 30,239,619,249
|
IssuesEvent
|
2023-07-06 12:43:02
|
Spatial-Systems-Biology-Freiburg/FisInMa
|
https://api.github.com/repos/Spatial-Systems-Biology-Freiburg/FisInMa
|
closed
|
Find a good name for the package
|
documentation help wanted good first issue requirement
|
# Decide this package's name!
This issue tracks naming options for the developed package. To insert a option, simply leave a comment or write an Email to [jonas.pleyer@fdm.uni-freiburg.de](mailto:jonas.pleyer@fdm.uni-freiburg.de)
## Criteria
1) The name should be connected to either to the purpose of the package or the authors (wordplay totally acceptable).
2) Please explain the rationale of your choice in a few words when submitting your idea.
3) Feel free to be very creative.
4) The maximum number of suggestions per participant is limited to 3
## Prize
The team behind __your_package_name_here__ will communicate with the winner to find a fitting prize. Possible options are:
- Money (~20€)
- Help with personal projects
- Opportunity to give a talk in our Group
- We are open to other suggestions
## Decision Process
There is no formal process. The team will decide in an open conversation. This is by nature of the problem a subjective choice.
## Previous suggestions.
- FisInMa _(current working name)_
- FisIM
- Fishy
- ...
|
1.0
|
Find a good name for the package - # Decide this package's name!
This issue tracks naming options for the developed package. To insert a option, simply leave a comment or write an Email to [jonas.pleyer@fdm.uni-freiburg.de](mailto:jonas.pleyer@fdm.uni-freiburg.de)
## Criteria
1) The name should be connected to either to the purpose of the package or the authors (wordplay totally acceptable).
2) Please explain the rationale of your choice in a few words when submitting your idea.
3) Feel free to be very creative.
4) The maximum number of suggestions per participant is limited to 3
## Prize
The team behind __your_package_name_here__ will communicate with the winner to find a fitting prize. Possible options are:
- Money (~20€)
- Help with personal projects
- Opportunity to give a talk in our Group
- We are open to other suggestions
## Decision Process
There is no formal process. The team will decide in an open conversation. This is by nature of the problem a subjective choice.
## Previous suggestions.
- FisInMa _(current working name)_
- FisIM
- Fishy
- ...
|
non_process
|
find a good name for the package decide this package s name this issue tracks naming options for the developed package to insert a option simply leave a comment or write an email to mailto jonas pleyer fdm uni freiburg de criteria the name should be connected to either to the purpose of the package or the authors wordplay totally acceptable please explain the rationale of your choice in a few words when submitting your idea feel free to be very creative the maximum number of suggestions per participant is limited to prize the team behind your package name here will communicate with the winner to find a fitting prize possible options are money € help with personal projects opportunity to give a talk in our group we are open to other suggestions decision process there is no formal process the team will decide in an open conversation this is by nature of the problem a subjective choice previous suggestions fisinma current working name fisim fishy
| 0
|
554,716
| 16,436,652,342
|
IssuesEvent
|
2021-05-20 09:59:06
|
hochschule-darmstadt/openartbrowser
|
https://api.github.com/repos/hochschule-darmstadt/openartbrowser
|
closed
|
Timeline items not accessible
|
bug medium priority
|
**Describe the bug**
Access of the first items in a timeline is sometimes not possible.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://openartbrowser.org/en/movement/Q1277524
2. Try to use the timeline slider to access the first items in the timeline
3. See error
**Expected behavior**
The user should be able to navigate to all items in the timeline using each of the two options (buttons and slider).
**Additional context**
Maybe the minimal value of the slider is set to a wrong value?
|
1.0
|
Timeline items not accessible - **Describe the bug**
Access of the first items in a timeline is sometimes not possible.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://openartbrowser.org/en/movement/Q1277524
2. Try to use the timeline slider to access the first items in the timeline
3. See error
**Expected behavior**
The user should be able to navigate to all items in the timeline using each of the two options (buttons and slider).
**Additional context**
Maybe the minimal value of the slider is set to a wrong value?
|
non_process
|
timeline items not accessible describe the bug access of the first items in a timeline is sometimes not possible to reproduce steps to reproduce the behavior go to try to use the timeline slider to access the first items in the timeline see error expected behavior the user should be able to navigate to all items in the timeline using each of the two options buttons and slider additional context maybe the minimal value of the slider is set to a wrong value
| 0
|
7,169
| 10,312,556,334
|
IssuesEvent
|
2019-08-29 20:09:52
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Regression: UseShellExecute on non-Windows hangs parent process
|
area-System.Diagnostics.Process
|
This is a regression in .NET Core 3.0 from .NET Core 2.1 and only repros on Linux and macOS.
```powershell
invoke-item .
```
With PSCore6.2 based on .NET Core 2.1, this will open the OS File Explorer and you can still type commands in PowerShell.
With PS7 based on .NET Core 3.0 (Preview.6 or Preview.7 using nightly build of PS7), this will open the OS File Explorer, but PowerShell is no longer responding to input
|
1.0
|
Regression: UseShellExecute on non-Windows hangs parent process - This is a regression in .NET Core 3.0 from .NET Core 2.1 and only repros on Linux and macOS.
```powershell
invoke-item .
```
With PSCore6.2 based on .NET Core 2.1, this will open the OS File Explorer and you can still type commands in PowerShell.
With PS7 based on .NET Core 3.0 (Preview.6 or Preview.7 using nightly build of PS7), this will open the OS File Explorer, but PowerShell is no longer responding to input
|
process
|
regression useshellexecute on non windows hangs parent process this is a regression in net core from net core and only repros on linux and macos powershell invoke item with based on net core this will open the os file explorer and you can still type commands in powershell with based on net core preview or preview using nightly build of this will open the os file explorer but powershell is no longer responding to input
| 1
|
18,937
| 24,899,531,239
|
IssuesEvent
|
2022-10-28 19:16:19
|
Open-Data-Product-Initiative/open-data-product-spec-1.1dev
|
https://api.github.com/repos/Open-Data-Product-Initiative/open-data-product-spec-1.1dev
|
opened
|
Data Product content sample
|
enhancement Unprocessed
|
Add element in which a sample of the data can be added. Customers often want to see the data and sample is often enough. Looking at the sample is different compared to looking at Data Product content Schema. Both can however be used in getting more familiar with the data product content. The latter is also suitable for validating the data stream.
|
1.0
|
Data Product content sample -
Add element in which a sample of the data can be added. Customers often want to see the data and sample is often enough. Looking at the sample is different compared to looking at Data Product content Schema. Both can however be used in getting more familiar with the data product content. The latter is also suitable for validating the data stream.
|
process
|
data product content sample add element in which a sample of the data can be added customers often want to see the data and sample is often enough looking at the sample is different compared to looking at data product content schema both can however be used in getting more familiar with the data product content the latter is also suitable for validating the data stream
| 1
|
3,265
| 6,343,446,393
|
IssuesEvent
|
2017-07-27 17:39:22
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Test: System.ServiceProcess.Tests.ServiceControllerTests/StopAndStart failed with "System.ServiceProcess.TimeoutException"
|
area-System.ServiceProcess test-run-desktop
|
Opened on behalf of @Jiayili1
The test `System.ServiceProcess.Tests.ServiceControllerTests/StopAndStart` has failed.
System.ServiceProcess.TimeoutException : Time out has expired and the operation has not been completed.
Stack Trace:
at System.ServiceProcess.ServiceController.WaitForStatus(ServiceControllerStatus desiredStatus, TimeSpan timeout)
at System.ServiceProcess.Tests.ServiceControllerTests.StopAndStart() in E:\A\_work\383\s\corefx\src\System.ServiceProcess.ServiceController\tests\System.ServiceProcess.ServiceController.Tests\ServiceControllerTests.cs:line 184
Build : Master - 20170727.01 (Full Framework Tests)
Failing configurations:
- Windows.10.Amd64.Core-x64
- Debug
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fdesktop~2Fcli~2F/build/20170727.01/workItem/System.ServiceProcess.ServiceController.Tests/analysis/xunit/System.ServiceProcess.Tests.ServiceControllerTests~2FStopAndStart
|
1.0
|
Test: System.ServiceProcess.Tests.ServiceControllerTests/StopAndStart failed with "System.ServiceProcess.TimeoutException" - Opened on behalf of @Jiayili1
The test `System.ServiceProcess.Tests.ServiceControllerTests/StopAndStart` has failed.
System.ServiceProcess.TimeoutException : Time out has expired and the operation has not been completed.
Stack Trace:
at System.ServiceProcess.ServiceController.WaitForStatus(ServiceControllerStatus desiredStatus, TimeSpan timeout)
at System.ServiceProcess.Tests.ServiceControllerTests.StopAndStart() in E:\A\_work\383\s\corefx\src\System.ServiceProcess.ServiceController\tests\System.ServiceProcess.ServiceController.Tests\ServiceControllerTests.cs:line 184
Build : Master - 20170727.01 (Full Framework Tests)
Failing configurations:
- Windows.10.Amd64.Core-x64
- Debug
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fdesktop~2Fcli~2F/build/20170727.01/workItem/System.ServiceProcess.ServiceController.Tests/analysis/xunit/System.ServiceProcess.Tests.ServiceControllerTests~2FStopAndStart
|
process
|
test system serviceprocess tests servicecontrollertests stopandstart failed with system serviceprocess timeoutexception opened on behalf of the test system serviceprocess tests servicecontrollertests stopandstart has failed system serviceprocess timeoutexception time out has expired and the operation has not been completed stack trace at system serviceprocess servicecontroller waitforstatus servicecontrollerstatus desiredstatus timespan timeout at system serviceprocess tests servicecontrollertests stopandstart in e a work s corefx src system serviceprocess servicecontroller tests system serviceprocess servicecontroller tests servicecontrollertests cs line build master full framework tests failing configurations windows core debug detail
| 1
|
6,327
| 9,359,704,208
|
IssuesEvent
|
2019-04-02 07:39:04
|
aiidateam/aiida_core
|
https://api.github.com/repos/aiidateam/aiida_core
|
closed
|
Process output namespaces are broken
|
priority/critical-blocking topic/engine topic/processes type/bug
|
Just like the `inputs` of a `Process`, the `outputs` are a `PortNamespace` and so in principle they should work exactly like the inputs of a process. However, the way outputs are currently attached to the a process and are validated when it terminates is broken. For example, the `self.out` method does not accept a dictionary that should be mapped onto the output port namespace of the spec. It expects a single `Data` instance as the value. One could try to access a nested output port by using the `.` character in the output key that is passed to `self.out` but also that will fail, e.g.:
```
self.out('integer.namespace.two', Int(2))
======================================================================
ERROR: test_output_validation_error (aiida.backends.tests.engine.test_process.TestProcess)
Test that a process is marked as failed if its output namespace validation fails.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/sphuber/code/aiida/env/dev/aiida-core/aiida/backends/tests/engine/test_process.py", line 237, in test_output_validation_error
results, node = run_get_node(TestProcess, add_outputs=orm.Bool(True))
File "/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/launch.py", line 49, in run_get_node
return runner.run_get_node(process, *args, **inputs)
File "/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/runners.py", line 218, in run_get_node
result, node = self._run(process, *args, **inputs)
File "/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/runners.py", line 194, in _run
process.execute()
File "/home/sphuber/code/aiida/env/dev/plumpy/plumpy/processes.py", line 88, in func_wrapper
return func(self, *args, **kwargs)
File "/home/sphuber/code/aiida/env/dev/plumpy/plumpy/processes.py", line 1051, in execute
return self.future().result()
File "/home/sphuber/code/aiida/env/dev/plumpy/plumpy/futures.py", line 34, in result
return super(Future, self).result(timeout)
File "/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/concurrent.py", line 238, in result
raise_exc_info(self._exc_info)
File "<string>", line 3, in raise_exc_info
TypeError: Error validating output 'uuid: 4d621fe2-3404-4c8f-8525-8fd15a20b40f (unstored) value: 2' for port 'n.a.m.e.s.p.a.c.e': Unexpected ports {'two': <Int: uuid: 4d621fe2-3404-4c8f-8525-8fd15a20b40f (unstored) value: 2>}, for a non dynamic namespace
```
|
1.0
|
Process output namespaces are broken - Just like the `inputs` of a `Process`, the `outputs` are a `PortNamespace` and so in principle they should work exactly like the inputs of a process. However, the way outputs are currently attached to the a process and are validated when it terminates is broken. For example, the `self.out` method does not accept a dictionary that should be mapped onto the output port namespace of the spec. It expects a single `Data` instance as the value. One could try to access a nested output port by using the `.` character in the output key that is passed to `self.out` but also that will fail, e.g.:
```
self.out('integer.namespace.two', Int(2))
======================================================================
ERROR: test_output_validation_error (aiida.backends.tests.engine.test_process.TestProcess)
Test that a process is marked as failed if its output namespace validation fails.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/sphuber/code/aiida/env/dev/aiida-core/aiida/backends/tests/engine/test_process.py", line 237, in test_output_validation_error
results, node = run_get_node(TestProcess, add_outputs=orm.Bool(True))
File "/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/launch.py", line 49, in run_get_node
return runner.run_get_node(process, *args, **inputs)
File "/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/runners.py", line 218, in run_get_node
result, node = self._run(process, *args, **inputs)
File "/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/runners.py", line 194, in _run
process.execute()
File "/home/sphuber/code/aiida/env/dev/plumpy/plumpy/processes.py", line 88, in func_wrapper
return func(self, *args, **kwargs)
File "/home/sphuber/code/aiida/env/dev/plumpy/plumpy/processes.py", line 1051, in execute
return self.future().result()
File "/home/sphuber/code/aiida/env/dev/plumpy/plumpy/futures.py", line 34, in result
return super(Future, self).result(timeout)
File "/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/concurrent.py", line 238, in result
raise_exc_info(self._exc_info)
File "<string>", line 3, in raise_exc_info
TypeError: Error validating output 'uuid: 4d621fe2-3404-4c8f-8525-8fd15a20b40f (unstored) value: 2' for port 'n.a.m.e.s.p.a.c.e': Unexpected ports {'two': <Int: uuid: 4d621fe2-3404-4c8f-8525-8fd15a20b40f (unstored) value: 2>}, for a non dynamic namespace
```
|
process
|
process output namespaces are broken just like the inputs of a process the outputs are a portnamespace and so in principle they should work exactly like the inputs of a process however the way outputs are currently attached to the a process and are validated when it terminates is broken for example the self out method does not accept a dictionary that should be mapped onto the output port namespace of the spec it expects a single data instance as the value one could try to access a nested output port by using the character in the output key that is passed to self out but also that will fail e g self out integer namespace two int error test output validation error aiida backends tests engine test process testprocess test that a process is marked as failed if its output namespace validation fails traceback most recent call last file home sphuber code aiida env dev aiida core aiida backends tests engine test process py line in test output validation error results node run get node testprocess add outputs orm bool true file home sphuber code aiida env dev aiida core aiida engine launch py line in run get node return runner run get node process args inputs file home sphuber code aiida env dev aiida core aiida engine runners py line in run get node result node self run process args inputs file home sphuber code aiida env dev aiida core aiida engine runners py line in run process execute file home sphuber code aiida env dev plumpy plumpy processes py line in func wrapper return func self args kwargs file home sphuber code aiida env dev plumpy plumpy processes py line in execute return self future result file home sphuber code aiida env dev plumpy plumpy futures py line in result return super future self result timeout file home sphuber virtualenvs aiida dev local lib site packages tornado concurrent py line in result raise exc info self exc info file line in raise exc info typeerror error validating output uuid unstored value for port n a m e s p a c e unexpected ports two for a non dynamic namespace
| 1
|
7,717
| 10,821,805,932
|
IssuesEvent
|
2019-11-08 19:35:37
|
microsoft/ptvsd
|
https://api.github.com/repos/microsoft/ptvsd
|
opened
|
Ensure that the `ptvsd_attach` event sets the right name for child processes
|
Bug area:Multiprocessing
|
Currently we use the same name as the parent config for the child process config.
The name for child process should probably be `Process #<id>` or something similar.
|
1.0
|
Ensure that the `ptvsd_attach` event sets the right name for child processes - Currently we use the same name as the parent config for the child process config.
The name for child process should probably be `Process #<id>` or something similar.
|
process
|
ensure that the ptvsd attach event sets the right name for child processes currently we use the same name as the parent config for the child process config the name for child process should probably be process or something similar
| 1
|
367,753
| 10,861,512,231
|
IssuesEvent
|
2019-11-14 11:14:39
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
Cannot figure out the SSL issue from the error given
|
Area/StandardLibs Component/HTTP Points/1 Priority/High Type/Bug
|
**Description:**
Source:
https://github.com/ballerina-platform/ballerina-lang/tree/v1.0.2/tests/jballerina-integration-test/src/test/resources/auth/src/authservices
Run command:
```
ballerina build authservices
```
```
java -jar target/bin/authservices.jar --keystore=testtruststore.p12 --truststore=testtruststore.p12 --b7a.config.file=ballerina.conf
```
Error:
```
error: {ballerina/http}GenericListenerError message=Failed to initialize the SSLContext
at ballerina.http.Listener:initEndpoint(service_endpoint.bal:89)
ballerina.http.Listener:init(service_endpoint.bal:83)
ballerina.http.Listener:__init(service_endpoint.bal:60)
```
Cannot figure out the issue with the program from the above error.
**Affected Versions:**
ballerina v1.0.2
|
1.0
|
Cannot figure out the SSL issue from the error given - **Description:**
Source:
https://github.com/ballerina-platform/ballerina-lang/tree/v1.0.2/tests/jballerina-integration-test/src/test/resources/auth/src/authservices
Run command:
```
ballerina build authservices
```
```
java -jar target/bin/authservices.jar --keystore=testtruststore.p12 --truststore=testtruststore.p12 --b7a.config.file=ballerina.conf
```
Error:
```
error: {ballerina/http}GenericListenerError message=Failed to initialize the SSLContext
at ballerina.http.Listener:initEndpoint(service_endpoint.bal:89)
ballerina.http.Listener:init(service_endpoint.bal:83)
ballerina.http.Listener:__init(service_endpoint.bal:60)
```
Cannot figure out the issue with the program from the above error.
**Affected Versions:**
ballerina v1.0.2
|
non_process
|
cannot figure out the ssl issue from the error given description source run command ballerina build authservices java jar target bin authservices jar keystore testtruststore truststore testtruststore config file ballerina conf error error ballerina http genericlistenererror message failed to initialize the sslcontext at ballerina http listener initendpoint service endpoint bal ballerina http listener init service endpoint bal ballerina http listener init service endpoint bal cannot figure out the issue with the program from the above error affected versions ballerina
| 0
|
17,681
| 12,504,213,784
|
IssuesEvent
|
2020-06-02 08:38:39
|
google/web-stories-wp
|
https://api.github.com/repos/google/web-stories-wp
|
opened
|
Use eslint-plugin-jasmine for Karma tests
|
Type: Infrastructure
|
See https://github.com/tlvince/eslint-plugin-jasmine
Most of the recommended rulesets would be useful for us I think. `no-focused-tests` was called out in particular though, as this feature should only be used during development and not checked in.
Note that there are also these custom aliases:
https://github.com/google/web-stories-wp/blob/master/assets/src/edit-story/karma/_init.js#L20-L24
Which means we should consider enabling `jest/no-focused-tests` too.
|
1.0
|
Use eslint-plugin-jasmine for Karma tests - See https://github.com/tlvince/eslint-plugin-jasmine
Most of the recommended rulesets would be useful for us I think. `no-focused-tests` was called out in particular though, as this feature should only be used during development and not checked in.
Note that there are also these custom aliases:
https://github.com/google/web-stories-wp/blob/master/assets/src/edit-story/karma/_init.js#L20-L24
Which means we should consider enabling `jest/no-focused-tests` too.
|
non_process
|
use eslint plugin jasmine for karma tests see most of the recommended rulesets would be useful for us i think no focused tests was called out in particular though as this feature should only be used during development and not checked in note that there are also these custom aliases which means we should consider enabling jest no focused tests too
| 0
|
21,406
| 29,351,205,116
|
IssuesEvent
|
2023-05-27 00:34:37
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
[Hibrido / Belo Horizonte, Minas Gerais, Brazil] Test Analyst (Híbrido - Belo Horizonte) na Coodesh
|
SALVADOR TESTE REQUISITOS CYPRESS PROCESSOS INOVAÇÃO GITHUB CI UMA QUALIDADE TESTES DE SOFTWARE METODOLOGIAS ÁGEIS HIBRIDO AUTOMAÇÃO DE TESTES TESTES MANUAIS ALOCADO Stale
|
## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/test-analyst-hibrido-belo-horizonte-163516904?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Prime Results </strong>está buscando <strong>Test Analyst</strong> para compor seu time!</p>
<p>Acreditamos no poder de transformação social realizado pelas empresas Acreditamos no poder transformador das pessoas, aliado à gestão e tecnologia. Compartilhamos nosso conhecimento para solucionar problemas complexos e gerar valor para nossos clientes.</p>
<p><strong>Responsabilidades:</strong></p>
<ul>
<li>Desenvolver e executar testes exploratórios e automatizados pra nos ajudar a garantir a qualidade dos nossos produtos;</li>
<li>Monitorar, priorizar e planejar atividades de teste de qualidade de softwares e hardwares, além de ser referência para o time de desenvolvimento ajudando a melhorar os seus processos e entregas;</li>
<li>Execução de testes manuais;</li>
<li>Análise das causas raiz das falhas identificadas;</li>
<li>Registro de evidências;</li>
<li>Reports do sistema e gestão de bugs;</li>
<li>Criação e acompanhamento de documentações;</li>
<li>Acompanhamento e validação de Deploys;</li>
<li>Promoção de melhorias contínuas no processo de análise e testes de software.</li>
</ul>
<p></p>
## Prime Results :
<p>O Best Seller Simon Sinek, diz que a maioria das empresas sabem o que fazem, porém não sabem por que o fazem. Não é o nosso caso. A Prime Results é uma empresa especializada em gestão organizacional que usa seu potencial de transformação em empresas que geram impacto positivo na sociedade. Nossos clientes hoje, fazem a diferença na vida de mais de 250.000 brasileiros, nas áreas de proteção patrimonial, saúde e assistência 24 horas. </p>
<p>Nosso objetivo central é criar um ambiente criativo, dinâmico e engajado, sempre aliados a métodos, processos inteligentes e muita inovação.</p><a href='https://coodesh.com/empresas/prime-results'>Veja mais no site</a>
## Habilidades:
- Cypress
- API
- Automação de Testes
## Local:
Belo Horizonte, Minas Gerais, Brazil
## Requisitos:
- Cursando ensino superior ou concluído em Sistemas de Informação, Ciência da Computação e afins;
- Conhecimento básico sobre metodologias ágeis;
- Conhecimento das técnicas e da execução de testes manuais e funcionais;
- Conhecimento em Critérios, Estratégias, Procedimentos e Requisitos de testes;
- Saber escrever bugs reports;
- Experiência em Cypress.
## Diferenciais:
- Conhecimento em testes API (Post, GET, Delete e outros);
- Conhecimento em ferramentas de automação (Cypress).
## Benefícios:
- Vale Refeição - 25,00 o dia trabalhado (Cartão Flash);
- Vale Transporte ou Auxilio Combustível ;
- Assistência Médica após o período de experiência;
- Acesso ao Clube Certo - Clube de Benefícios ;
- Gympass
- Parceria com instituições de ensino (Cursos de graduação e pós graduação);.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Test Analyst (Híbrido - Belo Horizonte) na Prime Results ](https://coodesh.com/vagas/test-analyst-hibrido-belo-horizonte-163516904?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Alocado
#### Regime
CLT
#### Categoria
Testes/Q.A
|
1.0
|
[Hibrido / Belo Horizonte, Minas Gerais, Brazil] Test Analyst (Híbrido - Belo Horizonte) na Coodesh - ## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/test-analyst-hibrido-belo-horizonte-163516904?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Prime Results </strong>está buscando <strong>Test Analyst</strong> para compor seu time!</p>
<p>Acreditamos no poder de transformação social realizado pelas empresas Acreditamos no poder transformador das pessoas, aliado à gestão e tecnologia. Compartilhamos nosso conhecimento para solucionar problemas complexos e gerar valor para nossos clientes.</p>
<p><strong>Responsabilidades:</strong></p>
<ul>
<li>Desenvolver e executar testes exploratórios e automatizados pra nos ajudar a garantir a qualidade dos nossos produtos;</li>
<li>Monitorar, priorizar e planejar atividades de teste de qualidade de softwares e hardwares, além de ser referência para o time de desenvolvimento ajudando a melhorar os seus processos e entregas;</li>
<li>Execução de testes manuais;</li>
<li>Análise das causas raiz das falhas identificadas;</li>
<li>Registro de evidências;</li>
<li>Reports do sistema e gestão de bugs;</li>
<li>Criação e acompanhamento de documentações;</li>
<li>Acompanhamento e validação de Deploys;</li>
<li>Promoção de melhorias contínuas no processo de análise e testes de software.</li>
</ul>
<p></p>
## Prime Results :
<p>O Best Seller Simon Sinek, diz que a maioria das empresas sabem o que fazem, porém não sabem por que o fazem. Não é o nosso caso. A Prime Results é uma empresa especializada em gestão organizacional que usa seu potencial de transformação em empresas que geram impacto positivo na sociedade. Nossos clientes hoje, fazem a diferença na vida de mais de 250.000 brasileiros, nas áreas de proteção patrimonial, saúde e assistência 24 horas. </p>
<p>Nosso objetivo central é criar um ambiente criativo, dinâmico e engajado, sempre aliados a métodos, processos inteligentes e muita inovação.</p><a href='https://coodesh.com/empresas/prime-results'>Veja mais no site</a>
## Habilidades:
- Cypress
- API
- Automação de Testes
## Local:
Belo Horizonte, Minas Gerais, Brazil
## Requisitos:
- Cursando ensino superior ou concluído em Sistemas de Informação, Ciência da Computação e afins;
- Conhecimento básico sobre metodologias ágeis;
- Conhecimento das técnicas e da execução de testes manuais e funcionais;
- Conhecimento em Critérios, Estratégias, Procedimentos e Requisitos de testes;
- Saber escrever bugs reports;
- Experiência em Cypress.
## Diferenciais:
- Conhecimento em testes API (Post, GET, Delete e outros);
- Conhecimento em ferramentas de automação (Cypress).
## Benefícios:
- Vale Refeição - 25,00 o dia trabalhado (Cartão Flash);
- Vale Transporte ou Auxilio Combustível ;
- Assistência Médica após o período de experiência;
- Acesso ao Clube Certo - Clube de Benefícios ;
- Gympass
- Parceria com instituições de ensino (Cursos de graduação e pós graduação);.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Test Analyst (Híbrido - Belo Horizonte) na Prime Results ](https://coodesh.com/vagas/test-analyst-hibrido-belo-horizonte-163516904?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Alocado
#### Regime
CLT
#### Categoria
Testes/Q.A
|
process
|
test analyst híbrido belo horizonte na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a prime results está buscando test analyst para compor seu time acreditamos no poder de transformação social realizado pelas empresas acreditamos no poder transformador das pessoas aliado à gestão e tecnologia compartilhamos nosso conhecimento para solucionar problemas complexos e gerar valor para nossos clientes responsabilidades desenvolver e executar testes exploratórios e automatizados pra nos ajudar a garantir a qualidade dos nossos produtos monitorar priorizar e planejar atividades de teste de qualidade de softwares e hardwares além de ser referência para o time de desenvolvimento ajudando a melhorar os seus processos e entregas execução de testes manuais análise das causas raiz das falhas identificadas registro de evidências reports do sistema e gestão de bugs criação e acompanhamento de documentações acompanhamento e validação de deploys promoção de melhorias contínuas no processo de análise e testes de software prime results o best seller simon sinek diz que a maioria das empresas sabem o que fazem porém não sabem por que o fazem não é o nosso caso a prime results é uma empresa especializada em gestão organizacional que usa seu potencial de transformação em empresas que geram impacto positivo na sociedade nossos clientes hoje fazem a diferença na vida de mais de brasileiros nas áreas de proteção patrimonial saúde e assistência horas nbsp nosso objetivo central é criar um ambiente criativo dinâmico e engajado sempre aliados a métodos processos inteligentes e muita inovação habilidades cypress api automação de testes local belo horizonte minas gerais brazil requisitos cursando ensino superior ou concluído em sistemas de informação ciência da computação e afins conhecimento básico sobre metodologias ágeis conhecimento das técnicas e da execução de testes manuais e funcionais conhecimento em critérios estratégias procedimentos e requisitos de testes saber escrever bugs reports experiência em cypress diferenciais conhecimento em testes api post get delete e outros conhecimento em ferramentas de automação cypress benefícios vale refeição o dia trabalhado cartão flash vale transporte ou auxilio combustível assistência médica após o período de experiência acesso ao clube certo clube de benefícios gympass parceria com instituições de ensino cursos de graduação e pós graduação como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação alocado regime clt categoria testes q a
| 1
|
20,181
| 26,738,626,430
|
IssuesEvent
|
2023-01-30 11:17:57
|
OpenEnergyPlatform/open-MaStR
|
https://api.github.com/repos/OpenEnergyPlatform/open-MaStR
|
closed
|
Extend post-processing to all technologies available in raw data
|
:scissors: post processing
|
Missing technology (electricity generation)
- GSGK
- Nuclear
- Storage
|
1.0
|
Extend post-processing to all technologies available in raw data - Missing technology (electricity generation)
- GSGK
- Nuclear
- Storage
|
process
|
extend post processing to all technologies available in raw data missing technology electricity generation gsgk nuclear storage
| 1
|
592,803
| 17,931,130,474
|
IssuesEvent
|
2021-09-10 09:22:59
|
rafinkanisa/ngm-reportDesk
|
https://api.github.com/repos/rafinkanisa/ngm-reportDesk
|
closed
|
RH ET Update Organization Name
|
priority
|
There's a request from partner to update their organization name to the new one.
Check if this organization has an on going project then update their organization name.
previously :
org_name : Ogaden welfare and Development Association
the new one
org_name : Organization for Welfare and Development in Action
org_abbr : OWDA
Kindly share the script here
|
1.0
|
RH ET Update Organization Name - There's a request from partner to update their organization name to the new one.
Check if this organization has an on going project then update their organization name.
previously :
org_name : Ogaden welfare and Development Association
the new one
org_name : Organization for Welfare and Development in Action
org_abbr : OWDA
Kindly share the script here
|
non_process
|
rh et update organization name there s a request from partner to update their organization name to the new one check if this organization has an on going project then update their organization name previously org name ogaden welfare and development association the new one org name organization for welfare and development in action org abbr owda kindly share the script here
| 0
|
19,291
| 25,466,350,934
|
IssuesEvent
|
2022-11-25 05:03:55
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[IDP] [PM] Getting an error message , when updated the admins in the application
|
Bug Blocker P0 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
**Steps:**
1. Login to PM
2. Click on 'Admins' tab
3. Edit admin in the list
4. Enter the phone number (Eg: +919999999999999)
5. Click on 'Save' button and Verify
**AR:** Getting an error message as attached in the below screenshot
**ER:** Admins details should be updated only when valid details are filled

|
3.0
|
[IDP] [PM] Getting an error message , when updated the admins in the application - **Steps:**
1. Login to PM
2. Click on 'Admins' tab
3. Edit admin in the list
4. Enter the phone number (Eg: +919999999999999)
5. Click on 'Save' button and Verify
**AR:** Getting an error message as attached in the below screenshot
**ER:** Admins details should be updated only when valid details are filled

|
process
|
getting an error message when updated the admins in the application steps login to pm click on admins tab edit admin in the list enter the phone number eg click on save button and verify ar getting an error message as attached in the below screenshot er admins details should be updated only when valid details are filled
| 1
|
317,533
| 27,243,886,192
|
IssuesEvent
|
2023-02-21 23:18:57
|
lowRISC/opentitan
|
https://api.github.com/repos/lowRISC/opentitan
|
closed
|
[test-triage] OTBN block-level tests all failed
|
IP:otbn Component:TestTriage
|
### Hierarchy of regression failure
Block level
### Failure Description
```
Error when resetting ISS: Failed to run command 'reset': EOF from ISS.
Error when resetting ISS: Failed to run command 'reset': EOF from ISS.
Error when resetting ISS: Failed to run command 'reset': EOF from ISS.
terminate called after throwing an instance of 'std::runtime_error'
what(): Failed to run command 'initial_secure_wipe': EOF from ISS.
xmsim: *F,SIGUSR: Unix Signal SIGABRT raised from user application code. Stack trace information is captured in file sigusrdump.out.
Stack trace information is captured in file /workspace/0.otbn_smoke/latest/bpad_1427.err
TOOL: xrun(64) 21.09-s006: Exiting on Feb 01, 2023 at 01:23:33 PST (total: 00:00:02)
WARNING: Destroying OtbnTraceChecker object with an unfinished operation.
make: *** [/workspace/mnt/repo_top/hw/dv/tools/dvsim/sim.mk:141: simulate] Error 2
```
### Steps to Reproduce
- GitHub Revision: 618f6081aabc81bea33d2b957954573eab7e4816
- dvsim invocation command to reproduce the failure, inclusive of build and run seeds:
`./util/dvsim/dvsim.py hw/ip/otbn/dv/uvm/otbn_sim_cfg.hjson -t xcelium --build-seed 3345409474 -i otbn_smoke --fixed-seed 689795600`
- [Nightly regression started on Feb 1 2023 at 08:06 UTC failed](https://reports.opentitan.org/hw/ip/otbn/dv/uvm/2023.02.02_05.34.04/report.html), Git revision 618f6081aabc81bea33d2b957954573eab7e4816
- [Nightly regression started on Jan 31 2023 at 08:04 UTC failed](https://reports.opentitan.org/hw/ip/otbn/dv/uvm/2023.02.01_04.16.56/report.html), Git revision 2363bcd26bf972b3fe83a863cff854031ee4aa94
- [Nightly regression started on Jan 30 2023 at 08:10 UTC passed](https://reports.opentitan.org/hw/ip/otbn/dv/uvm/2023.01.31_06.29.23/report.html), Git revision 91b09f2d4bfa63fbb344f250a7313d922953babc
- Comparison of last passing to first failing revision: https://github.com/lowRISC/opentitan/compare/91b09f2...2363bcd
- Cannot reproduce locally on 618f6081aabc81bea33d2b957954573eab7e4816 with Xcelium 21.09-s006, which according to the logs is the same version used to run the nightly regressions
|
1.0
|
[test-triage] OTBN block-level tests all failed - ### Hierarchy of regression failure
Block level
### Failure Description
```
Error when resetting ISS: Failed to run command 'reset': EOF from ISS.
Error when resetting ISS: Failed to run command 'reset': EOF from ISS.
Error when resetting ISS: Failed to run command 'reset': EOF from ISS.
terminate called after throwing an instance of 'std::runtime_error'
what(): Failed to run command 'initial_secure_wipe': EOF from ISS.
xmsim: *F,SIGUSR: Unix Signal SIGABRT raised from user application code. Stack trace information is captured in file sigusrdump.out.
Stack trace information is captured in file /workspace/0.otbn_smoke/latest/bpad_1427.err
TOOL: xrun(64) 21.09-s006: Exiting on Feb 01, 2023 at 01:23:33 PST (total: 00:00:02)
WARNING: Destroying OtbnTraceChecker object with an unfinished operation.
make: *** [/workspace/mnt/repo_top/hw/dv/tools/dvsim/sim.mk:141: simulate] Error 2
```
### Steps to Reproduce
- GitHub Revision: 618f6081aabc81bea33d2b957954573eab7e4816
- dvsim invocation command to reproduce the failure, inclusive of build and run seeds:
`./util/dvsim/dvsim.py hw/ip/otbn/dv/uvm/otbn_sim_cfg.hjson -t xcelium --build-seed 3345409474 -i otbn_smoke --fixed-seed 689795600`
- [Nightly regression started on Feb 1 2023 at 08:06 UTC failed](https://reports.opentitan.org/hw/ip/otbn/dv/uvm/2023.02.02_05.34.04/report.html), Git revision 618f6081aabc81bea33d2b957954573eab7e4816
- [Nightly regression started on Jan 31 2023 at 08:04 UTC failed](https://reports.opentitan.org/hw/ip/otbn/dv/uvm/2023.02.01_04.16.56/report.html), Git revision 2363bcd26bf972b3fe83a863cff854031ee4aa94
- [Nightly regression started on Jan 30 2023 at 08:10 UTC passed](https://reports.opentitan.org/hw/ip/otbn/dv/uvm/2023.01.31_06.29.23/report.html), Git revision 91b09f2d4bfa63fbb344f250a7313d922953babc
- Comparison of last passing to first failing revision: https://github.com/lowRISC/opentitan/compare/91b09f2...2363bcd
- Cannot reproduce locally on 618f6081aabc81bea33d2b957954573eab7e4816 with Xcelium 21.09-s006, which according to the logs is the same version used to run the nightly regressions
|
non_process
|
otbn block level tests all failed hierarchy of regression failure block level failure description error when resetting iss failed to run command reset eof from iss error when resetting iss failed to run command reset eof from iss error when resetting iss failed to run command reset eof from iss terminate called after throwing an instance of std runtime error what failed to run command initial secure wipe eof from iss xmsim f sigusr unix signal sigabrt raised from user application code stack trace information is captured in file sigusrdump out stack trace information is captured in file workspace otbn smoke latest bpad err tool xrun exiting on feb at pst total warning destroying otbntracechecker object with an unfinished operation make error steps to reproduce github revision dvsim invocation command to reproduce the failure inclusive of build and run seeds util dvsim dvsim py hw ip otbn dv uvm otbn sim cfg hjson t xcelium build seed i otbn smoke fixed seed git revision git revision git revision comparison of last passing to first failing revision cannot reproduce locally on with xcelium which according to the logs is the same version used to run the nightly regressions
| 0
|
117,860
| 11,957,930,060
|
IssuesEvent
|
2020-04-04 16:05:23
|
sykp241095/gantt-viewer-for-github-project
|
https://api.github.com/repos/sykp241095/gantt-viewer-for-github-project
|
closed
|
Note: ordering of all Sections
|
type: documentation
|
The order is as follows:
- Search Section:
- Panel List Section: when there are fewer than 5 panels, the whole Panel List grows in height to Panel height x panel count; when there are more than 5, it has a fixed height and scrolls. Drop the current history-based ordering and simply show the panels in order.
- Helper Section, containing:
   - Guess, disabled by default; clickable only once a repo has been recognized
   - Set Time, disabled by default; clickable only on a recognized issue detail page
- Option Section, containing (in order):
   - Manage Panels
   - User Guide
   - Reset Access Token
- User Info Section
<!-- GanttStart: 2020-04-07 -->
<!-- GanttDue: 2020-04-23 -->
|
1.0
|
Note: ordering of all Sections - The order is as follows:
- Search Section:
- Panel List Section: when there are fewer than 5 panels, the whole Panel List grows in height to Panel height x panel count; when there are more than 5, it has a fixed height and scrolls. Drop the current history-based ordering and simply show the panels in order.
- Helper Section, containing:
   - Guess, disabled by default; clickable only once a repo has been recognized
   - Set Time, disabled by default; clickable only on a recognized issue detail page
- Option Section, containing (in order):
   - Manage Panels
   - User Guide
   - Reset Access Token
- User Info Section
<!-- GanttStart: 2020-04-07 -->
<!-- GanttDue: 2020-04-23 -->
|
non_process
|
note ordering of all sections the order is as follows search section panel list section when there are fewer than panels the whole panel list grows in height to panel height x panel count when there are more than it has a fixed height and scrolls drop the current history based ordering and simply show the panels in order helper section containing guess disabled by default clickable only once a repo has been recognized set time disabled by default clickable only on a recognized issue detail page option section containing in order manage panels user guide reset access token user info section
| 0
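The panel-list sizing rule in the record above (the list grows with the panel count but caps at 5 visible rows, beyond which it becomes a fixed-height scrollable area) can be sketched as follows. The per-panel pixel height is a hypothetical value, not taken from the actual extension:

```python
# Sketch of the panel-list sizing rule: height = panel height x panel
# count, capped at 5 visible panels (more than 5 scrolls within a
# fixed-height list). PANEL_HEIGHT_PX is a hypothetical constant.
PANEL_HEIGHT_PX = 40  # hypothetical per-panel row height, in pixels
MAX_VISIBLE_PANELS = 5

def panel_list_height(panel_count: int) -> int:
    """Return the rendered height of the panel list in pixels."""
    visible = min(panel_count, MAX_VISIBLE_PANELS)
    return visible * PANEL_HEIGHT_PX
```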
|
215,087
| 24,126,428,399
|
IssuesEvent
|
2022-09-21 01:09:28
|
smb-h/Estates-price-prediction
|
https://api.github.com/repos/smb-h/Estates-price-prediction
|
opened
|
CVE-2022-35997 (Medium) detected in tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl
|
security vulnerability
|
## CVE-2022-35997 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/73/a3/142f73d0e076f5582fd8da29c68af0413bf529933eed09f86a8857fab0d6/tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/73/a3/142f73d0e076f5582fd8da29c68af0413bf529933eed09f86a8857fab0d6/tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/smb-h/Estates-price-prediction/commit/43d8dec55efbdc71655c52119862fee409624fda">43d8dec55efbdc71655c52119862fee409624fda</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. If `tf.sparse.cross` receives an input `separator` that is not a scalar, it gives a `CHECK` fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 83dcb4dbfa094e33db084e97c4d0531a559e0ebf. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-35997>CVE-2022-35997</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-p7hr-f446-x6qf">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-p7hr-f446-x6qf</a></p>
<p>Release Date: 2022-09-16</p>
<p>Fix Resolution: tensorflow - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-cpu - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-gpu - 2.7.2,2.8.1,2.9.1,2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-35997 (Medium) detected in tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl - ## CVE-2022-35997 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/73/a3/142f73d0e076f5582fd8da29c68af0413bf529933eed09f86a8857fab0d6/tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/73/a3/142f73d0e076f5582fd8da29c68af0413bf529933eed09f86a8857fab0d6/tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/smb-h/Estates-price-prediction/commit/43d8dec55efbdc71655c52119862fee409624fda">43d8dec55efbdc71655c52119862fee409624fda</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. If `tf.sparse.cross` receives an input `separator` that is not a scalar, it gives a `CHECK` fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 83dcb4dbfa094e33db084e97c4d0531a559e0ebf. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-35997>CVE-2022-35997</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-p7hr-f446-x6qf">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-p7hr-f446-x6qf</a></p>
<p>Release Date: 2022-09-16</p>
<p>Fix Resolution: tensorflow - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-cpu - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-gpu - 2.7.2,2.8.1,2.9.1,2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in tensorflow whl cve medium severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file requirements txt path to vulnerable library requirements txt dependency hierarchy x tensorflow whl vulnerable library found in head commit a href found in base branch main vulnerability details tensorflow is an open source platform for machine learning if tf sparse cross receives an input separator that is not a scalar it gives a check fail that can be used to trigger a denial of service attack we have patched the issue in github commit the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range there are no known workarounds for this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with mend
| 0
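The suggested fix in the CVE record above amounts to upgrading past the patched releases. A small sketch of checking a pinned version against the fixed ones, using plain version-tuple comparison; the fixed-version list is taken from the advisory text above:

```python
# Minimal check: is a pinned tensorflow version at or past the patch
# release for its minor line? Fixed releases per the advisory:
# 2.7.2, 2.8.1, 2.9.1, and 2.10.0 (earlier lines such as 2.6.x have no
# patched release and must move to a supported line).
FIXED = {(2, 7): (2, 7, 2), (2, 8): (2, 8, 1), (2, 9): (2, 9, 1)}
MIN_SAFE = (2, 10, 0)

def is_patched(version: str) -> bool:
    """True if the given tensorflow version contains the CVE-2022-35997 fix."""
    v = tuple(int(p) for p in version.split("."))
    line = FIXED.get(v[:2])
    if line is not None:
        return v >= line
    return v >= MIN_SAFE
```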
|
10,323
| 13,161,791,969
|
IssuesEvent
|
2020-08-10 20:15:03
|
jyn514/saltwater
|
https://api.github.com/repos/jyn514/saltwater
|
closed
|
Cannot use function macros in #if macros
|
bug preprocessor
|
The following code has no output when run with `-E`:
```c
#define f(a) 1
#if f(a)
success
#endif
```
The issue is that I call `replace` on each token individually, not considering the tokens following. Relevant code: https://github.com/jyn514/rcc/blob/12a71429afd9f7c5d2576f509527c8e91dba81e5/src/lex/cpp.rs#L701
The fix is to pass in the following tokens as well as the current token.
|
1.0
|
Cannot use function macros in #if macros - The following code has no output when run with `-E`:
```c
#define f(a) 1
#if f(a)
success
#endif
```
The issue is that I call `replace` on each token individually, not considering the tokens following. Relevant code: https://github.com/jyn514/rcc/blob/12a71429afd9f7c5d2576f509527c8e91dba81e5/src/lex/cpp.rs#L701
The fix is to pass in the following tokens as well as the current token.
|
process
|
cannot use function macros in if macros the following code has no output when run with e c define f a if f a success endif the issue is that i call replace on each token individually not considering the tokens following relevant code the fix is to pass in the following tokens as well as the current token
| 1
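The fix described in the record above — expanding a function-like macro requires looking at the tokens that follow the macro name, not just the name itself — can be sketched in Python. This is a toy token-stream expander (no nesting or varargs), not the actual rcc/saltwater code:

```python
# Toy model of function-like macro expansion. Replacing tokens one at a
# time fails for `f(a)`: deciding whether `f` expands requires peeking
# at the following "(" and collecting the argument tokens.
def expand(tokens, macros):
    """Expand function-like macros over a token list.

    `macros` maps a name to (body_tokens, param_names).
    """
    out, i = [], 0
    while i < len(tokens):
        tok = tokens[i]
        if tok in macros and i + 1 < len(tokens) and tokens[i + 1] == "(":
            # Collect argument tokens up to the closing ")".
            j = i + 2
            args = []
            while tokens[j] != ")":
                args.append(tokens[j])
                j += 1
            body, params = macros[tok]
            out.extend(args[params.index(t)] if t in params else t
                       for t in body)
            i = j + 1  # resume after ")"
        else:
            out.append(tok)
            i += 1
    return out

# With `#define f(a) 1`, the tokens for `#if f(x)` expand to `#if 1`.
```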
|
265,517
| 8,355,415,773
|
IssuesEvent
|
2018-10-02 15:38:48
|
amasson99/JEA-website
|
https://api.github.com/repos/amasson99/JEA-website
|
closed
|
Contact form on front page needs captcha
|
Priority: High Read
|
@amasson99 :
Spammers are hitting our front page contact form very heavily. I suspect this has gone on for some time, but our old email service provider was blocking the spam emails coming to staff@jea.org. GMail is not blocking them, and our staff inbox is getting flooded with spam.
Please address this as soon as possible.
I suggest removing the contact form (provided by the theme?) and replacing it with a contact form created with one of the existing form plugins we have. They have support for CAPTCHA, whereas the default contact form does not. Also, investigate how we can invoke Akismet to prevent form spam.
|
1.0
|
Contact form on front page needs captcha - @amasson99 :
Spammers are hitting our front page contact form very heavily. I suspect this has gone on for some time, but our old email service provider was blocking the spam emails coming to staff@jea.org. GMail is not blocking them, and our staff inbox is getting flooded with spam.
Please address this as soon as possible.
I suggest removing the contact form (provided by the theme?) and replacing it with a contact form created with one of the existing form plugins we have. They have support for CAPTCHA, whereas the default contact form does not. Also, investigate how we can invoke Akismet to prevent form spam.
|
non_process
|
contact form on front page needs captcha spammers are hitting our front page contact form very heavily i suspect this has gone on for some time but our old email service provider was blocking the spam emails coming to staff jea org gmail is not blocking them and our staff inbox is getting flooded with spam please address this as soon as possible i suggest removing the contact form provided by the theme and replacing it with a contact form created with one of the existing form plugins we have they have support for captcha whereas the default contact form does not also investigate how we can invoke akismet to prevent form spam
| 0
|
18,568
| 24,555,946,723
|
IssuesEvent
|
2022-10-12 15:52:39
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Android] [Offline indicator] There should be consistency in offline error message which is getting displayed for terms and privacy policy screen
|
Bug P1 Android Process: Fixed Process: Tested QA Process: Tested dev
|
There should be consistency in offline error message which is getting displayed for terms and privacy policy screen
i.e., There are two different screens which are getting displayed for terms and privacy policy screens
**First screen:**

**Second screen:**

|
3.0
|
[Android] [Offline indicator] There should be consistency in offline error message which is getting displayed for terms and privacy policy screen - There should be consistency in offline error message which is getting displayed for terms and privacy policy screen
i.e., There are two different screens which are getting displayed for terms and privacy policy screens
**First screen:**

**Second screen:**

|
process
|
there should be consistency in offline error message which is getting displayed for terms and privacy policy screen there should be consistency in offline error message which is getting displayed for terms and privacy policy screen i e there are two different screens which are getting displayed for terms and privacy policy screens first screen second screen
| 1
|
175,567
| 27,880,958,839
|
IssuesEvent
|
2023-03-21 19:21:27
|
BuilderIO/qwik
|
https://api.github.com/repos/BuilderIO/qwik
|
reopened
|
`useClientEffect$()` code is not triggered when loading fragments dynamically
|
bug needs design Priority
|
### Qwik Version
0.11.1
### Operating System (or Browser)
Any
### Node Version (if applicable)
Any
### Which component is affected?
Qwik Runtime
### Expected Behaviour
At Cloudflare we are developing an approach where we load new Qwik applications (fragments) into the browser at runtime.
When the new fragment is loaded after the qwik-loader has already completed, we still expect code that is marked as `eagerness: "visible"` (the default) to have an intersection observer attached and therefore run when the component is visible.
### Actual Behaviour
The qwik-loader only runs and finds code that is marked as `eagerness: "visible"` when the document `readystatechange` event is triggered and not when subsequent qwik applications are added to the DOM dynamically.
### Additional Information
The workaround is to mark such code with `eagerness: "load"` and then dispatch a `q:init` event onto the document, which triggers these code blocks to run. (It probably causes a bunch of other blocks to run a second time, which is unfortunate.)
Ideally there should be a way to tell the qwik-loader that more qwik code has been added to the DOM that needs to be initialized and wired up.
|
1.0
|
`useClientEffect$()` code is not triggered when loading fragments dynamically - ### Qwik Version
0.11.1
### Operating System (or Browser)
Any
### Node Version (if applicable)
Any
### Which component is affected?
Qwik Runtime
### Expected Behaviour
At Cloudflare we are developing an approach where we load new Qwik applications (fragments) into the browser at runtime.
When the new fragment is loaded after the qwik-loader has already completed, we still expect code that is marked as `eagerness: "visible"` (the default) to have an intersection observer attached and therefore run when the component is visible.
### Actual Behaviour
The qwik-loader only runs and finds code that is marked as `eagerness: "visible"` when the document `readystatechange` event is triggered and not when subsequent qwik applications are added to the DOM dynamically.
### Additional Information
The workaround is to mark such code with `eagerness: "load"` and then dispatch a `q:init` event onto the document, which triggers these code blocks to run. (It probably causes a bunch of other blocks to run a second time, which is unfortunate.)
Ideally there should be a way to tell the qwik-loader that more qwik code has been added to the DOM that needs to be initialized and wired up.
|
non_process
|
useclienteffect code is not triggered when loading fragments dynamically qwik version operating system or browser any node version if applicable any which component is affected qwik runtime expected behaviour at cloudflare we are developing an approach where we load new qwik applications fragments into the browser at runtime when the new fragment is loaded after the qwik loader has already completed we still expect code that is marked as eagerness visible the default to have an intersection observer attached and therefore run when the component is visible actual behaviour the qwik loader only runs and finds code that is marked as eagerness visible when the document readystatechange event is triggered and not when subsequent qwik applications are added to the dom dynamically additional information the workaround is to mark such code with eagerness load and then dispatch a q init event onto the document which triggers these code blocks to run it probably causes a bunch of other blocks to run a second time which is unfortunate ideally there should be a way to tell the qwik loader that more qwik code has been added to the dom that needs to be initialized and wired up
| 0
|
14,478
| 17,599,425,881
|
IssuesEvent
|
2021-08-17 09:55:13
|
didi/mpx
|
https://api.github.com/repos/didi/mpx
|
closed
|
[Bug report] Need to support Baidu mini-program templates where control attributes are not wrapped in {{}}
|
processing
|
**Problem description**
1. In Baidu mini-programs, control attributes in a template that are not wrapped in {{}} are executed incorrectly
2. Reproduction demo

3. The source code has a TODO marker

|
1.0
|
[Bug report] Need to support Baidu mini-program templates where control attributes are not wrapped in {{}} - **Problem description**
1. In Baidu mini-programs, control attributes in a template that are not wrapped in {{}} are executed incorrectly
2. Reproduction demo

3. The source code has a TODO marker

|
process
|
need to support baidu mini program templates where control attributes are not wrapped in problem description in baidu mini programs control attributes in a template that are not wrapped in are executed incorrectly reproduction demo the source code has a todo marker
| 1
|
8,565
| 11,737,590,235
|
IssuesEvent
|
2020-03-11 14:50:59
|
MHRA/products
|
https://api.github.com/repos/MHRA/products
|
opened
|
Remove completed messages from Service Bus
|
EPIC - Auto Batch Process :oncoming_automobile:
|
## Remove completed messages from Service Bus
As a drug scientist creator regulator person
I want to upload new or destroy old documents
So that patients have access to the best data
## Acceptance Criteria
_Fill/delete these as appropriate. Add multiple criteria under a heading if necessary._
### Customer acceptance criteria
- [ ] Jobs for creation are eventually picked up
- [ ] Jobs for deletion are eventually picked up
### Technical acceptance criteria
- [ ] We remove from the Service Bus messages which we know have been successful
- [ ] We retain on the Service Bus messages which we do not know have been successful
### Testing acceptance criteria
- [ ] Tests cover the removal of messages
- [ ] Tests cover the ability to pick up a message twice when removal hasn't occurred
## Data - Potential impact
**Size**
S
**Value**
Huge
**Effort**
Is this different from Size?
### Exit Criteria met
- [ ] Backlog
- [ ] Discovery
- [ ] DUXD
- [ ] Development
- [ ] Quality Assurance
- [ ] Release and Validate
|
1.0
|
Remove completed messages from Service Bus - ## Remove completed messages from Service Bus
As a drug scientist creator regulator person
I want to upload new or destroy old documents
So that patients have access to the best data
## Acceptance Criteria
_Fill/delete these as appropriate. Add multiple criteria under a heading if necessary._
### Customer acceptance criteria
- [ ] Jobs for creation are eventually picked up
- [ ] Jobs for deletion are eventually picked up
### Technical acceptance criteria
- [ ] We remove from the Service Bus messages which we know have been successful
- [ ] We retain on the Service Bus messages which we do not know have been successful
### Testing acceptance criteria
- [ ] Tests cover the removal of messages
- [ ] Tests cover the ability to pick up a message twice when removal hasn't occurred
## Data - Potential impact
**Size**
S
**Value**
Huge
**Effort**
Is this different from Size?
### Exit Criteria met
- [ ] Backlog
- [ ] Discovery
- [ ] DUXD
- [ ] Development
- [ ] Quality Assurance
- [ ] Release and Validate
|
process
|
remove completed messages from service bus remove completed messages from service bus as a drug scientist creator regulator person i want to upload new or destroy old documents so that patients have access to the best data acceptance criteria fill delete these as appropriate add multiple criteria under a heading if necessary customer acceptance criteria jobs for creation are eventually picked up jobs for deletion are eventually picked up technical acceptance criteria we remove from the service bus messages which we know have been successful we retain on the service bus messages which we do not know have been successful testing acceptance criteria tests cover the removal of messages tests cover the ability to pick up a message twice when removal hasn t occurred data potential impact size s value huge effort is this different from size exit criteria met backlog discovery duxd development quality assurance release and validate
| 1
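The technical acceptance criteria in the record above (complete messages known to have succeeded, retain the rest so they can be picked up again) follow the standard peek-lock pattern. A minimal sketch in plain Python, with a hypothetical in-memory queue standing in for the real Service Bus client:

```python
# At-least-once processing: a message is removed ("completed") only when
# handling succeeds; on failure it is retained and stays available for
# redelivery. InMemoryBus is a stand-in for the Service Bus client.
class InMemoryBus:
    def __init__(self, messages):
        self.messages = list(messages)

    def peek(self):
        return self.messages[0] if self.messages else None

    def complete(self, msg):
        self.messages.remove(msg)

def process_queue(bus, handler):
    """Drain the bus, completing only messages the handler accepts."""
    retained = []
    while (msg := bus.peek()) is not None:
        try:
            handler(msg)
        except Exception:
            retained.append(msg)      # success not known: do not complete
            bus.messages.remove(msg)  # set aside only so the toy loop terminates
        else:
            bus.complete(msg)         # success known: remove from the bus
    bus.messages.extend(retained)     # retained messages remain for redelivery
    return retained
```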
|
329,871
| 28,312,225,917
|
IssuesEvent
|
2023-04-10 16:24:18
|
nrwl/nx
|
https://api.github.com/repos/nrwl/nx
|
closed
|
nx e2e --spec command detects the correct files, but only passes the first file it finds to Cypress
|
type: bug scope: testing tools
|
### Current Behavior
When using the `--spec` flag with `nx e2e`, nx correctly identifies all files matching the glob, but only passes the first file to Cypress for testing. The underlying app is a Next JS app.
Folder structure:
```
apps/
└── e2e/
└── client-e2e/
├── src/
│ ├── e2e/
│ │ └── iot/
│ │ ├── history.spec.ts
│ │ └── layout.spec.ts
│ ├── fixtures
│ ├── plugins
│ └── support
├── cypress.config.ts
├── project.json
└── tsconfig.json
```
Terminal:
```
nx e2e client-e2e --spec **/e2e/iot/**/*
> nx run client-e2e:e2e --spec apps/e2e/client-e2e/src/e2e/iot/history.spec.ts apps/e2e/client-e2e/src/e2e/iot/layout.spec.ts
========================================================================
(Run Starting)
Cypress: 12.8.1
Browser: Electron 106 (headless)
Node Version: v18.13.0 (/usr/local/bin/node)
Specs: 1 found (history.spec.ts)
Searched: src/e2e/iot/history.spec.ts
> Test logs obfuscated
========================================================================
(Run Finished)
Spec Tests Passing Failing Pending Skipped
✖ history.spec.ts 00:09 3 2 1 - -
✖ 1 of 1 failed (100%) 00:09 3 2 1 - -
> NX Ran target e2e for project client-e2e (19s)
With additional flags:
apps/e2e/client-e2e/src/e2e/iot/layout.spec.ts
--spec=apps/e2e/client-e2e/src/e2e/iot/history.spec.ts
✖ 1/1 failed
✔ 0/1 succeeded [0 read from cache]
```
Looks like the `--spec` flag is not added to all files when passing to Cypress, just the first one found.
cypress.config.ts
```ts
const cypressJsonConfig = {
fileServerFolder: ".",
fixturesFolder: "./src/fixtures",
video: true,
videosFolder: "../../../dist/cypress/apps/client-e2e/videos",
screenshotsFolder: "../../../dist/cypress/apps/client-e2e/screenshots",
chromeWebSecurity: false,
specPattern: "src/e2e/**/*.spec.{js,jsx,ts,tsx}",
supportFile: "src/support/e2e.ts",
downloadsFolder: "cypress/downloads",
/**
* TODO(@nrwl/cypress): In Cypress v12,the testIsolation option is turned on by default.
 * This can cause tests to start breaking where not intended.
* You should consider enabling this once you verify tests do not depend on each other
* More Info: https://docs.cypress.io/guides/references/migration-guide#Test-Isolation
**/
testIsolation: false,
};
export default defineConfig({
e2e: {
...nxE2EPreset(__dirname),
...cypressJsonConfig,
setupNodeEvents(on, config) {
on("task", {
readXlsx: (cfg) => read(cfg),
removeFolder: (cfg) => removeFolder(cfg),
getFiles: (cfg) => getFiles(cfg),
});
},
},
});
```
project.json
```json
{
"name": "client-e2e",
"$schema": "../../../node_modules/nx/schemas/project-schema.json",
"sourceRoot": "apps/e2e/client-e2e/src",
"projectType": "application",
"targets": {
"e2e": {
"executor": "@nrwl/cypress:cypress",
"options": {
"cypressConfig": "apps/e2e/client-e2e/cypress.config.ts",
"devServerTarget": "client:serve",
"testingType": "e2e"
},
"configurations": {
"production": {
"devServerTarget": "client:serve:production"
}
}
},
"lint": {
"executor": "@nrwl/linter:eslint",
"outputs": ["{options.outputFile}"],
"options": {
"lintFilePatterns": ["apps/client-e2e/**/*.{js,ts,jsx,tsx}"]
}
}
},
"tags": [],
"implicitDependencies": ["client"]
}
```
### Expected Behavior
All found files should be passed to Cypress for testing.
### GitHub Repo
_No response_
### Steps to Reproduce
1. Create a new E2E project
2. Add multiple tests
3. Try to run them via `nx e2e --spec`
### Nx Report
```shell
Node : 18.13.0
OS : linux x64
yarn : 1.22.19
nx : 15.8.9
@nrwl/js : 15.8.9
@nrwl/jest : 15.8.9
@nrwl/linter : 15.8.9
@nrwl/workspace : 15.8.9
@nrwl/cli : 15.8.9
@nrwl/cypress : 15.8.9
@nrwl/devkit : 15.8.9
@nrwl/eslint-plugin-nx : 15.8.9
@nrwl/express : 15.8.9
@nrwl/next : 15.8.9
@nrwl/node : 15.8.9
@nrwl/nx-plugin : 15.8.9
@nrwl/react : 15.8.9
@nrwl/rollup : 15.8.9
@nrwl/storybook : 15.8.9
@nrwl/tao : 15.8.9
@nrwl/web : 15.8.9
@nrwl/webpack : 15.8.9
typescript : 4.9.5
---------------------------------------
Community plugins:
rxjs : 6.6.7
```
### Failure Logs
_No response_
### Additional Information
_No response_
|
1.0
|
nx e2e --spec command detects the correct files, but only passes the first file it finds to Cypress - ### Current Behavior
When using the `--spec` flag with `nx e2e`, nx correctly identifies all files matching the glob, but only passes the first file to Cypress for testing. The underlying app is a Next JS app.
Folder structure:
```
apps/
└── e2e/
└── client-e2e/
├── src/
│ ├── e2e/
│ │ └── iot/
│ │ ├── history.spec.ts
│ │ └── layout.spec.ts
│ ├── fixtures
│ ├── plugins
│ └── support
├── cypress.config.ts
├── project.json
└── tsconfig.json
```
Terminal:
```
nx e2e client-e2e --spec **/e2e/iot/**/*
> nx run client-e2e:e2e --spec apps/e2e/client-e2e/src/e2e/iot/history.spec.ts apps/e2e/client-e2e/src/e2e/iot/layout.spec.ts
========================================================================
(Run Starting)
Cypress: 12.8.1
Browser: Electron 106 (headless)
Node Version: v18.13.0 (/usr/local/bin/node)
Specs: 1 found (history.spec.ts)
Searched: src/e2e/iot/history.spec.ts
> Test logs obfuscated
========================================================================
(Run Finished)
Spec Tests Passing Failing Pending Skipped
✖ history.spec.ts 00:09 3 2 1 - -
✖ 1 of 1 failed (100%) 00:09 3 2 1 - -
> NX Ran target e2e for project client-e2e (19s)
With additional flags:
apps/e2e/client-e2e/src/e2e/iot/layout.spec.ts
--spec=apps/e2e/client-e2e/src/e2e/iot/history.spec.ts
✖ 1/1 failed
✔ 0/1 succeeded [0 read from cache]
```
Looks like the `--spec` flag is not added to all files when passing to Cypress, just the first one found.
cypress.config.ts
```ts
const cypressJsonConfig = {
fileServerFolder: ".",
fixturesFolder: "./src/fixtures",
video: true,
videosFolder: "../../../dist/cypress/apps/client-e2e/videos",
screenshotsFolder: "../../../dist/cypress/apps/client-e2e/screenshots",
chromeWebSecurity: false,
specPattern: "src/e2e/**/*.spec.{js,jsx,ts,tsx}",
supportFile: "src/support/e2e.ts",
downloadsFolder: "cypress/downloads",
/**
* TODO(@nrwl/cypress): In Cypress v12,the testIsolation option is turned on by default.
 * This can cause tests to start breaking where not intended.
* You should consider enabling this once you verify tests do not depend on each other
* More Info: https://docs.cypress.io/guides/references/migration-guide#Test-Isolation
**/
testIsolation: false,
};
export default defineConfig({
e2e: {
...nxE2EPreset(__dirname),
...cypressJsonConfig,
setupNodeEvents(on, config) {
on("task", {
readXlsx: (cfg) => read(cfg),
removeFolder: (cfg) => removeFolder(cfg),
getFiles: (cfg) => getFiles(cfg),
});
},
},
});
```
project.json
```json
{
"name": "client-e2e",
"$schema": "../../../node_modules/nx/schemas/project-schema.json",
"sourceRoot": "apps/e2e/client-e2e/src",
"projectType": "application",
"targets": {
"e2e": {
"executor": "@nrwl/cypress:cypress",
"options": {
"cypressConfig": "apps/e2e/client-e2e/cypress.config.ts",
"devServerTarget": "client:serve",
"testingType": "e2e"
},
"configurations": {
"production": {
"devServerTarget": "client:serve:production"
}
}
},
"lint": {
"executor": "@nrwl/linter:eslint",
"outputs": ["{options.outputFile}"],
"options": {
"lintFilePatterns": ["apps/client-e2e/**/*.{js,ts,jsx,tsx}"]
}
}
},
"tags": [],
"implicitDependencies": ["client"]
}
```
### Expected Behavior
All found files should be passed to Cypress for testing.
### GitHub Repo
_No response_
### Steps to Reproduce
1. Create a new E2E project
2. Add multiple tests
3. Try to run them via `nx e2e --spec`
### Nx Report
```shell
Node : 18.13.0
OS : linux x64
yarn : 1.22.19
nx : 15.8.9
@nrwl/js : 15.8.9
@nrwl/jest : 15.8.9
@nrwl/linter : 15.8.9
@nrwl/workspace : 15.8.9
@nrwl/cli : 15.8.9
@nrwl/cypress : 15.8.9
@nrwl/devkit : 15.8.9
@nrwl/eslint-plugin-nx : 15.8.9
@nrwl/express : 15.8.9
@nrwl/next : 15.8.9
@nrwl/node : 15.8.9
@nrwl/nx-plugin : 15.8.9
@nrwl/react : 15.8.9
@nrwl/rollup : 15.8.9
@nrwl/storybook : 15.8.9
@nrwl/tao : 15.8.9
@nrwl/web : 15.8.9
@nrwl/webpack : 15.8.9
typescript : 4.9.5
---------------------------------------
Community plugins:
rxjs : 6.6.7
```
### Failure Logs
_No response_
### Additional Information
_No response_
|
non_process
|
| 0
|
14,183
| 17,089,952,029
|
IssuesEvent
|
2021-07-08 16:07:01
|
Arch666Angel/mods
|
https://api.github.com/repos/Arch666Angel/mods
|
closed
|
[BUG] Angel's Bio Processing non-recoverable error
|
Angels Bio Processing Impact: Bug
|
**Describe the bug**
This bug occurs in a multiplayer game when you try to select an item from another player's inventory.
**To Reproduce**
1. Game Version: **1.1.35**
2. Mod list:
[AutoTrash](https://mods.factorio.com/mod/AutoTrash) v5.3.13
[BlueprintTools](https://mods.factorio.com/mod/BlueprintTools) v1.0.0
[BottleneckLite](https://mods.factorio.com/mod/BottleneckLite) v1.2.0
[Clockwork](https://mods.factorio.com/mod/Clockwork) v1.1.0
[Clowns-AngelBob-Nuclear](https://mods.factorio.com/mod/Clowns-AngelBob-Nuclear) v1.1.18
[Clowns-Extended-Minerals](https://mods.factorio.com/mod/Clowns-Extended-Minerals) v1.1.23
[Clowns-Nuclear](https://mods.factorio.com/mod/Clowns-Nuclear) v1.3.15
[Clowns-Processing](https://mods.factorio.com/mod/Clowns-Processing) v1.3.15
[CopyPasteModules](https://mods.factorio.com/mod/CopyPasteModules) v0.0.6
[DiscoScience](https://mods.factorio.com/mod/DiscoScience) v1.1.3
[LoaderRedux](https://mods.factorio.com/mod/LoaderRedux) v1.7.1
[MatrixDJ96-Angel-Modpack](https://mods.factorio.com/mod/MatrixDJ96-Angel-Modpack) v0.0.4
[MatrixDJ96-Bob-Modpack](https://mods.factorio.com/mod/MatrixDJ96-Bob-Modpack) v0.0.3
[MatrixDJ96-NoSteamInserter](https://mods.factorio.com/mod/MatrixDJ96-NoSteamInserter) v0.0.1
[MatrixDJ96-QOL-Modpack](https://mods.factorio.com/mod/MatrixDJ96-QOL-Modpack) v0.0.3
[ModuleInserter](https://mods.factorio.com/mod/ModuleInserter) v5.2.4
[PickerDollies](https://mods.factorio.com/mod/PickerDollies) v1.1.6
[Placeables](https://mods.factorio.com/mod/Placeables) v1.0.0
[Rampant](https://mods.factorio.com/mod/Rampant) v1.1.1
[RateCalculator](https://mods.factorio.com/mod/RateCalculator) v2.2.0
[RecipeBook](https://mods.factorio.com/mod/RecipeBook) v2.7.1
[StatsGui](https://mods.factorio.com/mod/StatsGui) v1.2.0
[Todo-List](https://mods.factorio.com/mod/Todo-List) v19.2.0
[UI_Hotkeys](https://mods.factorio.com/mod/UI_Hotkeys) v1.0.1
[VehicleSnap](https://mods.factorio.com/mod/VehicleSnap) v1.18.4
[WoodDoesBurn](https://mods.factorio.com/mod/WoodDoesBurn) v1.1.0
[YARM](https://mods.factorio.com/mod/YARM) v0.8.203
[aai-industry](https://mods.factorio.com/mod/aai-industry) v0.5.9
[alien-biomes](https://mods.factorio.com/mod/alien-biomes) v0.6.5
[alien-biomes-hr-terrain](https://mods.factorio.com/mod/alien-biomes-hr-terrain) v0.6.1
[angelsaddons-cab](https://mods.factorio.com/mod/angelsaddons-cab) v0.2.7
[angelsaddons-mobility](https://mods.factorio.com/mod/angelsaddons-mobility) v0.0.8
[angelsaddons-storage](https://mods.factorio.com/mod/angelsaddons-storage) v0.0.6
[angelsbioprocessing](https://mods.factorio.com/mod/angelsbioprocessing) v0.7
[angelsexploration](https://mods.factorio.com/mod/angelsexploration) v0.3.10
[angelsindustries](https://mods.factorio.com/mod/angelsindustries) v0.4.13
[angelsinfiniteores](https://mods.factorio.com/mod/angelsinfiniteores) v0.9.9
[angelspetrochem](https://mods.factorio.com/mod/angelspetrochem) v0.9.19
[angelsrefining](https://mods.factorio.com/mod/angelsrefining) v0.11.21
[angelssmelting](https://mods.factorio.com/mod/angelssmelting) v0.6.16
[baraws](https://mods.factorio.com/mod/baraws) v1.1.1
[bobassembly](https://mods.factorio.com/mod/bobassembly) v1.1.3
[bobelectronics](https://mods.factorio.com/mod/bobelectronics) v1.1.3
[bobenemies](https://mods.factorio.com/mod/bobenemies) v1.1.1
[bobequipment](https://mods.factorio.com/mod/bobequipment) v1.1.2
[bobgreenhouse](https://mods.factorio.com/mod/bobgreenhouse) v1.1.0
[bobinserters](https://mods.factorio.com/mod/bobinserters) v1.1.0
[boblibrary](https://mods.factorio.com/mod/boblibrary) v1.1.4
[boblogistics](https://mods.factorio.com/mod/boblogistics) v1.1.3
[bobmining](https://mods.factorio.com/mod/bobmining) v1.1.3
[bobmodules](https://mods.factorio.com/mod/bobmodules) v1.1.2
[bobores](https://mods.factorio.com/mod/bobores) v1.1.3
[bobplates](https://mods.factorio.com/mod/bobplates) v1.1.3
[bobpower](https://mods.factorio.com/mod/bobpower) v1.1.3
[bobrevamp](https://mods.factorio.com/mod/bobrevamp) v1.1.3
[bobtech](https://mods.factorio.com/mod/bobtech) v1.1.3
[bobvehicleequipment](https://mods.factorio.com/mod/bobvehicleequipment) v1.1.2
[bobwarfare](https://mods.factorio.com/mod/bobwarfare) v1.1.3
[bullet-trails](https://mods.factorio.com/mod/bullet-trails) v0.6.1
[calculator-ui](https://mods.factorio.com/mod/calculator-ui) v1.1.1
[car-finder](https://mods.factorio.com/mod/car-finder) v1.5.2
[even-distribution](https://mods.factorio.com/mod/even-distribution) v1.0.8
[factoryplanner](https://mods.factorio.com/mod/factoryplanner) v1.1.21
[flib](https://mods.factorio.com/mod/flib) v0.7.0
[guiManager](https://mods.factorio.com/mod/guiManager) v0.1.5
[power-grid-comb](https://mods.factorio.com/mod/power-grid-comb) v1.1.0
[reskins-angels](https://mods.factorio.com/mod/reskins-angels) v1.1.8
[reskins-bobs](https://mods.factorio.com/mod/reskins-bobs) v1.1.10
[reskins-compatibility](https://mods.factorio.com/mod/reskins-compatibility) v1.1.4
[reskins-library](https://mods.factorio.com/mod/reskins-library) v1.1.7
[rso-mod](https://mods.factorio.com/mod/rso-mod) v6.2.8
[stdlib](https://mods.factorio.com/mod/stdlib) v1.4.6
[textplates](https://mods.factorio.com/mod/textplates) v0.6.3
**Screenshots**
<img src="https://user-images.githubusercontent.com/8330294/124889504-7216ed80-dfd7-11eb-90bd-2023caf5a6b1.png" width="75%">
**Possible solution**
On the ***on_player_cursor_stack_changed*** event there is this code at line **109**:
```lua
local player = game.get_player(event.player_index)
local opened_entity = player.opened
if not (opened_entity and opened_entity.valid and opened_entity.object_name ~= 'LuaEquipmentGrid' and
(opened_entity.type == "lab" or opened_entity.type == "mining-drill")) then return end
```
According to the [documentation](https://lua-api.factorio.com/latest/LuaControl.html#LuaControl.opened), I think the easiest way to fix this bug is to change the _if condition_ from ```opened_entity.object_name ~= 'LuaEquipmentGrid'``` to ```opened_entity.object_name == 'LuaEntity'```, because only **LuaEntity** has the `type` key (```opened_entity.type```).
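To illustrate why the narrower check matters, here is a small model of the two guard conditions in TypeScript — the objects and the throwing property access only simulate the Lua runtime behaviour (where indexing a missing key on API userdata errors out); this is not Factorio API code:

```typescript
// Hypothetical model of an `opened` object; mirrors the Lua snippet's names.
type Opened = { object_name: string; valid: boolean; type?: string };

// Like Factorio's LuaInventory, this object has no `type` key; reading it
// here throws, to mimic the Lua runtime error from the bug report.
const otherInventory: Opened = new Proxy(
  { object_name: "LuaInventory", valid: true },
  {
    get(target, prop) {
      if (prop === "type") throw new Error("no such key: type");
      return (target as any)[prop];
    },
  }
) as Opened;

// Old guard: only excludes LuaEquipmentGrid, so it still reads `.type`
// on other non-entity objects and crashes.
function oldGuard(opened: Opened): boolean {
  return (
    opened.valid &&
    opened.object_name !== "LuaEquipmentGrid" &&
    (opened.type === "lab" || opened.type === "mining-drill")
  );
}

// Proposed guard: short-circuits before `.type` unless the object is a LuaEntity.
function newGuard(opened: Opened): boolean {
  return (
    opened.valid &&
    opened.object_name === "LuaEntity" &&
    (opened.type === "lab" || opened.type === "mining-drill")
  );
}

let oldThrew = false;
try {
  oldGuard(otherInventory); // reaches `.type` on a non-entity
} catch {
  oldThrew = true;
}
console.log(oldThrew);                 // the old condition crashes
console.log(newGuard(otherInventory)); // the new condition safely returns false
```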
|
1.0
|
|
process
|
| 1
|
92,990
| 10,764,430,745
|
IssuesEvent
|
2019-11-01 08:15:23
|
shihaoyap/ped
|
https://api.github.com/repos/shihaoyap/ped
|
opened
|
UG gave invalid command examples:
|
severity.High type.DocumentationBug
|
1) add –w /name Mary /phoneNo 87654321 /sex female /dateJoined 18/08/2019 /designation Autopsy Technician
[due to invalid phone number]
2) add –d /name John Doe /sex male /dob 12/12/1984 /dod 12/08/2019 2359 /doa 13/08/2019 0200 /status contactedNOK /nric S8456372C /religion Catholic /nameNOK Jack Smith /relationship Husband /phoneNOK 83462756 /cod Car Accident /details Heavy bleeding and head injury /organsForDonation NIL /fridgeId 2
[throws invalid command]
|
1.0
|
|
non_process
|
| 0
|
16,862
| 22,142,947,997
|
IssuesEvent
|
2022-06-03 08:52:47
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
closed
|
Batch upload with zbctl does not initiate message subscriptions
|
kind/bug blocker/info team/process-automation
|
**Describe the bug**
If I do a batch upload with zbctl, and some of the processes have a start message, no message subscriptions are made.
If I do a single upload with zbctl, it works correctly
**To Reproduce**
Have a couple of BPMN files, at least one with a message start event.
Upload them with zbctl using a glob pattern:
`zbctl deploy *.bpmn`.
They are getting deployed successfully, but there are no message subscriptions for the message start events.
**Expected behavior**
There should be subscriptions for the message start events.
**Environment:**
- OS: Ubuntu 21.10
- Zeebe Version: 1.3.0 / zbctl 1.3.3
|
1.0
|
|
process
|
| 1
|
52,968
| 13,096,384,401
|
IssuesEvent
|
2020-08-03 15:35:45
|
openthread/openthread
|
https://api.github.com/repos/openthread/openthread
|
closed
|
dua_manager.cpp bad ifdefs around prefix and exit
|
Thread 1.2 bug comp: build
|
**Describe the bug** A clear and concise description of what the bug is.
Compile error with the following command line
**To Reproduce** Information to reproduce the behavior, including:
1. Git commit id
5276 (I think)
2. IEEE 802.15.4 hardware platform
nrf52840dk
3. Build steps
make -f examples/Makefile-nrf52840 clean
make -f examples/Makefile-nrf52840 COMMISSIONER=1 JOINER=1 COAP=1 DNS_CLIENT=1 MTD_NETDIAG=1 BORDER_ROUTER=1 MAC_FILTER=1 UDP_PROXY=1 BORDER_AGENT=1 SNTP_CLIENT=1 THREAD_VERSION=1.2 TIME_SYNC=1
4. Network topology
**Expected behavior** A clear and concise description of what you expected to happen.
Clean compile
**Console/log output** If applicable, add console/log output to help explain your problem.
CXX thread/libopenthread_ftd_a-network_data_leader_ftd.o
/Users/riedlse/Documents/GitHub/openthread/examples/../src/core/thread/dua_manager.cpp: In member function 'void ot::DuaManager::HandleDomainPrefixUpdate(ot::BackboneRouter::Leader::DomainPrefixState)':
/Users/riedlse/Documents/GitHub/openthread/examples/../src/core/thread/dua_manager.cpp:92:24: error: unused variable 'prefix' [-Werror=unused-variable]
92 | const Ip6::Prefix *prefix = nullptr;
| ^~~~~~
/Users/riedlse/Documents/GitHub/openthread/examples/../src/core/thread/dua_manager.cpp:150:1: error: label 'exit' defined but not used [-Werror=unused-label]
150 | exit:
| ^~~~
CXX thread/libopenthread_ftd_a-network_data_local.o
**Additional context** Add any other context about the problem here.
I think the fix is to move the #endif from line 148 to line 151 and add
#if OPENTHREAD_CONFIG_DUA_ENABLE / #endif around line 92, but I'm not sure what the code is doing, so I don't want to commit.
|
1.0
|
|
non_process
|
| 0
|
9,141
| 12,203,188,239
|
IssuesEvent
|
2020-04-30 10:10:24
|
MHRA/products
|
https://api.github.com/repos/MHRA/products
|
closed
|
AUTOMATIC BATCH PROCESS - State Manager responds to messages in order to update file state
|
EPIC - Auto Batch Process :oncoming_automobile: HIGH PRIORITY :arrow_double_up: TASK :rescue_worker_helmet:
|
### User want
As a user
I want to see up to date documents on the products website
So I can make informed decisions
### Acceptance Criteria
**Customer acceptance criteria**
**Technical acceptance criteria**
The state manager exposes methods to receive updates and update file status in k:v store.
**Data acceptance criteria**
**Testing acceptance criteria**
**Size**
M
**Value**
**Effort**
### Exit Criteria met
- [x] Backlog
- [x] Discovery
- [x] DUXD
- [x] Development
- [ ] Quality Assurance
- [ ] Release and Validate
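The technical acceptance criterion above can be sketched as a tiny in-memory key:value state manager — the class, method names, and state values here are illustrative assumptions, not the actual implementation:

```typescript
// Hypothetical file states for documents moving through the batch process.
type FileState = "uploaded" | "processing" | "published" | "failed";

interface StateUpdateMessage {
  fileId: string;
  newState: FileState;
}

// Illustrative state manager: responds to update messages and records the
// latest state of each file in an in-memory key:value store.
class StateManager {
  private store = new Map<string, FileState>();

  // Method exposed to receive an update message.
  receiveUpdate(msg: StateUpdateMessage): void {
    this.store.set(msg.fileId, msg.newState);
  }

  // Read back the current state of a file, if known.
  getState(fileId: string): FileState | undefined {
    return this.store.get(fileId);
  }
}

const manager = new StateManager();
manager.receiveUpdate({ fileId: "doc-123", newState: "processing" });
manager.receiveUpdate({ fileId: "doc-123", newState: "published" });
console.log(manager.getState("doc-123")); // latest state wins
```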
|
1.0
|
|
process
|
| 1
|
2,039
| 4,847,513,716
|
IssuesEvent
|
2016-11-10 15:09:33
|
Alfresco/alfresco-ng2-components
|
https://api.github.com/repos/Alfresco/alfresco-ng2-components
|
opened
|
form attached to start event still displayed for previous process when switching between processes
|
browser: all bug comp: activiti-processList
|
1. Start a process
2. Select a process that has a form attached to its start event
3. Select a process that does not have a form attached to its start event
**Expected results**
Form attached to start event for previous process is not displayed
**Actual results**
Form attached to start event for previous process is displayed
|
1.0
|
|
process
|
| 1
|
283,012
| 24,513,292,518
|
IssuesEvent
|
2022-10-11 00:57:03
|
pulumi/pulumi-yaml
|
https://api.github.com/repos/pulumi/pulumi-yaml
|
closed
|
Add integration tests that fail type checking
|
kind/engineering area/testing
|
## Hello!
<!-- Please leave this section as-is, it's designed to help others in the community know how to interact with our GitHub issues. -->
- Vote on this issue by adding a 👍 reaction
- If you want to implement this feature, comment to let us know (we'll work with you on design, scheduling, etc.)
## Issue details
We had a panic on all programs that didn't type check (#345), and this merged into master. We should catch such problems in automated tests.
<!-- Enhancement requests are most helpful when they describe the problem you're having as well as articulating the potential solution you'd like to see built. -->
### Affected area/feature
<!-- If you know the specific area where this feature request would go (e.g. Automation API, the Pulumi Service, the Terraform bridge, etc.), feel free to put that area here. -->
|
1.0
|
|
non_process
|
| 0
|
65,446
| 8,809,431,351
|
IssuesEvent
|
2018-12-27 19:39:21
|
usds/us-forms-system
|
https://api.github.com/repos/usds/us-forms-system
|
closed
|
Update PRA wiki page to provide general guidance for anyone creating digital forms in the Federal govt
|
[practice] product [type] documentation [type] research
|
This is an amendment of an original issue (#78) created early in the project.
When creating online forms in the Federal govt, you must keep PRA in mind. PRA may or may not apply to your project. PRA will apply if you are materially changing the content of the form, such that you are collecting different information than already approved in the offline form (typically when you are creating *more* information).
The process of creating a *better* experience for users of online vs offline forms -- one of the purposes of this library -- may mean some questions are changed, removed, or even added, and therefore you must have a minmial working knowledge of PRA.
We should offer that to users of this library.
We have an existing wiki page called "PRA Implications" (https://github.com/usds/us-forms-system/wiki/PRA-Implications). I recommend changing the name of this page to something a bit more friendly, and also editing the content to ensure we are addressing the following:
* what is PRA and why do you need to know about it when creating online versions of offline forms
* where to go for specific questions regarding PRA in geneal
* where to go for super specific questions about your own form(s)
* examples of when PRA does and does not apply when creating online versions of offline forms
|
1.0
|
Update PRA wiki page to provide general guidance for anyone creating digital forms in the Federal govt - This is an amendment of an original issue (#78) created early in the project.
When creating online forms in the Federal govt, you must keep PRA in mind. PRA may or may not apply to your project. PRA will apply if you are materially changing the content of the form, such that you are collecting different information than already approved in the offline form (typically when you are creating *more* information).
The process of creating a *better* experience for users of online vs offline forms -- one of the purposes of this library -- may mean some questions are changed, removed, or even added, and therefore you must have a minmial working knowledge of PRA.
We should offer that to users of this library.
We have an existing wiki page called "PRA Implications" (https://github.com/usds/us-forms-system/wiki/PRA-Implications). I recommend changing the name of this page to something a bit more friendly, and also editing the content to ensure we are addressing the following:
* what is PRA and why do you need to know about it when creating online versions of offline forms
* where to go for specific questions regarding PRA in geneal
* where to go for super specific questions about your own form(s)
* examples of when PRA does and does not apply when creating online versions of offline forms
|
non_process
|
update pra wiki page to provide general guidance for anyone creating digital forms in the federal govt this is an amendment of an original issue created early in the project when creating online forms in the federal govt you must keep pra in mind pra may or may not apply to your project pra will apply if you are materially changing the content of the form such that you are collecting different information than already approved in the offline form typically when you are creating more information the process of creating a better experience for users of online vs offline forms one of the purposes of this library may mean some questions are changed removed or even added and therefore you must have a minmial working knowledge of pra we should offer that to users of this library we have an existing wiki page called pra implications i recommend changing the name of this page to something a bit more friendly and also editing the content to ensure we are addressing the following what is pra and why do you need to know about it when creating online versions of offline forms where to go for specific questions regarding pra in geneal where to go for super specific questions about your own form s examples of when pra does and does not apply when creating online versions of offline forms
| 0
|
33,279
| 12,198,440,354
|
IssuesEvent
|
2020-04-29 22:52:42
|
MicrosoftDocs/microsoft-365-docs
|
https://api.github.com/repos/MicrosoftDocs/microsoft-365-docs
|
closed
|
Microsoft Cloud App Security
|
security
|
Please note MCAS is only part of M365 E5.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: bab2c963-204e-0e1d-8b1e-5909f7ef4d7d
* Version Independent ID: 3229ebe7-858c-76e2-40c6-c98387f4a685
* Content: [Top 12 tasks for security teams to support working from home](https://docs.microsoft.com/en-us/microsoft-365/security/top-security-tasks-for-remote-work?view=o365-worldwide#feedback)
* Content Source: [microsoft-365/security/top-security-tasks-for-remote-work.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/security/top-security-tasks-for-remote-work.md)
* Service: **o365-seccomp**
* GitHub Login: @BrendaCarter
* Microsoft Alias: **bcarter**
|
True
|
Microsoft Cloud App Security - Please note MCAS is only part of M365 E5.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: bab2c963-204e-0e1d-8b1e-5909f7ef4d7d
* Version Independent ID: 3229ebe7-858c-76e2-40c6-c98387f4a685
* Content: [Top 12 tasks for security teams to support working from home](https://docs.microsoft.com/en-us/microsoft-365/security/top-security-tasks-for-remote-work?view=o365-worldwide#feedback)
* Content Source: [microsoft-365/security/top-security-tasks-for-remote-work.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/security/top-security-tasks-for-remote-work.md)
* Service: **o365-seccomp**
* GitHub Login: @BrendaCarter
* Microsoft Alias: **bcarter**
|
non_process
|
microsoft cloud app security please note mcas is only part of document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service seccomp github login brendacarter microsoft alias bcarter
| 0
|
127,713
| 5,038,685,809
|
IssuesEvent
|
2016-12-18 11:58:24
|
lyuich/brain
|
https://api.github.com/repos/lyuich/brain
|
opened
|
Ansibleでアプリケーションをデプロイ出来るようにする
|
ansible category: engineer infra priority: normal status: new type: feature
|
### Write a summary of this issue
- Ansibleでアプリケーションをデプロイ出来るようにする
### List the details of what to do as a list
- [ ] デプロイの手順をPlaybookにまとめる
- [ ] ansible-playbookコマンドで実行する
### Any references (Describe URL of documents or webpages)
- #52
|
1.0
|
Ansibleでアプリケーションをデプロイ出来るようにする - ### Write a summary of this issue
- Ansibleでアプリケーションをデプロイ出来るようにする
### List the details of what to do as a list
- [ ] デプロイの手順をPlaybookにまとめる
- [ ] ansible-playbookコマンドで実行する
### Any references (Describe URL of documents or webpages)
- #52
|
non_process
|
ansibleでアプリケーションをデプロイ出来るようにする write a summary of this issue ansibleでアプリケーションをデプロイ出来るようにする list the details of what to do as a list デプロイの手順をplaybookにまとめる ansible playbookコマンドで実行する any references describe url of documents or webpages
| 0
|
15,375
| 19,561,558,128
|
IssuesEvent
|
2022-01-03 16:54:05
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Studio CLI: `prisma studio` errors with `ENOENT: no such file or directory, open '/[...]/prisma/schema.prisma'` after opening when using a custom name for the Prisma schema file
|
bug/1-repro-available kind/bug process/candidate topic: prisma-client tech/typescript team/client topic: schema file
|
### Bug description
Related errors https://github.com/prisma/studio/issues/819 and https://github.com/prisma/prisma/issues/10936
errors with
```
ENOENT: no such file or directory, open '/[...]/repro/prisma/schema.prisma'
```
Even though the file name is `schema1.prisma`
### How to reproduce
- Rename a prisma schema to `schema1.prisma`
- run `npx prisma studio --schema prisma/schema1.prisma`
### Expected behavior
Passing --schema=prisma/schema1.prisma to provide a filename different than "schema.prisma" should work.
### Prisma information
```
datasource db {
provider = "postgres"
url = "[...]"
}
generator client {
provider = "prisma-client-js"
}
model User {
id Int @id
}
```
### Environment & setup
OS: Mac OS
Database: PostgreSQL
Node.js version: 14
### Prisma Version
```
prisma : 3.7.0
@prisma/client : 3.7.0
Current platform : darwin
Query Engine (Node-API) : libquery-engine 8746e055198f517658c08a0c426c7eec87f5a85f (at node_modules/@prisma/engines/libquery_engine-darwin.dylib.node)
Migration Engine : migration-engine-cli 8746e055198f517658c08a0c426c7eec87f5a85f (at node_modules/@prisma/engines/migration-engine-darwin)
Introspection Engine : introspection-core 8746e055198f517658c08a0c426c7eec87f5a85f (at node_modules/@prisma/engines/introspection-engine-darwin)
Format Binary : prisma-fmt 8746e055198f517658c08a0c426c7eec87f5a85f (at node_modules/@prisma/engines/prisma-fmt-darwin)
Default Engines Hash : 8746e055198f517658c08a0c426c7eec87f5a85f
Studio : 0.445.0
```
|
1.0
|
Studio CLI: `prisma studio` errors with `ENOENT: no such file or directory, open '/[...]/prisma/schema.prisma'` after opening when using a custom name for the Prisma schema file - ### Bug description
Related errors https://github.com/prisma/studio/issues/819 and https://github.com/prisma/prisma/issues/10936
errors with
```
ENOENT: no such file or directory, open '/[...]/repro/prisma/schema.prisma'
```
Even though the file name is `schema1.prisma`
### How to reproduce
- Rename a prisma schema to `schema1.prisma`
- run `npx prisma studio --schema prisma/schema1.prisma`
### Expected behavior
Passing --schema=prisma/schema1.prisma to provide a filename different than "schema.prisma" should work.
### Prisma information
```
datasource db {
provider = "postgres"
url = "[...]"
}
generator client {
provider = "prisma-client-js"
}
model User {
id Int @id
}
```
### Environment & setup
OS: Mac OS
Database: PostgreSQL
Node.js version: 14
### Prisma Version
```
prisma : 3.7.0
@prisma/client : 3.7.0
Current platform : darwin
Query Engine (Node-API) : libquery-engine 8746e055198f517658c08a0c426c7eec87f5a85f (at node_modules/@prisma/engines/libquery_engine-darwin.dylib.node)
Migration Engine : migration-engine-cli 8746e055198f517658c08a0c426c7eec87f5a85f (at node_modules/@prisma/engines/migration-engine-darwin)
Introspection Engine : introspection-core 8746e055198f517658c08a0c426c7eec87f5a85f (at node_modules/@prisma/engines/introspection-engine-darwin)
Format Binary : prisma-fmt 8746e055198f517658c08a0c426c7eec87f5a85f (at node_modules/@prisma/engines/prisma-fmt-darwin)
Default Engines Hash : 8746e055198f517658c08a0c426c7eec87f5a85f
Studio : 0.445.0
```
|
process
|
studio cli prisma studio errors with enoent no such file or directory open prisma schema prisma after opening when using a custom name for the prisma schema file bug description related errors and errors with enoent no such file or directory open repro prisma schema prisma even though the file name is prisma how to reproduce rename a prisma schema to prisma run npx prisma studio schema prisma prisma expected behavior passing schema prisma prisma to provide a filename different than schema prisma should work prisma information datasource db provider postgres url generator client provider prisma client js model user id int id environment setup os mac os database postgresql node js version prisma version prisma prisma client current platform darwin query engine node api libquery engine at node modules prisma engines libquery engine darwin dylib node migration engine migration engine cli at node modules prisma engines migration engine darwin introspection engine introspection core at node modules prisma engines introspection engine darwin format binary prisma fmt at node modules prisma engines prisma fmt darwin default engines hash studio
| 1
|
386,904
| 26,706,268,035
|
IssuesEvent
|
2023-01-27 18:27:38
|
john-amiscaray/Stir
|
https://api.github.com/repos/john-amiscaray/Stir
|
closed
|
Update README
|
documentation
|
- ~~Update the description of the product at the beginning of the README to something more appealing and straightforward.~~
- ~~Make it clear that the first example you show can be shortened with the newer features (element descriptors, templating). Might turn some people off seeing the verbose builder stuff going on~~
- ~~Just hype it up more at the beginning of the README~~
|
1.0
|
Update README - - ~~Update the description of the product at the beginning of the README to something more appealing and straightforward.~~
- ~~Make it clear that the first example you show can be shortened with the newer features (element descriptors, templating). Might turn some people off seeing the verbose builder stuff going on~~
- ~~Just hype it up more at the beginning of the README~~
|
non_process
|
update readme update the description of the product at the beginning of the readme to something more appealing and straightforward make it clear that the first example you show can be shortened with the newer features element descriptors templating might turn some people off seeing the verbose builder stuff going on just hype it up more at the beginning of the readme
| 0
|
9,428
| 12,418,502,547
|
IssuesEvent
|
2020-05-23 00:37:48
|
neuropoly/spinalcordtoolbox
|
https://api.github.com/repos/neuropoly/spinalcordtoolbox
|
closed
|
sct_process_segmentation producing incorrect line connections in QC figures
|
bug priority:HIGH sct_process_segmentation
|
Hi, thanks for reporting an issue, please take some time to consider the applicable guidelines:
https://github.com/neuropoly/spinalcordtoolbox/blob/master/CONTRIBUTING.rst#reporting-a-bug-or-requesting-a-feature
(about how to suitably describe issues or requests) prior to deleting this blurb when about to submit the issue.
### Description

Something seems wrong in the qc figures when I run program as follow:
sct_process_segmentation -i t2_seg.nii.gz -vert 3:7 -perslice 1 -o t2_csa_slice.csv -qc "/home/zmz/test
<Description of the issue>
### Steps to Reproduce
0. Install SCT <release or git revision and platform, you can use the
first lines of output of sct_check_dependencies>
1. <First Step>
2. <Second Step>
3. <and so on...>
**Expected behavior:** <What you expect to happen>
**Actual behavior:** <What actually happens>
|
1.0
|
sct_process_segmentation producing incorrect line connections in QC figures - Hi, thanks for reporting an issue, please take some time to consider the applicable guidelines:
https://github.com/neuropoly/spinalcordtoolbox/blob/master/CONTRIBUTING.rst#reporting-a-bug-or-requesting-a-feature
(about how to suitably describe issues or requests) prior to deleting this blurb when about to submit the issue.
### Description

Something seems wrong in the qc figures when I run program as follow:
sct_process_segmentation -i t2_seg.nii.gz -vert 3:7 -perslice 1 -o t2_csa_slice.csv -qc "/home/zmz/test
<Description of the issue>
### Steps to Reproduce
0. Install SCT <release or git revision and platform, you can use the
first lines of output of sct_check_dependencies>
1. <First Step>
2. <Second Step>
3. <and so on...>
**Expected behavior:** <What you expect to happen>
**Actual behavior:** <What actually happens>
|
process
|
sct process segmentation producing incorrect line connections in qc figures hi thanks for reporting an issue please take some time to consider the applicable guidelines about how to suitably describe issues or requests prior to deleting this blurb when about to submit the issue description something seems wrong in the qc figures when i run program as follow sct process segmentation i seg nii gz vert perslice o csa slice csv qc home zmz test steps to reproduce install sct release or git revision and platform you can use the first lines of output of sct check dependencies expected behavior actual behavior
| 1
|
83,694
| 3,640,832,526
|
IssuesEvent
|
2016-02-13 05:29:40
|
afollestad/polar-dashboard
|
https://api.github.com/repos/afollestad/polar-dashboard
|
opened
|
Issues when a lot of tabs are enabled in landscape
|
bug feature high priority
|
Tabs should go to the second line in portrait *if* there are a certain amount of tabs.
|
1.0
|
Issues when a lot of tabs are enabled in landscape - Tabs should go to the second line in portrait *if* there are a certain amount of tabs.
|
non_process
|
issues when a lot of tabs are enabled in landscape tabs should go to the second line in portrait if there are a certain amount of tabs
| 0
|
104,814
| 13,124,829,012
|
IssuesEvent
|
2020-08-06 05:02:05
|
ZcashFoundation/zebra
|
https://api.github.com/repos/ZcashFoundation/zebra
|
closed
|
Restrict zebrad commands that are actually available on release build
|
C-design Poll::Ready
|
We can:
* delete some commands, or
* move some commands to the utils crate

|
1.0
|
Restrict zebrad commands that are actually available on release build - We can:
* delete some commands, or
* move some commands to the utils crate

|
non_process
|
restrict zebrad commands that are actually available on release build we can delete some commands or move some commands to the utils crate
| 0
|
13,706
| 16,464,257,451
|
IssuesEvent
|
2021-05-22 04:31:02
|
neuropsychology/NeuroKit
|
https://api.github.com/repos/neuropsychology/NeuroKit
|
closed
|
Methods of Signal Decomposition (blind source separation)
|
inactive 👻 signal processing :chart_with_upwards_trend:
|
## Multichannel
- [x] Empirical Mode Decomposition (EMD): based on https://github.com/laszukdawid/PyEMD
- [ ] ICA
- [ ] PCA
See [sklearn.decomposition](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.decomposition).
## Single-channel
### EMD
- [x] Empirical Mode Decomposition (EMD): based on https://github.com/laszukdawid/PyEMD
### ICA-based
No Python implementation to my knowledge.
- [ ] Single-channel ICA (SCICA): [Davies, M. E., & James, C. J. (2007). Source separation using single channel ICA. Signal Processing, 87(8), 1819-1832.](https://www.sciencedirect.com/science/article/abs/pii/S0165168407000151)
- [ ] Ma's Method: [Ma, H. G., Jiang, Q. B., Liu, Z. Q., Liu, G., & Ma, Z. Y. (2010). A novel blind source separation method for single-channel signal. Signal processing, 90(12), 3232-3241.](https://www.sciencedirect.com/science/article/abs/pii/S0165168410002318)
- [ ] Lu's Method: [Lu, G., Xiao, M., Wei, P., & Zhang, H. (2015). A new method of blind source separation using single-channel ICA based on higher-order statistics. Mathematical Problems in Engineering, 2015.](https://www.hindawi.com/journals/mpe/2015/439264/)
### Other
- [ ] Singular spectrum analysis (SSA)-based signal separation method [(Del Pozo, 2015)](https://link.springer.com/chapter/10.1007/978-3-662-48324-4_3)
- [ ] Hilbert–Huang transform (HHT)-based signal separation method
|
1.0
|
Methods of Signal Decomposition (blind source separation) - ## Multichannel
- [x] Empirical Mode Decomposition (EMD): based on https://github.com/laszukdawid/PyEMD
- [ ] ICA
- [ ] PCA
See [sklearn.decomposition](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.decomposition).
## Single-channel
### EMD
- [x] Empirical Mode Decomposition (EMD): based on https://github.com/laszukdawid/PyEMD
### ICA-based
No Python implementation to my knowledge.
- [ ] Single-channel ICA (SCICA): [Davies, M. E., & James, C. J. (2007). Source separation using single channel ICA. Signal Processing, 87(8), 1819-1832.](https://www.sciencedirect.com/science/article/abs/pii/S0165168407000151)
- [ ] Ma's Method: [Ma, H. G., Jiang, Q. B., Liu, Z. Q., Liu, G., & Ma, Z. Y. (2010). A novel blind source separation method for single-channel signal. Signal processing, 90(12), 3232-3241.](https://www.sciencedirect.com/science/article/abs/pii/S0165168410002318)
- [ ] Lu's Method: [Lu, G., Xiao, M., Wei, P., & Zhang, H. (2015). A new method of blind source separation using single-channel ICA based on higher-order statistics. Mathematical Problems in Engineering, 2015.](https://www.hindawi.com/journals/mpe/2015/439264/)
### Other
- [ ] Singular spectrum analysis (SSA)-based signal separation method [(Del Pozo, 2015)](https://link.springer.com/chapter/10.1007/978-3-662-48324-4_3)
- [ ] Hilbert–Huang transform (HHT)-based signal separation method
|
process
|
methods of signal decomposition blind source separation multichannel empirical mode decomposition emd based on ica pca see single channel emd empirical mode decomposition emd based on ica based no python implementation to my knowledge single channel ica scica ma s method lu s method other singular spectrum analysis ssa based signal separation method hilbert–huang transform hht based signal separation method
| 1
|
122
| 2,552,603,215
|
IssuesEvent
|
2015-02-02 18:12:18
|
GsDevKit/gsApplicationTools
|
https://api.github.com/repos/GsDevKit/gsApplicationTools
|
opened
|
Need to be able to specify alternate location for gemstone.secret file in start/stop/restart scripts
|
in process
|
In internal testing I have a case where I need to run seaside tests as a different user than the default and editting default gemstone.secret file is not an option ... related to https://github.com/GsDevKit/gsDevKitHome/issues/17
Presumable an env varablei `GEMSTONE_ETCDIR` will fill the bill until we see what @krono comes up with for Issue #24
|
1.0
|
Need to be able to specify alternate location for gemstone.secret file in start/stop/restart scripts - In internal testing I have a case where I need to run seaside tests as a different user than the default and editting default gemstone.secret file is not an option ... related to https://github.com/GsDevKit/gsDevKitHome/issues/17
Presumable an env varablei `GEMSTONE_ETCDIR` will fill the bill until we see what @krono comes up with for Issue #24
|
process
|
need to be able to specify alternate location for gemstone secret file in start stop restart scripts in internal testing i have a case where i need to run seaside tests as a different user than the default and editting default gemstone secret file is not an option related to presumable an env varablei gemstone etcdir will fill the bill until we see what krono comes up with for issue
| 1
|
16,613
| 21,674,932,731
|
IssuesEvent
|
2022-05-08 15:03:26
|
bitPogo/kmock
|
https://api.github.com/repos/bitPogo/kmock
|
opened
|
Remove spyOn
|
enhancement kmock-processor kmock-gradle
|
## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
`spyOn` does not bring any value anymore due to the shared factory and may lead only to confusion and additional boilerplate.
Acceptance Criteria:
* Replace `spyOn` in the GradleExtension with a flag and propagate the change to the processor
|
1.0
|
Remove spyOn - ## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
`spyOn` does not bring any value anymore due to the shared factory and may lead only to confusion and additional boilerplate.
Acceptance Criteria:
* Replace `spyOn` in the GradleExtension with a flag and propagate the change to the processor
|
process
|
remove spyon description spyon does not bring any value anymore due to the shared factory and may lead only to confusion and additional boilerplate acceptance criteria replace spyon in the gradleextension with a flag and propagate the change to the processor
| 1
|
1,160
| 3,642,223,306
|
IssuesEvent
|
2016-02-14 05:58:02
|
BOWiki/BOW
|
https://api.github.com/repos/BOWiki/BOW
|
closed
|
Install pre-commit hook on repo
|
enhancement process system
|
Adding a pre-commit hook to the repo would ensure that all tests passed before submitting code to the system. As time passed, the number of checks would be expanded in order to have a cleaner codebase.
The first step is to install the [pre-commit](https://github.com/jish/pre-commit) gem in this repo
|
1.0
|
Install pre-commit hook on repo - Adding a pre-commit hook to the repo would ensure that all tests passed before submitting code to the system. As time passed, the number of checks would be expanded in order to have a cleaner codebase.
The first step is to install the [pre-commit](https://github.com/jish/pre-commit) gem in this repo
|
process
|
install pre commit hook on repo adding a pre commit hook to the repo would ensure that all tests passed before submitting code to the system as time passed the number of checks would be expanded in order to have a cleaner codebase the first step is to install the gem in this repo
| 1
|
19,795
| 26,178,265,257
|
IssuesEvent
|
2023-01-02 12:31:51
|
mdsreq-fga-unb/2022.2-Receitalista
|
https://api.github.com/repos/mdsreq-fga-unb/2022.2-Receitalista
|
closed
|
Adicionar Construção no Processo de Desenvolvimento
|
processo visao
|
Deve-se adicionar o tópico de Construção na parte do Processo de Desenvolvimento de Software com as respectivas atividades dele no documento Visão.
|
1.0
|
Adicionar Construção no Processo de Desenvolvimento - Deve-se adicionar o tópico de Construção na parte do Processo de Desenvolvimento de Software com as respectivas atividades dele no documento Visão.
|
process
|
adicionar construção no processo de desenvolvimento deve se adicionar o tópico de construção na parte do processo de desenvolvimento de software com as respectivas atividades dele no documento visão
| 1
|
21,552
| 29,866,198,872
|
IssuesEvent
|
2023-06-20 04:14:29
|
u4gbot/status.webodm.net
|
https://api.github.com/repos/u4gbot/status.webodm.net
|
closed
|
🛑 Processing Network (spark1) is down
|
status processing-network-spark1
|
In [`3377e52`](https://github.com/u4gbot/status.webodm.net/commit/3377e520e89b39ad7a3d32fead3ec908b51361c8
), Processing Network (spark1) (https://spark1.webodm.net) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
1.0
|
🛑 Processing Network (spark1) is down - In [`3377e52`](https://github.com/u4gbot/status.webodm.net/commit/3377e520e89b39ad7a3d32fead3ec908b51361c8
), Processing Network (spark1) (https://spark1.webodm.net) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
process
|
🛑 processing network is down in processing network was down http code response time ms
| 1
|
2,259
| 5,092,909,916
|
IssuesEvent
|
2017-01-03 00:51:18
|
tunnckoCore/ideas
|
https://api.github.com/repos/tunnckoCore/ideas
|
opened
|
sata - Rolldown + Buble + Start - bundler, compiler, scaffolder and etc
|
in process todo
|
More of a workflow.
- Rolldown - layer on top of Rollup for better config experience and allows presets
- Buble - batteries-included ESNext compiler, great combo with Rollup
- Start - dead simple tasks runner based on Promises + FP and allows presets
- JSTransformer - for templates and loading different resources - plugin for Rollup
- charlike - dead simple and fast streaming scaffolder
|
1.0
|
sata - Rolldown + Buble + Start - bundler, compiler, scaffolder and etc - More of a workflow.
- Rolldown - layer on top of Rollup for better config experience and allows presets
- Buble - batteries-included ESNext compiler, great combo with Rollup
- Start - dead simple tasks runner based on Promises + FP and allows presets
- JSTransformer - for templates and loading different resources - plugin for Rollup
- charlike - dead simple and fast streaming scaffolder
|
process
|
sata rolldown buble start bundler compiler scaffolder and etc more of a workflow rolldown layer on top of rollup for better config experience and allows presets buble batteries included esnext compiler great combo with rollup start dead simple tasks runner based on promises fp and allows presets jstransformer for templates and loading different resources plugin for rollup charlike dead simple and fast streaming scaffolder
| 1
|
124,292
| 4,894,480,859
|
IssuesEvent
|
2016-11-19 09:34:10
|
dhis2/maintenance-app
|
https://api.github.com/repos/dhis2/maintenance-app
|
closed
|
Saving attributes with optionSet requires a valueType to be selected first
|
bug priority:medium
|
> We are using DHIS 2.25 versions. Every option set has a value type and it is a mandatory field.
> When we create an attribute and assign an option set to it (without setting any value type for the attribute), the value type of option set is considered as the value type of the attribute. Also, looks like updating the value type of the attribute is not possible as it's greyed out once we assign an option set. when we try to save this attribute, we get an error saying - "Missing Required property - ValueType". Please find the screenshot below depicting the issue.
>
> However, what works is - we first assign a value type to the attribute (which is the same as the value type of option set), then assign the option set to the attribute. Then we're able to save the attribute.
|
1.0
|
Saving attributes with optionSet requires a valueType to be selected first - > We are using DHIS 2.25 versions. Every option set has a value type and it is a mandatory field.
> When we create an attribute and assign an option set to it (without setting any value type for the attribute), the value type of option set is considered as the value type of the attribute. Also, looks like updating the value type of the attribute is not possible as it's greyed out once we assign an option set. when we try to save this attribute, we get an error saying - "Missing Required property - ValueType". Please find the screenshot below depicting the issue.
>
> However, what works is - we first assign a value type to the attribute (which is the same as the value type of option set), then assign the option set to the attribute. Then we're able to save the attribute.
|
non_process
|
saving attributes with optionset requires a valuetype to be selected first we are using dhis versions every option set has a value type and it is a mandatory field when we create an attribute and assign an option set to it without setting any value type for the attribute the value type of option set is considered as the value type of the attribute also looks like updating the value type of the attribute is not possible as it s greyed out once we assign an option set when we try to save this attribute we get an error saying missing required property valuetype please find the screenshot below depicting the issue however what works is we first assign a value type to the attribute which is the same as the value type of option set then assign the option set to the attribute then we re able to save the attribute
| 0
|
15,700
| 10,337,762,233
|
IssuesEvent
|
2019-09-03 15:28:19
|
cityofaustin/transportation.austintexas.io
|
https://api.github.com/repos/cityofaustin/transportation.austintexas.io
|
closed
|
Sort project index alphabetically
|
Service: Dev Type: Enhancement Workgroup: DTS
|
https://transportation.austintexas.io/about/
Sort projects alphabetically instead of by descending issue number
|
1.0
|
Sort project index alphabetically - https://transportation.austintexas.io/about/
Sort projects alphabetically instead of by descending issue number
|
non_process
|
sort project index alphabetically sort projects alphabetically instead of by descending issue number
| 0