Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4,259 | 7,189,079,314 | IssuesEvent | 2018-02-02 12:41:22 | Great-Hill-Corporation/quickBlocks | https://api.github.com/repos/Great-Hill-Corporation/quickBlocks | closed | Geth tracing does not work like Parity, therefore monitors (which depend on tracing) don't work. | libs-etherlib status-inprocess type-bug | We will have to read Geth traces and massage them so they look exactly like Parity traces, but I've pushed this out to release 0.7.0. | 1.0 | Geth tracing does not work like Parity, therefore monitors (which depend on tracing) don't work. - We will have to read Geth traces and massage them so they look exactly like Parity traces, but I've pushed this out to release 0.7.0. | process | geth tracing does not work like parity therefore monitors which depend on tracing don t work we will have to read geth traces and massage them so they look exactly like parity traces but i ve pushed this out to release | 1 |
114,295 | 4,628,223,719 | IssuesEvent | 2016-09-28 03:01:12 | newfs/gobotany-app | https://api.github.com/repos/newfs/gobotany-app | closed | Images on species page not enlargable on Android | bug Priority B | This went unnoticed until recently. A user visiting a species page using an Android device, isn't able to expand any of the plant images at the top of the page. The thumbnails don't respond to touch. | 1.0 | Images on species page not enlargable on Android - This went unnoticed until recently. A user visiting a species page using an Android device, isn't able to expand any of the plant images at the top of the page. The thumbnails don't respond to touch. | non_process | images on species page not enlargable on android this went unnoticed until recently a user visiting a species page using an android device isn t able to expand any of the plant images at the top of the page the thumbnails don t respond to touch | 0 |
10,570 | 13,382,736,672 | IssuesEvent | 2020-09-02 09:15:31 | prisma/migrate | https://api.github.com/repos/prisma/migrate | closed | Command: migrate save Error: No such table: mySchema._Migration | bug/1-repro-available kind/regression process/candidate | <!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
Prisma Version: 2.6.0
PostgreSQL: 12.x
Start from scratch (Prisma Migrate)
after set my Model, just run the command: prisma migrate save --experimental
SET the DEBUG=*, The full Error Output:
```
MigrateEngine:stderr Sep 02 14:07:35.904 INFO migration_engine: Starting migration engine RPC server git_hash="650b5d0348ec38ae61e1e7db69bb54808418ede4" +0ms
MigrateEngine:stderr Sep 02 14:07:36.131 INFO quaint::single: Starting a postgresql connection. +223ms
MigrateEngine:rpc {
MigrateEngine:rpc jsonrpc: '2.0',
MigrateEngine:rpc error: {
MigrateEngine:rpc code: 4466,
MigrateEngine:rpc message: 'An error happened. Check the data field for details.',
MigrateEngine:rpc data: {
MigrateEngine:rpc is_panic: false,
MigrateEngine:rpc message: 'Failure during a migration command: Connector error. (error: Error querying the database: No such table: ercm._Migration\n' +
MigrateEngine:rpc ' 0: migration_core::api::ListMigrations\n' +
MigrateEngine:rpc ' at migration-engine/core/src/api.rs:118)',
MigrateEngine:rpc backtrace: null
MigrateEngine:rpc }
MigrateEngine:rpc },
MigrateEngine:rpc id: 1
MigrateEngine:rpc } +380ms
Error: Error: Failure during a migration command: Connector error. (error: Error querying the database: No such table: ercm._Migration
0: migration_core::api::ListMigrations
at migration-engine/core/src/api.rs:118)
```
## How to reproduce
Just from a sample .e.g [Start from scratch (Prisma Migrate)](https://www.prisma.io/docs/getting-started/setup-prisma/start-from-scratch-prisma-migrate-typescript-postgres)
run command: prisma migrate save --experimental
## Expected behavior
I encountered such a situation when I upgraded to 2.6 today. The previous version did not happen
| 1.0 | Command: migrate save Error: No such table: mySchema._Migration - <!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
Prisma Version: 2.6.0
PostgreSQL: 12.x
Start from scratch (Prisma Migrate)
after set my Model, just run the command: prisma migrate save --experimental
SET the DEBUG=*, The full Error Output:
```
MigrateEngine:stderr Sep 02 14:07:35.904 INFO migration_engine: Starting migration engine RPC server git_hash="650b5d0348ec38ae61e1e7db69bb54808418ede4" +0ms
MigrateEngine:stderr Sep 02 14:07:36.131 INFO quaint::single: Starting a postgresql connection. +223ms
MigrateEngine:rpc {
MigrateEngine:rpc jsonrpc: '2.0',
MigrateEngine:rpc error: {
MigrateEngine:rpc code: 4466,
MigrateEngine:rpc message: 'An error happened. Check the data field for details.',
MigrateEngine:rpc data: {
MigrateEngine:rpc is_panic: false,
MigrateEngine:rpc message: 'Failure during a migration command: Connector error. (error: Error querying the database: No such table: ercm._Migration\n' +
MigrateEngine:rpc ' 0: migration_core::api::ListMigrations\n' +
MigrateEngine:rpc ' at migration-engine/core/src/api.rs:118)',
MigrateEngine:rpc backtrace: null
MigrateEngine:rpc }
MigrateEngine:rpc },
MigrateEngine:rpc id: 1
MigrateEngine:rpc } +380ms
Error: Error: Failure during a migration command: Connector error. (error: Error querying the database: No such table: ercm._Migration
0: migration_core::api::ListMigrations
at migration-engine/core/src/api.rs:118)
```
## How to reproduce
Just from a sample .e.g [Start from scratch (Prisma Migrate)](https://www.prisma.io/docs/getting-started/setup-prisma/start-from-scratch-prisma-migrate-typescript-postgres)
run command: prisma migrate save --experimental
## Expected behavior
I encountered such a situation when I upgraded to 2.6 today. The previous version did not happen
| process | command migrate save error no such table myschema migration thanks for helping us improve prisma 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by setting the debug environment variable and enabling additional logging output in prisma client learn more about writing proper bug reports here bug description prisma version postgresql x start from scratch prisma migrate after set my model just run the command prisma migrate save experimental set the debug the full error output migrateengine stderr sep info migration engine starting migration engine rpc server git hash migrateengine stderr sep info quaint single starting a postgresql connection migrateengine rpc migrateengine rpc jsonrpc migrateengine rpc error migrateengine rpc code migrateengine rpc message an error happened check the data field for details migrateengine rpc data migrateengine rpc is panic false migrateengine rpc message failure during a migration command connector error error error querying the database no such table ercm migration n migrateengine rpc migration core api listmigrations n migrateengine rpc at migration engine core src api rs migrateengine rpc backtrace null migrateengine rpc migrateengine rpc migrateengine rpc id migrateengine rpc error error failure during a migration command connector error error error querying the database no such table ercm migration migration core api listmigrations at migration engine core src api rs how to reproduce just from a sample e g run command prisma migrate save experimental expected behavior i encountered such a situation when i upgraded to today the previous version did not happen | 1 |
132,125 | 10,731,718,941 | IssuesEvent | 2019-10-28 20:11:27 | kubernetes/minikube | https://api.github.com/repos/kubernetes/minikube | closed | CI isn't adding test results to PR's | area/testing | I see the tests running in Jenkins, but it isn't being added to the PR.
It's as if `minikube_set_pending.sh` isn't being run.
/cc @medyagh @sharifelgamal | 1.0 | CI isn't adding test results to PR's - I see the tests running in Jenkins, but it isn't being added to the PR.
It's as if `minikube_set_pending.sh` isn't being run.
/cc @medyagh @sharifelgamal | non_process | ci isn t adding test results to pr s i see the tests running in jenkins but it isn t being added to the pr it s as if minikube set pending sh isn t being run cc medyagh sharifelgamal | 0 |
19,009 | 25,010,038,103 | IssuesEvent | 2022-11-03 14:39:30 | MicrosoftDocs/windows-dev-docs | https://api.github.com/repos/MicrosoftDocs/windows-dev-docs | closed | [Windows 11] `ms-settings:privacy-backgroundapps` do nothing and users need to edit manually the registry in some cases | uwp/prod processes-and-threading/tech Pri1 | Hello,
Since Windows 11, this URI does nothing: `ms-settings:privacy-backgroundapps`. Is it expected ? If yes, the documentation page does not indicate the depreciation.
No matter the answer, this is an issue for Windows 11 users because the global Background Apps setting page is no more available on Windows 11 and the Advanced Settings of each application don't always include the Background Apps/tasks options. In Windows 10, the users can disable globally the Background Apps and when the migration to Windows 11 occurs - the global settings page is gone so there is no more possibility to reactivate the Background Apps. When the option is globally disabled, the advanced settings page of each application don't display the Backgroud Tasks settings...
I can understand the deprecation of the global page and associated option, but the Windows 11 setup/migration should reactivate the thing automatically to prevent user confusion. Today, I need to indicates to my users to open the registry and remove the key `GlobalUserDisabled` under `HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\BackgroundAccessApplications` *and* to restart.
This is may good for advanced users but not for all. And this is not a good user experience at all :-)
According to received feedbacks from my users, this mainly occurs when users migrate to Windows 11 from Windows 10 but maybe a clean installation has this issue as well in some cases.
I create this issue here because the documentation is not up-to-date but maybe the Windows Team is also required to fix all aspect of this issue in Windows itself. I created this Feedback Hub issue too: https://aka.ms/AAhozet
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 987ec16c-9456-93a4-177a-dbd563be7eb7
* Version Independent ID: f41f0344-f7f6-f092-a6bf-fc4184a9b460
* Content: [Launch the Windows Settings app - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/launch-settings-app#privacy)
* Content Source: [windows-apps-src/launch-resume/launch-settings-app.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/launch-settings-app.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @alvinashcraft
* Microsoft Alias: **aashcraft** | 1.0 | [Windows 11] `ms-settings:privacy-backgroundapps` do nothing and users need to edit manually the registry in some cases - Hello,
Since Windows 11, this URI does nothing: `ms-settings:privacy-backgroundapps`. Is it expected ? If yes, the documentation page does not indicate the depreciation.
No matter the answer, this is an issue for Windows 11 users because the global Background Apps setting page is no more available on Windows 11 and the Advanced Settings of each application don't always include the Background Apps/tasks options. In Windows 10, the users can disable globally the Background Apps and when the migration to Windows 11 occurs - the global settings page is gone so there is no more possibility to reactivate the Background Apps. When the option is globally disabled, the advanced settings page of each application don't display the Backgroud Tasks settings...
I can understand the deprecation of the global page and associated option, but the Windows 11 setup/migration should reactivate the thing automatically to prevent user confusion. Today, I need to indicates to my users to open the registry and remove the key `GlobalUserDisabled` under `HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\BackgroundAccessApplications` *and* to restart.
This is may good for advanced users but not for all. And this is not a good user experience at all :-)
According to received feedbacks from my users, this mainly occurs when users migrate to Windows 11 from Windows 10 but maybe a clean installation has this issue as well in some cases.
I create this issue here because the documentation is not up-to-date but maybe the Windows Team is also required to fix all aspect of this issue in Windows itself. I created this Feedback Hub issue too: https://aka.ms/AAhozet
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 987ec16c-9456-93a4-177a-dbd563be7eb7
* Version Independent ID: f41f0344-f7f6-f092-a6bf-fc4184a9b460
* Content: [Launch the Windows Settings app - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/launch-settings-app#privacy)
* Content Source: [windows-apps-src/launch-resume/launch-settings-app.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/launch-settings-app.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @alvinashcraft
* Microsoft Alias: **aashcraft** | process | ms settings privacy backgroundapps do nothing and users need to edit manually the registry in some cases hello since windows this uri does nothing ms settings privacy backgroundapps is it expected if yes the documentation page does not indicate the depreciation no matter the answer this is an issue for windows users because the global background apps setting page is no more available on windows and the advanced settings of each application don t always include the background apps tasks options in windows the users can disable globally the background apps and when the migration to windows occurs the global settings page is gone so there is no more possibility to reactivate the background apps when the option is globally disabled the advanced settings page of each application don t display the backgroud tasks settings i can understand the deprecation of the global page and associated option but the windows setup migration should reactivate the thing automatically to prevent user confusion today i need to indicates to my users to open the registry and remove the key globaluserdisabled under hkey current user software microsoft windows currentversion backgroundaccessapplications and to restart this is may good for advanced users but not for all and this is not a good user experience at all according to received feedbacks from my users this mainly occurs when users migrate to windows from windows but maybe a clean installation has this issue as well in some cases i create this issue here because the documentation is not up to date but maybe the windows team is also required to fix all aspect of this issue in windows itself i created this feedback hub issue too document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product uwp technology processes and threading github login alvinashcraft microsoft alias aashcraft | 1 |
576,783 | 17,094,640,473 | IssuesEvent | 2021-07-08 23:14:36 | CHOMPStation2/CHOMPStation2 | https://api.github.com/repos/CHOMPStation2/CHOMPStation2 | closed | Mapping support needed for the change to conveyor belts | High Priority Map Edit | #### Brief description of the issue
Due to https://github.com/CHOMPStation2/CHOMPStation2/pull/2441, conveyor belts set to diagonal directions are pointing in the wrong directions, commonly problematic at the mining base that relies on it to process materials.
#### What you expected to happen
For conveyor belts to work before the PR merge.
#### What actually happened
Conveyor belts are pointing in the wrong direction if they were set to diagonals.
#### Steps to reproduce
- Step 1 - Go to Mining on Sif
- Step 2 - Check conveyor belts
- Step 3 - Laugh/Cry
#### Code Revision
Server revision: B:-Using TGS- D:-Using TGS-
Commit: 7438d0f486dcc60bafc39dd0e8266db5b55194fd
TGS version: 4.11.1
DMAPI version: 5.3.0
#### Anything else you may wish to add:
- This generally affects all conveyor belts mapped in, just blocks mining from their job.
| 1.0 | Mapping support needed for the change to conveyor belts - #### Brief description of the issue
Due to https://github.com/CHOMPStation2/CHOMPStation2/pull/2441, conveyor belts set to diagonal directions are pointing in the wrong directions, commonly problematic at the mining base that relies on it to process materials.
#### What you expected to happen
For conveyor belts to work before the PR merge.
#### What actually happened
Conveyor belts are pointing in the wrong direction if they were set to diagonals.
#### Steps to reproduce
- Step 1 - Go to Mining on Sif
- Step 2 - Check conveyor belts
- Step 3 - Laugh/Cry
#### Code Revision
Server revision: B:-Using TGS- D:-Using TGS-
Commit: 7438d0f486dcc60bafc39dd0e8266db5b55194fd
TGS version: 4.11.1
DMAPI version: 5.3.0
#### Anything else you may wish to add:
- This generally affects all conveyor belts mapped in, just blocks mining from their job.
| non_process | mapping support needed for the change to conveyor belts brief description of the issue due to conveyor belts set to diagonal directions are pointing in the wrong directions commonly problematic at the mining base that relies on it to process materials what you expected to happen for conveyor belts to work before the pr merge what actually happened conveyor belts are pointing in the wrong direction if they were set to diagonals steps to reproduce step go to mining on sif step check conveyor belts step laugh cry code revision server revision b using tgs d using tgs commit tgs version dmapi version anything else you may wish to add this generally affects all conveyor belts mapped in just blocks mining from their job | 0 |
124,347 | 16,603,028,447 | IssuesEvent | 2021-06-01 22:25:50 | microsoft/TypeScript | https://api.github.com/repos/microsoft/TypeScript | closed | [Typescript] References to methods/properties of class are not recognized inside a function which is bound to the class using .bind(this). | Design Limitation | <!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- Use Help > Report Issue to prefill these. -->
- VSCode Version: 1.40
- OS Version: Windows 10
Steps to Reproduce:
1. Create a typescript file with following content.
```typescript
class Test {
constructor() {
this.logMessage('Initialized'); // Shows up in 'find all references' results
Promise.resolve('Testing...')
.then(function(msg) {
this.logMessage(msg); // Doesn't show up in 'find all references' results
}.bind(this));
}
private logMessage(msg: string) {
console.log(msg);
}
}
```
2. Right click on `logMessage` method definition and select 'find all references'. It should recognize all the references to this method. However, reference inside the callback passed to then is not recognized.
PS. I always use arrow functions in callbacks, but I now have to work with a code base which has a lot of callbacks with bind(this) written by previous developers.
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
| 1.0 | [Typescript] References to methods/properties of class are not recognized inside a function which is bound to the class using .bind(this). - <!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- Use Help > Report Issue to prefill these. -->
- VSCode Version: 1.40
- OS Version: Windows 10
Steps to Reproduce:
1. Create a typescript file with following content.
```typescript
class Test {
constructor() {
this.logMessage('Initialized'); // Shows up in 'find all references' results
Promise.resolve('Testing...')
.then(function(msg) {
this.logMessage(msg); // Doesn't show up in 'find all references' results
}.bind(this));
}
private logMessage(msg: string) {
console.log(msg);
}
}
```
2. Right click on `logMessage` method definition and select 'find all references'. It should recognize all the references to this method. However, reference inside the callback passed to then is not recognized.
PS. I always use arrow functions in callbacks, but I now have to work with a code base which has a lot of callbacks with bind(this) written by previous developers.
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
| non_process | references to methods properties of class are not recognized inside a function which is bound to the class using bind this report issue to prefill these vscode version os version windows steps to reproduce create a typescript file with following content typescript class test constructor this logmessage initialized shows up in find all references results promise resolve testing then function msg this logmessage msg doesn t show up in find all references results bind this private logmessage msg string console log msg right click on logmessage method definition and select find all references it should recognize all the references to this method however reference inside the callback passed to then is not recognized ps i always use arrow functions in callbacks but i now have to work with a code base which has a lot of callbacks with bind this written by previous developers does this issue occur when all extensions are disabled yes | 0 |
5,874 | 8,696,365,716 | IssuesEvent | 2018-12-04 17:17:20 | emacs-ess/ESS | https://api.github.com/repos/emacs-ess/ESS | closed | Note: Variable binding depth exceeds max-specpdl-size | literate process:eval | Hi all,
in ESS 16.10 "ess-eval-chunk" failed when a "space" followed the "=" at the beginning of a code chunk "<<>>= ":
generate-new-buffer: Variable binding depth exceeds max-specpdl-size
or
preview-clearout: Variable binding depth exceeds max-specpdl-size
Interestingly, marking the code and calling "ess-eval-region" works as expected. Deleting the space "<<>>=" and calling again "ess-eval-chunk" works.
Maybe this is of interest, Sven | 1.0 | Note: Variable binding depth exceeds max-specpdl-size - Hi all,
in ESS 16.10 "ess-eval-chunk" failed when a "space" followed the "=" at the beginning of a code chunk "<<>>= ":
generate-new-buffer: Variable binding depth exceeds max-specpdl-size
or
preview-clearout: Variable binding depth exceeds max-specpdl-size
Interestingly, marking the code and calling "ess-eval-region" works as expected. Deleting the space "<<>>=" and calling again "ess-eval-chunk" works.
Maybe this is of interest, Sven | process | note variable binding depth exceeds max specpdl size hi all in ess ess eval chunk failed when a space followed the at the beginning of a code chunk generate new buffer variable binding depth exceeds max specpdl size or preview clearout variable binding depth exceeds max specpdl size interestingly marking the code and calling ess eval region works as expected deleting the space and calling again ess eval chunk works maybe this is of interest sven | 1 |
215,408 | 7,293,961,284 | IssuesEvent | 2018-02-25 19:14:27 | gyrocode/jquery-datatables-checkboxes | https://api.github.com/repos/gyrocode/jquery-datatables-checkboxes | reopened | Can't use user-select event for selectAll checkbox | Priority: High Type: Bug | Hello there,
There is a problem when trying to block selecting a row using Datatables 'user-select' event
I made a [jsFiddle](https://jsfiddle.net/xqvwpc68/1/).
Try to click on Angelica Ramos.
After that, click on select All checkbox from header, and.. Angelica is selected.
I tried to put a disabled checkbox for Angelica, but same result ([here](https://jsfiddle.net/xqvwpc68/2/))
Otherwise great work.
Thanks | 1.0 | Can't use user-select event for selectAll checkbox - Hello there,
There is a problem when trying to block selecting a row using Datatables 'user-select' event
I made a [jsFiddle](https://jsfiddle.net/xqvwpc68/1/).
Try to click on Angelica Ramos.
After that, click on select All checkbox from header, and.. Angelica is selected.
I tried to put a disabled checkbox for Angelica, but same result ([here](https://jsfiddle.net/xqvwpc68/2/))
Otherwise great work.
Thanks | non_process | can t use user select event for selectall checkbox hello there there is a problem when trying to block selecting a row using datatables user select event i made a try to click on angelica ramos after that click on select all checkbox from header and angelica is selected i tried to put a disabled checkbox for angelica but same result otherwise great work thanks | 0 |
21,668 | 30,111,447,795 | IssuesEvent | 2023-06-30 08:06:15 | nephio-project/sig-release | https://api.github.com/repos/nephio-project/sig-release | closed | Define scheme for release versioning and release Cycle | area/process-mgmt sig/release | Define the scheme for release versioning and cycles and document them. We need to define
- Release Cadence ( How often we need to make a release? )
- What is the k8s version to test and support
- Length of support for a release
- Versioning scheme (x.y.z)
- Hot fixes/back porting mechanism | 1.0 | Define scheme for release versioning and release Cycle - Define the scheme for release versioning and cycles and document them. We need to define
- Release Cadence ( How often we need to make a release? )
- What is the k8s version to test and support
- Length of support for a release
- Versioning scheme (x.y.z)
- Hot fixes/back porting mechanism | process | define scheme for release versioning and release cycle define the scheme for release versioning and cycles and document them we need to define release cadence how often we need to make a release what is the version to test and support length of support for a release versioning scheme x y z hot fixes back porting mechanism | 1 |
240,189 | 26,254,332,250 | IssuesEvent | 2023-01-05 22:33:25 | samq-wsdemo/apache-roller | https://api.github.com/repos/samq-wsdemo/apache-roller | opened | CVE-2021-41184 (Medium) detected in jquery-ui-1.12.1.jar | security vulnerability | ## CVE-2021-41184 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-ui-1.12.1.jar</b></p></summary>
<p>WebJar for jQuery UI</p>
<p>Library home page: <a href="http://webjars.org">http://webjars.org</a></p>
<p>Path to dependency file: /app/pom.xml</p>
<p>Path to vulnerable library: /canner/.m2/repository/org/webjars/jquery-ui/1.12.1/jquery-ui-1.12.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **jquery-ui-1.12.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery-UI is the official jQuery user interface library. Prior to version 1.13.0, accepting the value of the `of` option of the `.position()` util from untrusted sources may execute untrusted code. The issue is fixed in jQuery UI 1.13.0. Any string value passed to the `of` option is now treated as a CSS selector. A workaround is to not accept the value of the `of` option from untrusted sources.
<p>Publish Date: 2021-10-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-41184>CVE-2021-41184</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41184">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41184</a></p>
<p>Release Date: 2021-10-26</p>
<p>Fix Resolution: 1.13.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | True | CVE-2021-41184 (Medium) detected in jquery-ui-1.12.1.jar - ## CVE-2021-41184 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-ui-1.12.1.jar</b></p></summary>
<p>WebJar for jQuery UI</p>
<p>Library home page: <a href="http://webjars.org">http://webjars.org</a></p>
<p>Path to dependency file: /app/pom.xml</p>
<p>Path to vulnerable library: /canner/.m2/repository/org/webjars/jquery-ui/1.12.1/jquery-ui-1.12.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **jquery-ui-1.12.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery-UI is the official jQuery user interface library. Prior to version 1.13.0, accepting the value of the `of` option of the `.position()` util from untrusted sources may execute untrusted code. The issue is fixed in jQuery UI 1.13.0. Any string value passed to the `of` option is now treated as a CSS selector. A workaround is to not accept the value of the `of` option from untrusted sources.
<p>Publish Date: 2021-10-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-41184>CVE-2021-41184</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41184">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41184</a></p>
<p>Release Date: 2021-10-26</p>
<p>Fix Resolution: 1.13.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | non_process | cve medium detected in jquery ui jar cve medium severity vulnerability vulnerable library jquery ui jar webjar for jquery ui library home page a href path to dependency file app pom xml path to vulnerable library canner repository org webjars jquery ui jquery ui jar dependency hierarchy x jquery ui jar vulnerable library found in base branch master vulnerability details jquery ui is the official jquery user interface library prior to version accepting the value of the of option of the position util from untrusted sources may execute untrusted code the issue is fixed in jquery ui any string value passed to the of option is now treated as a css selector a workaround is to not accept the value of the of option from untrusted sources publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue | 0 |
10,116 | 13,044,162,220 | IssuesEvent | 2020-07-29 03:47:30 | tikv/tikv | https://api.github.com/repos/tikv/tikv | closed | UCP: Migrate scalar function `YearWeekWithoutMode` from TiDB | challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor |
## Description
Port the scalar function `YearWeekWithoutMode` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
| 2.0 | UCP: Migrate scalar function `YearWeekWithoutMode` from TiDB -
## Description
Port the scalar function `YearWeekWithoutMode` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
| process | ucp migrate scalar function yearweekwithoutmode from tidb description port the scalar function yearweekwithoutmode from tidb to coprocessor score mentor s sticnarf recommended skills rust programming learning materials already implemented expressions ported from tidb | 1 |
13,008 | 15,366,373,323 | IssuesEvent | 2021-03-02 01:10:52 | MineCake147E/Shamisen | https://api.github.com/repos/MineCake147E/Shamisen | opened | Unify the base infrastructure of `IAudioSource<TSample, TFormat>` and `IDataSource<TSample>`, and some improvements on foundation | Feature: Output 🔊 Feature: Signal Processing 🎛️ Feature: Utility 🧰 Kind: Enhancement 📈 Priority: Highest 🚀 Status: Working ▶️ | ## Motivation
While the two is closing each other, it's time for unifying them.
## Tasks
- [ ] Add the interfaces below:
- [ ] `IReadSupport<TSample>? ReadSupport {get;}`
- [ ] `IAsyncReadSupport<TSample>? AsyncReadSupport {get;}`
- [ ] Add the properties below in `IDataSource<TSample>`
- [ ] `ulong? Length { get; }`
- [ ] `ulong? TotalLength { get; }`
- [ ] `ulong? Position { get; }` instead of current `ulong Position { get; }`
- [ ] `ISkipSupport? SkipSupport { get; }`
- [ ] `ISeekSupport? SeekSupport { get; }`
- [ ] Replace `ReadResult Read(Span<TSample> destination)` with `IReadSupport<TSample>? ReadSupport {get;}` in `IDataSource<TSample>`
- [ ] Replace `ValueTask<ReadResult> ReadAsync(Memory<TSample> destination)` with `IAsyncReadSupport<TSample>? AsyncReadSupport {get;}` in `IDataSource<TSample>`
- [ ] Make `IAudioSource<TSample, TFormat>` based on `IDataSource<TSample>`
- [ ] Make `IReadableAudioSource<TSample, TFormat>` implement `IReadSupport<TSample>`
- [ ] Make `IAsyncReadableAudioSource<TSample, TFormat>` implement `IAsyncReadSupport<TSample>` | 1.0 | Unify the base infrastructure of `IAudioSource<TSample, TFormat>` and `IDataSource<TSample>`, and some improvements on foundation - ## Motivation
While the two is closing each other, it's time for unifying them.
## Tasks
- [ ] Add the interfaces below:
- [ ] `IReadSupport<TSample>? ReadSupport {get;}`
- [ ] `IAsyncReadSupport<TSample>? AsyncReadSupport {get;}`
- [ ] Add the properties below in `IDataSource<TSample>`
- [ ] `ulong? Length { get; }`
- [ ] `ulong? TotalLength { get; }`
- [ ] `ulong? Position { get; }` instead of current `ulong Position { get; }`
- [ ] `ISkipSupport? SkipSupport { get; }`
- [ ] `ISeekSupport? SeekSupport { get; }`
- [ ] Replace `ReadResult Read(Span<TSample> destination)` with `IReadSupport<TSample>? ReadSupport {get;}` in `IDataSource<TSample>`
- [ ] Replace `ValueTask<ReadResult> ReadAsync(Memory<TSample> destination)` with `IAsyncReadSupport<TSample>? AsyncReadSupport {get;}` in `IDataSource<TSample>`
- [ ] Make `IAudioSource<TSample, TFormat>` based on `IDataSource<TSample>`
- [ ] Make `IReadableAudioSource<TSample, TFormat>` implement `IReadSupport<TSample>`
- [ ] Make `IAsyncReadableAudioSource<TSample, TFormat>` implement `IAsyncReadSupport<TSample>` | process | unify the base infrastructure of iaudiosource and idatasource and some improvements on foundation motivation while the two is closing each other it s time for unifying them tasks add the interfaces below ireadsupport readsupport get iasyncreadsupport asyncreadsupport get add the properties below in idatasource ulong length get ulong totallength get ulong position get instead of current ulong position get iskipsupport skipsupport get iseeksupport seeksupport get replace readresult read span destination with ireadsupport readsupport get in idatasource replace valuetask readasync memory destination with iasyncreadsupport asyncreadsupport get in idatasource make iaudiosource based on idatasource make ireadableaudiosource implement ireadsupport make iasyncreadableaudiosource implement iasyncreadsupport | 1 |
2,404 | 5,193,150,432 | IssuesEvent | 2017-01-22 16:31:48 | raphym/Simulation_of_message_routing_by_intelligent_agents | https://api.github.com/repos/raphym/Simulation_of_message_routing_by_intelligent_agents | opened | ScanHotspot change | being processed | I have to change the function scanHotspot to be generic with input vector and the result into the output vector | 1.0 | ScanHotspot change - I have to change the function scanHotspot to be generic with input vector and the result into the output vector | process | scanhotspot change i have to change the function scanhotspot to be generic with input vector and the result into the output vector | 1 |
4,303 | 4,289,170,407 | IssuesEvent | 2016-07-17 22:54:45 | translate/pootle | https://api.github.com/repos/translate/pootle | closed | Give sprites a version hash | important performance ui | currently the sprite.png is always loaded from the same place. This makes cache-forever impossible. It would be possible if we give the sprite a unique hash that changes when the sprite has changed. | True | Give sprites a version hash - currently the sprite.png is always loaded from the same place. This makes cache-forever impossible. It would be possible if we give the sprite a unique hash that changes when the sprite has changed. | non_process | give sprites a version hash currently the sprite png is always loaded from the same place this makes cache forever impossible it would be possible if we give the sprite a unique hash that changes when the sprite has changed | 0 |
7,361 | 10,509,176,032 | IssuesEvent | 2019-09-27 10:19:40 | prisma/studio | https://api.github.com/repos/prisma/studio | opened | Implement standalone version | kind/feature process/candidate | It's very cumbersome to need a `prisma2 dev` command running in the CLI all the time.
The same as Visual Studio Code allows you to just do `code .` in the terminal, I would like the same for studio: `studio .`, which opens the Electron App. | 1.0 | Implement standalone version - It's very cumbersome to need a `prisma2 dev` command running in the CLI all the time.
The same as Visual Studio Code allows you to just do `code .` in the terminal, I would like the same for studio: `studio .`, which opens the Electron App. | process | implement standalone version it s very cumbersome to need a dev command running in the cli all the time the same as visual studio code allows you to just do code in the terminal i would like the same for studio studio which opens the electron app | 1 |
389,675 | 26,828,971,451 | IssuesEvent | 2023-02-02 14:47:59 | NetAppDocs/storagegrid-116 | https://api.github.com/repos/NetAppDocs/storagegrid-116 | closed | Additional Clarity On Steps | documentation | Page: [Investigate lost objects](https://docs.netapp.com/us-en/storagegrid-116/monitor/investigating-lost-objects.html)
Customer has requested additional clarity on steps to specify which node type should be accessed.
For the audit log messages a admin node and storage node for the ade search | 1.0 | Additional Clarity On Steps - Page: [Investigate lost objects](https://docs.netapp.com/us-en/storagegrid-116/monitor/investigating-lost-objects.html)
Customer has requested additional clarity on steps to specify which node type should be accessed.
For the audit log messages a admin node and storage node for the ade search | non_process | additional clarity on steps page customer has requested additional clarity on steps to specify which node type should be accessed for the audit log messages a admin node and storage node for the ade search | 0 |
2,125 | 4,965,267,263 | IssuesEvent | 2016-12-04 06:45:34 | symfony/symfony | https://api.github.com/repos/symfony/symfony | closed | Process setTty does not test for device existence | Bug Process Status: Reviewed | ```
[Symfony\Component\Debug\Exception\ContextErrorException]
Warning: proc_open(/dev/tty): failed to open stream: No such device or address
```
Caught this when I was running the builtin webserver in docker and have not enabled tty.
To replicate run `server:run -v` against 7cc6161
``` bash
cd project
PROJECT_ROOT=./ docker-compose up dev
```
``` yaml
# docker-compose.yml
version: '2'
services:
dev:
build: .
command: bash -c "php composer.phar install -o --prefer-dist && app/console -v server:run 0.0.0.0:8000"
ports:
- "8000:8000"
volumes:
- $PROJECT_ROOT:/code
# tty: true
```
```
# Dockerfile
FROM php:5.6-fpm
RUN apt-get update && apt-get install -y \
git libicu-dev zlib1g-dev libxslt-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN docker-php-ext-install pdo_mysql intl zip xsl pcntl && \
pecl install apcu-4.0.11 && \
docker-php-ext-enable apcu
RUN echo "date.timezone = Asia/Manila" >> /usr/local/etc/php/php.ini
WORKDIR /code
```
| 1.0 | Process setTty does not test for device existence - ```
[Symfony\Component\Debug\Exception\ContextErrorException]
Warning: proc_open(/dev/tty): failed to open stream: No such device or address
```
Caught this when I was running the builtin webserver in docker and have not enabled tty.
To replicate run `server:run -v` against 7cc6161
``` bash
cd project
PROJECT_ROOT=./ docker-compose up dev
```
``` yaml
# docker-compose.yml
version: '2'
services:
dev:
build: .
command: bash -c "php composer.phar install -o --prefer-dist && app/console -v server:run 0.0.0.0:8000"
ports:
- "8000:8000"
volumes:
- $PROJECT_ROOT:/code
# tty: true
```
```
# Dockerfile
FROM php:5.6-fpm
RUN apt-get update && apt-get install -y \
git libicu-dev zlib1g-dev libxslt-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN docker-php-ext-install pdo_mysql intl zip xsl pcntl && \
pecl install apcu-4.0.11 && \
docker-php-ext-enable apcu
RUN echo "date.timezone = Asia/Manila" >> /usr/local/etc/php/php.ini
WORKDIR /code
```
| process | process settty does not test for device existence warning proc open dev tty failed to open stream no such device or address caught this when i was running the builtin webserver in docker and have not enabled tty to replicate run server run v against bash cd project project root docker compose up dev yaml docker compose yml version services dev build command bash c php composer phar install o prefer dist app console v server run ports volumes project root code tty true dockerfile from php fpm run apt get update apt get install y git libicu dev dev libxslt dev no install recommends rm rf var lib apt lists run docker php ext install pdo mysql intl zip xsl pcntl pecl install apcu docker php ext enable apcu run echo date timezone asia manila usr local etc php php ini workdir code | 1 |
22,338 | 30,963,484,051 | IssuesEvent | 2023-08-08 06:43:35 | threefoldtech/vbuilders | https://api.github.com/repos/threefoldtech/vbuilders | closed | Hard dependency on Redis | type_bug process_wontfix | When running the builder for mycelium for example this panics:
```log
panic: Osal has hard dependency to redis!
``` | 1.0 | Hard dependency on Redis - When running the builder for mycelium for example this panics:
```log
panic: Osal has hard dependency to redis!
``` | process | hard dependency on redis when running the builder for mycelium for example this panics log panic osal has hard dependency to redis | 1 |
33,708 | 16,083,547,540 | IssuesEvent | 2021-04-26 08:31:43 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | Refactor internal org.jooq.impl.Function to require Field rather than QueryPart parameters in constructor | C: Performance E: All Editions P: Medium R: Wontfix T: Enhancement | `Function` is used internally in a number of places, especially when invoking the equivalent of varargs functions in SQL. The constructors declare a `QueryPart...` vararg parameter, but it would be better if this could be changed to `Field...` so that the SQL rendering can use `visit(<arg>)` when rendering SQL rather than `visit(DSL.field("{0}", <arg>))`. This requires refactoring of `DSL#groupingSets()`. | True | Refactor internal org.jooq.impl.Function to require Field rather than QueryPart parameters in constructor - `Function` is used internally in a number of places, especially when invoking the equivalent of varargs functions in SQL. The constructors declare a `QueryPart...` vararg parameter, but it would be better if this could be changed to `Field...` so that the SQL rendering can use `visit(<arg>)` when rendering SQL rather than `visit(DSL.field("{0}", <arg>))`. This requires refactoring of `DSL#groupingSets()`. | non_process | refactor internal org jooq impl function to require field rather than querypart parameters in constructor function is used internally in a number of places especially when invoking the equivalent of varargs functions in sql the constructors declare a querypart vararg parameter but it would be better if this could be changed to field so that the sql rendering can use visit when rendering sql rather than visit dsl field this requires refactoring of dsl groupingsets | 0 |
4,892 | 2,565,159,074 | IssuesEvent | 2015-02-07 02:54:02 | twosigma/beaker-notebook | https://api.github.com/repos/twosigma/beaker-notebook | opened | double click on file tree should open | enhancement Priority 1 | if you single click on mix.bkr it copies that name to the text entry field.
add double click to also then open.

| 1.0 | double click on file tree should open - if you single click on mix.bkr it copies that name to the text entry field.
add double click to also then open.

| non_process | double click on file tree should open if you single click on mix bkr it copies that name to the text entry field add double click to also then open | 0 |
125,387 | 17,836,144,083 | IssuesEvent | 2021-09-03 01:32:50 | kapseliboi/sqlpad | https://api.github.com/repos/kapseliboi/sqlpad | opened | CVE-2021-37701 (High) detected in tar-4.4.13.tgz | security vulnerability | ## CVE-2021-37701 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.13.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.13.tgz">https://registry.npmjs.org/tar/-/tar-4.4.13.tgz</a></p>
<p>Path to dependency file: sqlpad/server/package.json</p>
<p>Path to vulnerable library: sqlpad/server/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- odbc-2.3.6.tgz (Root Library)
- node-pre-gyp-0.14.0.tgz
- :x: **tar-4.4.13.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.16, 5.0.8, and 6.1.7 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory, where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems. The cache checking logic used both `\` and `/` characters as path separators, however `\` is a valid filename character on posix systems. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. Additionally, a similar confusion could arise on case-insensitive filesystems. If a tar archive contained a directory at `FOO`, followed by a symbolic link named `foo`, then on case-insensitive file systems, the creation of the symbolic link would remove the directory from the filesystem, but _not_ from the internal directory cache, as it would not be treated as a cache hit. A subsequent file entry within the `FOO` directory would then be placed in the target of the symbolic link, thinking that the directory had already been created. These issues were addressed in releases 4.4.16, 5.0.8 and 6.1.7. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-9r2w-394v-53qc.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37701>CVE-2021-37701</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc">https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution: tar - 4.4.16, 5.0.8, 6.1.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-37701 (High) detected in tar-4.4.13.tgz - ## CVE-2021-37701 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.13.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.13.tgz">https://registry.npmjs.org/tar/-/tar-4.4.13.tgz</a></p>
<p>Path to dependency file: sqlpad/server/package.json</p>
<p>Path to vulnerable library: sqlpad/server/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- odbc-2.3.6.tgz (Root Library)
- node-pre-gyp-0.14.0.tgz
- :x: **tar-4.4.13.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.16, 5.0.8, and 6.1.7 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory, where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems. The cache checking logic used both `\` and `/` characters as path separators, however `\` is a valid filename character on posix systems. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. Additionally, a similar confusion could arise on case-insensitive filesystems. If a tar archive contained a directory at `FOO`, followed by a symbolic link named `foo`, then on case-insensitive file systems, the creation of the symbolic link would remove the directory from the filesystem, but _not_ from the internal directory cache, as it would not be treated as a cache hit. A subsequent file entry within the `FOO` directory would then be placed in the target of the symbolic link, thinking that the directory had already been created. These issues were addressed in releases 4.4.16, 5.0.8 and 6.1.7. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-9r2w-394v-53qc.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37701>CVE-2021-37701</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc">https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution: tar - 4.4.16, 5.0.8, 6.1.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file sqlpad server package json path to vulnerable library sqlpad server node modules tar package json dependency hierarchy odbc tgz root library node pre gyp tgz x tar tgz vulnerable library found in base branch master vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems the cache checking logic used both and characters as path separators however is a valid filename character on posix systems by first creating a directory and then replacing that directory with a symlink it was thus possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite additionally a similar confusion could arise on case insensitive filesystems if a tar archive contained a directory at foo followed by a symbolic link named foo then on case insensitive file systems the creation of the symbolic link would remove the directory from the filesystem but not from the internal directory cache as it would not be treated as a cache hit a subsequent file entry within the foo directory would then be placed in the target of the symbolic link thinking that the directory had already been created these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend you update to a more recent version of node tar if this is not possible a workaround is available in the referenced ghsa publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar step up your open source security game with whitesource | 0 |
4,289 | 7,190,727,291 | IssuesEvent | 2018-02-02 18:18:50 | parcel-bundler/parcel | https://api.github.com/repos/parcel-bundler/parcel | closed | 🙋CSS-only asset compilation | #Feature CSS Preprocessing Good First Issue Help Wanted | <!--- Provide a general summary of the issue in the title above -->
I'm using parcel as the bundler for my `scss` files, which works great! (Massive kudos to all contributors, I'm **very** impressed with the speed and ease of use)
However, I was wondering whether it would be possible to use parcel as a CSS-bundler without having it produce `js`-files as well? The website I'm creating is **not** a SPA, which means I'd like to use the generated CSS without having to include it through JavaScript or a generated HTML file.
### 🎛 Configuration
Parcel version: `1.5.1`
CLI command: `parcel watch assets/style.scss -d public`
### 🤔 Expected Behavior
Running `parcel watch assets/style.scss -d public` could produce the following files:
```
public/style.css
public/style.map (Scss sourcemaps)
```
### 😯 Current Behavior
Running `parcel watch assets/style.scss -d public`produces the following files:
```
public/style.css
public/style.js
public/style.map (JS sourcemaps)
```
### 🌍 Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 1.5.1 |
| Node | 8.1.3 |
| npm/Yarn | 1.3.2 |
| Operating System | macOS High Sierra |
| 1.0 | 🙋CSS-only asset compilation - <!--- Provide a general summary of the issue in the title above -->
I'm using parcel as the bundler for my `scss` files, which works great! (Massive kudos to all contributors, I'm **very** impressed with the speed and ease of use)
However, I was wondering whether it would be possible to use parcel as a CSS-bundler without having it produce `js`-files as well? The website I'm creating is **not** a SPA, which means I'd like to use the generated CSS without having to include it through JavaScript or a generated HTML file.
### 🎛 Configuration
Parcel version: `1.5.1`
CLI command: `parcel watch assets/style.scss -d public`
### 🤔 Expected Behavior
Running `parcel watch assets/style.scss -d public` could produce the following files:
```
public/style.css
public/style.map (Scss sourcemaps)
```
### 😯 Current Behavior
Running `parcel watch assets/style.scss -d public`produces the following files:
```
public/style.css
public/style.js
public/style.map (JS sourcemaps)
```
### 🌍 Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 1.5.1 |
| Node | 8.1.3 |
| npm/Yarn | 1.3.2 |
| Operating System | macOS High Sierra |
| process | 🙋css only asset compilation i m using parcel as the bundler for my scss files which works great massive kudos to all contributors i m very impressed with the speed and ease of use however i was wondering whether it would be possible to use parcel as a css bundler without having it produce js files as well the website i m creating is not a spa which means i d like to use the generated css without having to include it through javascript or a generated html file 🎛 configuration parcel version cli command parcel watch assets style scss d public 🤔 expected behavior running parcel watch assets style scss d public could produce the following files public style css public style map scss sourcemaps 😯 current behavior running parcel watch assets style scss d public produces the following files public style css public style js public style map js sourcemaps 🌍 your environment software version s parcel node npm yarn operating system macos high sierra | 1 |
74,689 | 25,262,624,793 | IssuesEvent | 2022-11-16 00:32:42 | openzfs/zfs | https://api.github.com/repos/openzfs/zfs | opened | linux6.0.7-rt-xanmod does not build ZFS module | Type: Defect | <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Void Linux
Distribution Version | Rolling release
Kernel Version | 6.0.8
Architecture |x86_64
OpenZFS Version |2.1.6
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
Xanmod rt versions do not compile OpenZFS module, non rt versions compile normally, the same problem occurs on Ubuntu.
### Describe how to reproduce the problem
In Void Linux you need to use this repository to create kernel:
https://notabug.org/Marcoapc/voidxanmodK
On Ubuntu you need to follow the steps from the official Xanmod website:
https://xanmod.org/
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
**Building DKMS module: zfs-2.1.6... FAILED!**
**Generating kernel module dependency lists... done.**
**Executing post-install kernel hook: 20-initramfs ...**
**grep: warning: stray \ before /**
**grep: warning: stray \ before /**
**dracut-install: Failed to find module 'zfs'**
**dracut: FAILED: /usr/lib/dracut/dracut-install -D /var/tmp/dracut.b6n1va/initramfs --kerneldir /lib/modules/6.0.7-rt11-xanmod1_1/ -m zfs**
| 1.0 | linux6.0.7-rt-xanmod does not build ZFS module - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Void Linux
Distribution Version | Rolling release
Kernel Version | 6.0.8
Architecture |x86_64
OpenZFS Version |2.1.6
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
Xanmod rt versions do not compile OpenZFS module, non rt versions compile normally, the same problem occurs on Ubuntu.
### Describe how to reproduce the problem
In Void Linux you need to use this repository to create kernel:
https://notabug.org/Marcoapc/voidxanmodK
On Ubuntu you need to follow the steps from the official Xanmod website:
https://xanmod.org/
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
**Building DKMS module: zfs-2.1.6... FAILED!**
**Generating kernel module dependency lists... done.**
**Executing post-install kernel hook: 20-initramfs ...**
**grep: warning: stray \ before /**
**grep: warning: stray \ before /**
**dracut-install: Failed to find module 'zfs'**
**dracut: FAILED: /usr/lib/dracut/dracut-install -D /var/tmp/dracut.b6n1va/initramfs --kerneldir /lib/modules/6.0.7-rt11-xanmod1_1/ -m zfs**
| non_process | rt xanmod does not build zfs module thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name void linux distribution version rolling release kernel version architecture openzfs version command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing xanmod rt versions do not compile openzfs module non rt versions compile normally the same problem occurs on ubuntu describe how to reproduce the problem in void linux you need to use this repository to create kernel on ubuntu you need to follow the steps from the official xanmod website include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with building dkms module zfs failed generating kernel module dependency lists done executing post install kernel hook initramfs grep warning stray before grep warning stray before dracut install failed to find module zfs dracut failed usr lib dracut dracut install d var tmp dracut initramfs kerneldir lib modules m zfs | 0 |
155,481 | 5,956,141,695 | IssuesEvent | 2017-05-28 14:06:52 | siteorigin/siteorigin-panels | https://api.github.com/repos/siteorigin/siteorigin-panels | closed | Attributes: Custom CSS stopping responsive CSS from outputting correctly | bug priority-1 | Row level Custom CSS is stopping responsive grid CSS from outputting as required, thereby breaking responsive behaviour.
Reminder: @gregpriday we looked at this in Slack with a users JSON layout.
_Generating this code `@media (max-width: 768px){;font-size: 0.5em }`_ | 1.0 | Attributes: Custom CSS stopping responsive CSS from outputting correctly - Row level Custom CSS is stopping responsive grid CSS from outputting as required, thereby breaking responsive behaviour.
Reminder: @gregpriday we looked at this in Slack with a users JSON layout.
_Generating this code `@media (max-width: 768px){;font-size: 0.5em }`_ | non_process | attributes custom css stopping responsive css from outputting correctly row level custom css is stopping responsive grid css from outputting as required thereby breaking responsive behaviour reminder gregpriday we looked at this in slack with a users json layout generating this code media max width font size | 0 |
544,110 | 15,889,790,885 | IssuesEvent | 2021-04-10 13:00:28 | bigbluebutton/bigbluebutton | https://api.github.com/repos/bigbluebutton/bigbluebutton | closed | Ability to "raise hand" when calling into a BigBlueButton conference via PSTN | module: audio priority: low type: enhancement | Originally reported on Google Code with ID 1269
```
A community member asked if it was possible to have a "raise hand" type capability for
users only connected via PSTN. See
https://groups.google.com/group/bigbluebutton-users/browse_thread/thread/52f644bac8af3458
```
Reported by `ffdixon` on 2012-07-18 11:27:29
| 1.0 | Ability to "raise hand" when calling into a BigBlueButton conference via PSTN - Originally reported on Google Code with ID 1269
```
A community member asked if it was possible to have a "raise hand" type capability for
users only connected via PSTN. See
https://groups.google.com/group/bigbluebutton-users/browse_thread/thread/52f644bac8af3458
```
Reported by `ffdixon` on 2012-07-18 11:27:29
| non_process | ability to raise hand when calling into a bigbluebutton conference via pstn originally reported on google code with id a community member asked if it was possible to have a raise hand type capability for users only connected via pstn see reported by ffdixon on | 0 |
5,209 | 7,979,318,089 | IssuesEvent | 2018-07-17 21:16:41 | Great-Hill-Corporation/quickBlocks | https://api.github.com/repos/Great-Hill-Corporation/quickBlocks | closed | Testing for three-level cache when searching for Ethereum data | apps-all libs-etherlib status-inprocess tools-all type-enhancement type-question | Using the command `getBlock 1001001` only as an example (this would apply to all tools), the system should work like this:
```
if (the block is found in the quickBlocks cache)
return the block to the user
else if (the there is a locally running node)
retrieve the block from the node
store the block in quickBlocks' cache
return the block to the user
else if (a fallback server is configured -- i.e. infura)
retrieve the block from the node
store the block in quickBlocks' cache
return the block to the user
```
What this means is that, if a fallback node is configured (I'm debating if it should be configured by default), then the test cases should all work identically no matter where the data is being pulled from.
To test this we want to run the test cases with:
1. **No local node running and no configured fallback and no cache**
-- Should fail with a good message
-- What happens when the ~/.quickBlocks folder does not exist?
-- Does the ~/.quickBlocks folder get created?
2. **Local node >> not << running with a configured fallback and no cache (possible default)**
-- The first time it runs is will be slow
-- The ./quickBlocks folder and the cache should be created
-- The second time it runs it should be significantly faster
3. **Without removing the cache, run with a local node running**
-- No change in data or speed
-- Some test cases may speed up because no round trip to a remote (overloaded) server.
4. **Remove the cache, leave the local node running**
-- No change in data
-- First run slower, second run faster
-- Cache should be re-created
At this point, we have a locally running node and a local quickBlocks cache. This is the fastest mode and the optimal configuration. Now we
5. **Turn off the locally running node**
-- Should work without fail -- same data
-- Should pick up the data quickly from the cache
-- Some tests, that hit the node such as isContract, may be slower
6. **Remove cache (and disable it so it doesn't get re-created)**
-- Should run fine from the fallback node
I'm not sure how this would be tested, and I don't think we have to test it continually (that is we don't have to automate this testing).
| 1.0 | Testing for three-level cache when searching for Ethereum data - Using the command `getBlock 1001001` only as an example (this would apply to all tools), the system should work like this:
```
if (the block is found in the quickBlocks cache)
return the block to the user
else if (the there is a locally running node)
retrieve the block from the node
store the block in quickBlocks' cache
return the block to the user
else if (a fallback server is configured -- i.e. infura)
retrieve the block from the node
store the block in quickBlocks' cache
return the block to the user
```
What this means is that, if a fallback node is configured (I'm debating if it should be configured by default), then the test cases should all work identically no matter where the data is being pulled from.
To test this we want to run the test cases with:
1. **No local node running and no configured fallback and no cache**
-- Should fail with a good message
-- What happens when the ~/.quickBlocks folder does not exist?
-- Does the ~/.quickBlocks folder get created?
2. **Local node >> not << running with a configured fallback and no cache (possible default)**
-- The first time it runs is will be slow
-- The ./quickBlocks folder and the cache should be created
-- The second time it runs it should be significantly faster
3. **Without removing the cache, run with a local node running**
-- No change in data or speed
-- Some test cases may speed up because no round trip to a remote (overloaded) server.
4. **Remove the cache, leave the local node running**
-- No change in data
-- First run slower, second run faster
-- Cache should be re-created
At this point, we have a locally running node and a local quickBlocks cache. This is the fastest mode and the optimal configuration. Now we
5. **Turn off the locally running node**
-- Should work without fail -- same data
-- Should pick up the data quickly from the cache
-- Some tests, that hit the node such as isContract, may be slower
6. **Remove cache (and disable it so it doesn't get re-created)**
-- Should run fine from the fallback node
I'm not sure how this would be tested, and I don't think we have to test it continually (that is we don't have to automate this testing).
| process | testing for three level cache when searching for ethereum data using the command getblock only as an example this would apply to all tools the system should work like this if the block is found in the quickblocks cache return the block to the user else if the there is a locally running node retrieve the block from the node store the block in quickblocks cache return the block to the user else if a fallback server is configured i e infura retrieve the block from the node store the block in quickblocks cache return the block to the user what this means is that if a fallback node is configured i m debating if it should be configured by default then the test cases should all work identically no matter where the data is being pulled from to test this we want to run the test cases with no local node running and no configured fallback and no cache should fail with a good message what happens when the quickblocks folder does not exist does the quickblocks folder get created local node not running with a configured fallback and no cache possible default the first time it runs is will be slow the quickblocks folder and the cache should be created the second time it runs it should be significantly faster without removing the cache run with a local node running no change in data or speed some test cases may speed up because no round trip to a remote overloaded server remove the cache leave the local node running no change in data first run slower second run faster cache should be re created at this point we have a locally running node and a local quickblocks cache this is the fastest mode and the optimal configuration now we turn off the locally running node should work without fail same data should pick up the data quickly from the cache some tests that hit the node such as iscontract may be slower remove cache and disable it so it doesn t get re created should run fine from the fallback node i m not sure how this would be tested and i don t think we have to test it continually that is we don t have to automate this testing | 1 |
267,917 | 23,332,664,513 | IssuesEvent | 2022-08-09 07:11:52 | dynolab/networkts | https://api.github.com/repos/dynolab/networkts | closed | Test AR vs. VAR on toy model of a VAR(3) process with and without seasonality | test | - [x] Create toy model generator for VAR processes and put it into networkts/toy_models.py
- [x] Create test "AR vs. VAR" on VAR(3) process
- [x] Create test "AR vs. VAR" on VAR(3) process augmented with seasonality (e.g., add lag 20)
- [x] Put both tests into tests/ directory
For both tests, demonstrate in-sample and out-of-sample forecasts and demonstrate that a VAR forecast is better in any case | 1.0 | Test AR vs. VAR on toy model of a VAR(3) process with and without seasonality - - [x] Create toy model generator for VAR processes and put it into networkts/toy_models.py
- [x] Create test "AR vs. VAR" on VAR(3) process
- [x] Create test "AR vs. VAR" on VAR(3) process augmented with seasonality (e.g., add lag 20)
- [x] Put both tests into tests/ directory
For both tests, demonstrate in-sample and out-of-sample forecasts and demonstrate that a VAR forecast is better in any case | non_process | test ar vs var on toy model of a var process with and without seasonality create toy model generator for var processes and put it into networkts toy models py create test ar vs var on var process create test ar vs var on var process augmented with seasonality e g add lag put both tests into tests directory for both tests demonstrate in sample and out of sample forecasts and demonstrate that a var forecast is better in any case | 0 |
14,664 | 17,786,558,783 | IssuesEvent | 2021-08-31 11:48:49 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | I think this term should be modified or deleted GO:0085018 (maintenance of symbiont-containing vacuole by host) | obsoletion ready multi-species process | Please provide as much information as you can:
* **GO term ID and Label**
GO:0085018 (maintenance of symbiont-containing vacuole by host)
* **Reason for deprecation**
I am wagering that the parasite is hijacking these functions--I am convinced that this is not an "evolved" function for the benefit of maintaining the parasites in the comfort to which they have become acquainted.
* **"Replace by" term (ID and label)**
If all annotations can safely be moved to that term
* **"Consider" term(s) (ID and label)**
Suggestions for reannotation
* **Are there annotations to this term?**
- How many EXP: There are 14 annotations to this term: Aquaporin-1 from various mammalian species makes up 13 of these. The other is a protein SEY1 homolog from Dictyostelium discoideum. Researchers studied Legionella in them and found some host proteins that were necessary to maintain the pathogen-containing vacuole in the slime mold.
* **Are there mappings and cross references to this term? (InterPro, Keywords; check QuickGO cross-references section)**
* **Is this term in a subset? (check the AmiGO page for that term)**
* **Any other information**
| 1.0 | I think this term should be modified or deleted GO:0085018 (maintenance of symbiont-containing vacuole by host) - Please provide as much information as you can:
* **GO term ID and Label**
GO:0085018 (maintenance of symbiont-containing vacuole by host)
* **Reason for deprecation**
I am wagering that the parasite is hijacking these functions--I am convinced that this is not an "evolved" function for the benefit of maintaining the parasites in the comfort to which they have become acquainted.
* **"Replace by" term (ID and label)**
If all annotations can safely be moved to that term
* **"Consider" term(s) (ID and label)**
Suggestions for reannotation
* **Are there annotations to this term?**
- How many EXP: There are 14 annotations to this term: Aquaporin-1 from various mammalian species makes up 13 of these. The other is a protein SEY1 homolog from Dictyostelium discoideum. Researchers studied Legionella in them and found some host proteins that were necessary to maintain the pathogen-containing vacuole in the slime mold.
* **Are there mappings and cross references to this term? (InterPro, Keywords; check QuickGO cross-references section)**
* **Is this term in a subset? (check the AmiGO page for that term)**
* **Any other information**
| process | i think this term should be modified or deleted go maintenance of symbiont containing vacuole by host please provide as much information as you can go term id and label go maintenance of symbiont containing vacuole by host reason for deprecation i am wagering that the parasite is hijacking these functions i am convinced that this is not an evolved function for the benefit of maintaining the parasites in the comfort to which they have become acquainted replace by term id and label if all annotations can safely be moved to that term consider term s id and label suggestions for reannotation are there annotations to this term how many exp there are annotations to this term aquaporin from various mammalian species makes up of these the other is a protein homolog from dictyostelium discoideum researchers studied legionella in them and found some host proteins that were necessary to maintain the pathogen containing vacuole in the slime mold are there mappings and cross references to this term interpro keywords check quickgo cross references section is this term in a subset check the amigo page for that term any other information | 1 |
20,698 | 27,372,967,978 | IssuesEvent | 2023-02-28 02:00:08 | lizhihao6/get-daily-arxiv-noti | https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti | opened | New submissions for Mon, 27 Feb 23 | event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB | ## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Revisiting Modality Imbalance In Multimodal Pedestrian Detection
- **Authors:** Arindam Das, Sudip Das, Ganesh Sistu, Jonathan Horgan, Ujjwal Bhattacharya, Edward Jones, Martin Glavin, Ciarán Eising
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2302.12589
- **Pdf link:** https://arxiv.org/pdf/2302.12589
- **Abstract**
Multimodal learning, particularly for pedestrian detection, has recently received emphasis due to its capability to function equally well in several critical autonomous driving scenarios such as low-light, night-time, and adverse weather conditions. However, in most cases, the training distribution largely emphasizes the contribution of one specific input that makes the network biased towards one modality. Hence, the generalization of such models becomes a significant problem where the non-dominant input modality during training could be contributing more to the course of inference. Here, we introduce a novel training setup with regularizer in the multimodal architecture to resolve the problem of this disparity between the modalities. Specifically, our regularizer term helps to make the feature fusion method more robust by considering both the feature extractors equivalently important during the training to extract the multimodal distribution which is referred to as removing the imbalance problem. Furthermore, our decoupling concept of output stream helps the detection task by sharing the spatial sensitive information mutually. Extensive experiments of the proposed method on KAIST and UTokyo datasets shows improvement of the respective state-of-the-art performance.
### MesoGraph: Automatic Profiling of Malignant Mesothelioma Subtypes from Histological Images
- **Authors:** Mark Eastwood, Heba Sailem, Silviu Tudor, Xiaohong Gao, Judith Offman, Emmanouil Karteris, Angeles Montero Fernandez, Danny Jonigk, William Cookson, Miriam Moffatt, Sanjay Popat, Fayyaz Minhas, Jan Lukas Robertus
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2302.12653
- **Pdf link:** https://arxiv.org/pdf/2302.12653
- **Abstract**
Malignant mesothelioma is classified into three histological subtypes, Epithelioid, Sarcomatoid, and Biphasic according to the relative proportions of epithelioid and sarcomatoid tumor cells present. Biphasic tumors display significant populations of both cell types. This subtyping is subjective and limited by current diagnostic guidelines and can differ even between expert thoracic pathologists when characterising the continuum of relative proportions of epithelioid and sarcomatoid components using a three class system. In this work, we develop a novel dual-task Graph Neural Network (GNN) architecture with ranking loss to learn a model capable of scoring regions of tissue down to cellular resolution. This allows quantitative profiling of a tumor sample according to the aggregate sarcomatoid association score of all the cells in the sample. The proposed approach uses only core-level labels and frames the prediction task as a dual multiple instance learning (MIL) problem. Tissue is represented by a cell graph with both cell-level morphological and regional features. We use an external multi-centric test set from Mesobank, on which we demonstrate the predictive performance of our model. We validate our model predictions through an analysis of the typical morphological features of cells according to their predicted score, finding that some of the morphological differences identified by our model match known differences used by pathologists. We further show that the model score is predictive of patient survival with a hazard ratio of 2.30. The code for the proposed approach, along with the dataset, is available at: https://github.com/measty/MesoGraph.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Effect of Lossy Compression Algorithms on Face Image Quality and Recognition
- **Authors:** Torsten Schlett, Sebastian Schachner, Christian Rathgeb, Juan Tapia, Christoph Busch
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.12593
- **Pdf link:** https://arxiv.org/pdf/2302.12593
- **Abstract**
Lossy face image compression can degrade the image quality and the utility for the purpose of face recognition. This work investigates the effect of lossy image compression on a state-of-the-art face recognition model, and on multiple face image quality assessment models. The analysis is conducted over a range of specific image target sizes. Four compression types are considered, namely JPEG, JPEG 2000, downscaled PNG, and notably the new JPEG XL format. Frontal color images from the ColorFERET database were used in a Region Of Interest (ROI) variant and a portrait variant. We primarily conclude that JPEG XL allows for superior mean and worst case face recognition performance especially at lower target sizes, below approximately 5kB for the ROI variant, while there appears to be no critical advantage among the compression types at higher target sizes. Quality assessments from modern models correlate well overall with the compression effect on face recognition performance.
## Keyword: RAW
There is no result
## Keyword: raw image
There is no result
| 2.0 | New submissions for Mon, 27 Feb 23 - ## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Revisiting Modality Imbalance In Multimodal Pedestrian Detection
- **Authors:** Arindam Das, Sudip Das, Ganesh Sistu, Jonathan Horgan, Ujjwal Bhattacharya, Edward Jones, Martin Glavin, Ciarán Eising
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2302.12589
- **Pdf link:** https://arxiv.org/pdf/2302.12589
- **Abstract**
Multimodal learning, particularly for pedestrian detection, has recently received emphasis due to its capability to function equally well in several critical autonomous driving scenarios such as low-light, night-time, and adverse weather conditions. However, in most cases, the training distribution largely emphasizes the contribution of one specific input that makes the network biased towards one modality. Hence, the generalization of such models becomes a significant problem where the non-dominant input modality during training could be contributing more to the course of inference. Here, we introduce a novel training setup with regularizer in the multimodal architecture to resolve the problem of this disparity between the modalities. Specifically, our regularizer term helps to make the feature fusion method more robust by considering both the feature extractors equivalently important during the training to extract the multimodal distribution which is referred to as removing the imbalance problem. Furthermore, our decoupling concept of output stream helps the detection task by sharing the spatial sensitive information mutually. Extensive experiments of the proposed method on KAIST and UTokyo datasets shows improvement of the respective state-of-the-art performance.
### MesoGraph: Automatic Profiling of Malignant Mesothelioma Subtypes from Histological Images
- **Authors:** Mark Eastwood, Heba Sailem, Silviu Tudor, Xiaohong Gao, Judith Offman, Emmanouil Karteris, Angeles Montero Fernandez, Danny Jonigk, William Cookson, Miriam Moffatt, Sanjay Popat, Fayyaz Minhas, Jan Lukas Robertus
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2302.12653
- **Pdf link:** https://arxiv.org/pdf/2302.12653
- **Abstract**
Malignant mesothelioma is classified into three histological subtypes, Epithelioid, Sarcomatoid, and Biphasic according to the relative proportions of epithelioid and sarcomatoid tumor cells present. Biphasic tumors display significant populations of both cell types. This subtyping is subjective and limited by current diagnostic guidelines and can differ even between expert thoracic pathologists when characterising the continuum of relative proportions of epithelioid and sarcomatoid components using a three class system. In this work, we develop a novel dual-task Graph Neural Network (GNN) architecture with ranking loss to learn a model capable of scoring regions of tissue down to cellular resolution. This allows quantitative profiling of a tumor sample according to the aggregate sarcomatoid association score of all the cells in the sample. The proposed approach uses only core-level labels and frames the prediction task as a dual multiple instance learning (MIL) problem. Tissue is represented by a cell graph with both cell-level morphological and regional features. We use an external multi-centric test set from Mesobank, on which we demonstrate the predictive performance of our model. We validate our model predictions through an analysis of the typical morphological features of cells according to their predicted score, finding that some of the morphological differences identified by our model match known differences used by pathologists. We further show that the model score is predictive of patient survival with a hazard ratio of 2.30. The code for the proposed approach, along with the dataset, is available at: https://github.com/measty/MesoGraph.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Effect of Lossy Compression Algorithms on Face Image Quality and Recognition
- **Authors:** Torsten Schlett, Sebastian Schachner, Christian Rathgeb, Juan Tapia, Christoph Busch
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.12593
- **Pdf link:** https://arxiv.org/pdf/2302.12593
- **Abstract**
Lossy face image compression can degrade the image quality and the utility for the purpose of face recognition. This work investigates the effect of lossy image compression on a state-of-the-art face recognition model, and on multiple face image quality assessment models. The analysis is conducted over a range of specific image target sizes. Four compression types are considered, namely JPEG, JPEG 2000, downscaled PNG, and notably the new JPEG XL format. Frontal color images from the ColorFERET database were used in a Region Of Interest (ROI) variant and a portrait variant. We primarily conclude that JPEG XL allows for superior mean and worst case face recognition performance especially at lower target sizes, below approximately 5kB for the ROI variant, while there appears to be no critical advantage among the compression types at higher target sizes. Quality assessments from modern models correlate well overall with the compression effect on face recognition performance.
## Keyword: RAW
There is no result
## Keyword: raw image
There is no result
| process | new submissions for mon feb keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp revisiting modality imbalance in multimodal pedestrian detection authors arindam das sudip das ganesh sistu jonathan horgan ujjwal bhattacharya edward jones martin glavin ciarán eising subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract multimodal learning particularly for pedestrian detection has recently received emphasis due to its capability to function equally well in several critical autonomous driving scenarios such as low light night time and adverse weather conditions however in most cases the training distribution largely emphasizes the contribution of one specific input that makes the network biased towards one modality hence the generalization of such models becomes a significant problem where the non dominant input modality during training could be contributing more to the course of inference here we introduce a novel training setup with regularizer in the multimodal architecture to resolve the problem of this disparity between the modalities specifically our regularizer term helps to make the feature fusion method more robust by considering both the feature extractors equivalently important during the training to extract the multimodal distribution which is referred to as removing the imbalance problem furthermore our decoupling concept of output stream helps the detection task by sharing the spatial sensitive information mutually extensive experiments of the proposed method on kaist and utokyo datasets shows improvement of the respective state of the art performance mesograph automatic profiling of malignant mesothelioma subtypes from histological images authors mark eastwood heba sailem silviu tudor xiaohong gao judith offman emmanouil karteris angeles montero fernandez danny jonigk william cookson miriam moffatt sanjay popat fayyaz minhas jan lukas robertus subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract malignant mesothelioma is classified into three histological subtypes epithelioid sarcomatoid and biphasic according to the relative proportions of epithelioid and sarcomatoid tumor cells present biphasic tumors display significant populations of both cell types this subtyping is subjective and limited by current diagnostic guidelines and can differ even between expert thoracic pathologists when characterising the continuum of relative proportions of epithelioid and sarcomatoid components using a three class system in this work we develop a novel dual task graph neural network gnn architecture with ranking loss to learn a model capable of scoring regions of tissue down to cellular resolution this allows quantitative profiling of a tumor sample according to the aggregate sarcomatoid association score of all the cells in the sample the proposed approach uses only core level labels and frames the prediction task as a dual multiple instance learning mil problem tissue is represented by a cell graph with both cell level morphological and regional features we use an external multi centric test set from mesobank on which we demonstrate the predictive performance of our model we validate our model predictions through an analysis of the typical morphological features of cells according to their predicted score finding that some of the morphological differences identified by our model match known differences used by pathologists we further show that the model score is predictive of patient survival with a hazard ratio of the code for the proposed approach along with the dataset is available at keyword image signal processing there is no result keyword image signal process there is no result keyword compression effect of lossy compression algorithms on face image quality and recognition authors torsten schlett sebastian schachner christian rathgeb juan tapia christoph busch subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract lossy face image compression can degrade the image quality and the utility for the purpose of face recognition this work investigates the effect of lossy image compression on a state of the art face recognition model and on multiple face image quality assessment models the analysis is conducted over a range of specific image target sizes four compression types are considered namely jpeg jpeg downscaled png and notably the new jpeg xl format frontal color images from the colorferet database were used in a region of interest roi variant and a portrait variant we primarily conclude that jpeg xl allows for superior mean and worst case face recognition performance especially at lower target sizes below approximately for the roi variant while there appears to be no critical advantage among the compression types at higher target sizes quality assessments from modern models correlate well overall with the compression effect on face recognition performance keyword raw there is no result keyword raw image there is no result | 1 |
12,415 | 14,920,395,602 | IssuesEvent | 2021-01-23 04:30:17 | e4exp/paper_manager_abstract | https://api.github.com/repos/e4exp/paper_manager_abstract | opened | Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval | 2020 Natural Language Processing Question Answering | * https://arxiv.org/abs/2009.12756
* 2020
本研究では、複雑なオープンドメインの質問に答えるためのシンプルで効率的なマルチホップ密検索アプローチを提案し、HotpotQAとマルチエビデンスFEVERの2つのマルチホップデータセットで最先端の性能を達成した。
これまでの研究とは異なり、我々の手法は、文書間ハイパーリンクや人間の注釈付きエンティティマーカーなどのコーパス固有の情報へのアクセスを必要とせず、構造化されていないテキストコーパスにも適用可能である。
また、我々のシステムは、効率性と精度のトレードオフを大幅に改善し、HotpotQAで公表されている最高の精度と一致する一方で、推論時間が10倍速くなった。
| 1.0 | Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval - * https://arxiv.org/abs/2009.12756
* 2020
本研究では、複雑なオープンドメインの質問に答えるためのシンプルで効率的なマルチホップ密検索アプローチを提案し、HotpotQAとマルチエビデンスFEVERの2つのマルチホップデータセットで最先端の性能を達成した。
これまでの研究とは異なり、我々の手法は、文書間ハイパーリンクや人間の注釈付きエンティティマーカーなどのコーパス固有の情報へのアクセスを必要とせず、構造化されていないテキストコーパスにも適用可能である。
また、我々のシステムは、効率性と精度のトレードオフを大幅に改善し、HotpotQAで公表されている最高の精度と一致する一方で、推論時間が10倍速くなった。
| process | answering complex open domain questions with multi hop dense retrieval 本研究では、複雑なオープンドメインの質問に答えるためのシンプルで効率的なマルチホップ密検索アプローチを提案し、 。 これまでの研究とは異なり、我々の手法は、文書間ハイパーリンクや人間の注釈付きエンティティマーカーなどのコーパス固有の情報へのアクセスを必要とせず、構造化されていないテキストコーパスにも適用可能である。 また、我々のシステムは、効率性と精度のトレードオフを大幅に改善し、hotpotqaで公表されている最高の精度と一致する一方で、 。 | 1 |
17,397 | 23,213,782,369 | IssuesEvent | 2022-08-02 12:32:06 | googleapis/python-translate | https://api.github.com/repos/googleapis/python-translate | closed | Your .repo-metadata.json file has a problem 🤒 | type: process api: translate repo-metadata: lint | You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'translation' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | 1.0 | Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'translation' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | process | your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname translation invalid in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions | 1 |
734,400 | 25,347,779,457 | IssuesEvent | 2022-11-19 12:04:55 | GEWIS/gewisweb | https://api.github.com/repos/GEWIS/gewisweb | closed | Improve error message when adding courses | Type: Enhancement Priority: Low For: Backend | As per the e-mail from Koen, the error message(s) when adding courses are not very descriptive.
We can probably do something with the form validation to have proper error messages appear (like in almost every other module). | 1.0 | Improve error message when adding courses - As per the e-mail from Koen, the error message(s) when adding courses are not very descriptive.
We can probably do something with the form validation to have proper error messages appear (like in almost every other module). | non_process | improve error message when adding courses as per the e mail from koen the error message s when adding courses are not very descriptive we can probably do something with the form validation to have proper error messages appear like in almost every other module | 0 |
3,980 | 6,910,790,937 | IssuesEvent | 2017-11-28 04:33:35 | uccser/verto | https://api.github.com/repos/uccser/verto | opened | Change file path for interactive thumbnails | Django processor implementation update | Currently the thumbnail file path is `interactive-name/thumbnail.png`, but this either needs altering slightly to become `interactives/interactive-name/thumbnail.png` or the Verto user needs to be able to specify the file path for interactive thumbnails themselves (similar to how they can specify their own html templates). | 1.0 | Change file path for interactive thumbnails - Currently the thumbnail file path is `interactive-name/thumbnail.png`, but this either needs altering slightly to become `interactives/interactive-name/thumbnail.png` or the Verto user needs to be able to specify the file path for interactive thumbnails themselves (similar to how they can specify their own html templates). | process | change file path for interactive thumbnails currently the thumbnail file path is interactive name thumbnail png but this either needs altering slightly to become interactives interactive name thumbnail png or the verto user needs to be able to specify the file path for interactive thumbnails themselves similar to how they can specify their own html templates | 1 |
449,187 | 12,964,655,274 | IssuesEvent | 2020-07-20 20:52:20 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | m.youtube.com - see bug description | browser-firefox-mobile engine-gecko priority-critical | <!-- @browser: Firefox Mobile 79.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 6.0.1; Mobile; rv:79.0) Gecko/79.0 Firefox/79.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/55621 -->
**URL**: https://m.youtube.com/watch?v=2CCFq6x42O8
**Browser / Version**: Firefox Mobile 79.0
**Operating System**: Android 6.0.1
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: ads not disable
**Steps to Reproduce**:
Itu eu outro eu uti ir Ryu ir Ryu igreja Ryu ir Ryu ir eu ir eu ir Ryu
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200713203149</li><li>channel: beta</li><li>hasTouchScreen: true</li>
</ul>
</details>
Submitted in the name of `@cucu`
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | m.youtube.com - see bug description - <!-- @browser: Firefox Mobile 79.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 6.0.1; Mobile; rv:79.0) Gecko/79.0 Firefox/79.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/55621 -->
**URL**: https://m.youtube.com/watch?v=2CCFq6x42O8
**Browser / Version**: Firefox Mobile 79.0
**Operating System**: Android 6.0.1
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: ads not disable
**Steps to Reproduce**:
Itu eu outro eu uti ir Ryu ir Ryu igreja Ryu ir Ryu ir eu ir eu ir Ryu
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200713203149</li><li>channel: beta</li><li>hasTouchScreen: true</li>
</ul>
</details>
Submitted in the name of `@cucu`
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_process | m youtube com see bug description url browser version firefox mobile operating system android tested another browser yes other problem type something else description ads not disable steps to reproduce itu eu outro eu uti ir ryu ir ryu igreja ryu ir ryu ir eu ir eu ir ryu browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true submitted in the name of cucu from with ❤️ | 0 |
74,759 | 3,447,689,147 | IssuesEvent | 2015-12-16 02:02:58 | minj/foxtrick | https://api.github.com/repos/minj/foxtrick | closed | Chrome notifications don't work in Opera Next | accepted blocking bug Platform-Opera Priority-Low starred | Original [issue 1248](https://code.google.com/p/foxtrick/issues/detail?id=1248) created by [minj](mailto:4mr.minj@gmail.com) on 2014-10-21T11:03:43.000Z:
https://dev.opera.com/blog/web-notifications-in-opera-developer-24/ | 1.0 | Chrome notifications don't work in Opera Next - Original [issue 1248](https://code.google.com/p/foxtrick/issues/detail?id=1248) created by [minj](mailto:4mr.minj@gmail.com) on 2014-10-21T11:03:43.000Z:
https://dev.opera.com/blog/web-notifications-in-opera-developer-24/ | non_process | chrome notifications don t work in opera next original created by mailto minj gmail com on | 0 |
20,584 | 27,245,577,444 | IssuesEvent | 2023-02-22 01:32:30 | quark-engine/quark-engine | https://api.github.com/repos/quark-engine/quark-engine | closed | Prepare to release version v23.2.1 | work-in-progress issue-processing-state-06 | Update the version number in `__init__.py` for releasing the latest version of Quark.
In this version, the following changes will be included.
* #448
* #447
* #458
* #460
* #463
* #465 | 1.0 | Prepare to release version v23.2.1 - Update the version number in `__init__.py` for releasing the latest version of Quark.
In this version, the following changes will be included.
* #448
* #447
* #458
* #460
* #463
* #465 | process | prepare to release version update the version number in init py for releasing the latest version of quark in this version the following changes will be included | 1 |
1,620 | 4,235,791,833 | IssuesEvent | 2016-07-05 16:14:45 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Spawn process exit code: 3221225794 | child_process question windows | <!--
Thank you for reporting an issue. Please fill in the template below. If unsure
about something, just do as best as you're able.
Version: usually output of `node -v`
Platform: either `uname -a` output, or if Windows, version and 32 or 64-bit
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: 5.10.1
* **Platform**: Windows 10 - 64-bit
* **Subsystem**: child_process.spawn
<!-- Enter your issue details below this comment. -->
I am currently trying to spawn a C++ program. However, when I try to spawn it I get the above mentioned error code: 3221225794. When I try to start this executable from the command line it runs just fine. Does anybody have experience with this type of error code? I am kind of dumbfounded with this issue as it was working last Thursday. | 1.0 | Spawn process exit code: 3221225794 - <!--
Thank you for reporting an issue. Please fill in the template below. If unsure
about something, just do as best as you're able.
Version: usually output of `node -v`
Platform: either `uname -a` output, or if Windows, version and 32 or 64-bit
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: 5.10.1
* **Platform**: Windows 10 - 64-bit
* **Subsystem**: child_process.spawn
<!-- Enter your issue details below this comment. -->
I am currently trying to spawn a C++ program. However, when I try to spawn it I get the above mentioned error code: 3221225794. When I try to start this executable from the command line it runs just fine. Does anybody have experience with this type of error code? I am kind of dumbfounded with this issue as it was working last Thursday. | process | spawn process exit code thank you for reporting an issue please fill in the template below if unsure about something just do as best as you re able version usually output of node v platform either uname a output or if windows version and or bit subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform windows bit subsystem child process spawn i am currently trying to spawn a c program however when i try to spawn it i get the above mentioned error code when i try to start this executable from the command line it runs just fine does anybody have experience with this type of error code i am kind of dumbfounded with this issue as it was working last thursday | 1 |
14,867 | 18,276,190,264 | IssuesEvent | 2021-10-04 19:08:03 | 2i2c-org/pilot-hubs | https://api.github.com/repos/2i2c-org/pilot-hubs | closed | Run through our internal hub cost estimates | :label: team-process | Right now, we have very conservative estimates of how much labor it will take to run hub infrastructure. We should look at these estimates, and try to refine the numbers so that we can give reasonable prices.
# What goes into labor?
- Initial conversations about what is needed for a hub
- Initial deployment of the hub
- Customization of the hub beyond default environments
- Ongoing maintenance of hubs, that may or may not be tied directly to a single hub
- *Dedicated* maintenance for a given hub (e.g., being asked to change the environment)
- User support for a given hub (e.g., being asked a question about the hub, or a perceived outage)
# What hub types to consider
Below I'll list a few types of hubs, a brief description, and a **totally arbitrary** estimation of "days per month" for development and operations. We should discuss these days/month and get to a number that is more realistic!
| Hub Type | User | Description | Dev Days / Month | Ops Days / Month |
|-----------------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------|------------------|------------------|
| Education Light | This is the most basic hub we have. Simple data science environment with minimal resources for students. | small or basic class | 0.1 | 0.25 |
| Education Full | This is a more complex educational environment (e.g., with more packages, autograding, etc) | complex or large class | 4 | 4 |
| Research Light | "basic data science environment". Think "The PyData Stack" but nothing extremely complex. | Lab/Team with modest funds or that needs a minimal envt. | 0.5 | 1 |
| Research Pangeo | research light + some more complex and scalable infrastructure like Dask Kubernetes | Lab/Team with larger datasets and more complex needs | 4 | 4 | | 1.0 | Run through our internal hub cost estimates - Right now, we have very conservative estimates of how much labor it will take to run hub infrastructure. We should look at these estimates, and try to refine the numbers so that we can give reasonable prices.
# What goes into labor?
- Initial conversations about what is needed for a hub
- Initial deployment of the hub
- Customization of the hub beyond default environments
- Ongoing maintenance of hubs, that may or may not be tied directly to a single hub
- *Dedicated* maintenance for a given hub (e.g., being asked to change the environment)
- User support for a given hub (e.g., being asked a question about the hub, or a perceived outage)
# What hub types to consider
Below I'll list a few types of hubs, a brief description, and a **totally arbitrary** estimation of "days per month" for development and operations. We should discuss these days/month and get to a number that is more realistic!
| Hub Type | User | Description | Dev Days / Month | Ops Days / Month |
|-----------------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------|------------------|------------------|
| Education Light | This is the most basic hub we have. Simple data science environment with minimal resources for students. | small or basic class | 0.1 | 0.25 |
| Education Full | This is a more complex educational environment (e.g., with more packages, autograding, etc) | complex or large class | 4 | 4 |
| Research Light | "basic data science environment". Think "The PyData Stack" but nothing extremely complex. | Lab/Team with modest funds or that needs a minimal envt. | 0.5 | 1 |
| Research Pangeo | research light + some more complex and scalable infrastructure like Dask Kubernetes | Lab/Team with larger datasets and more complex needs | 4 | 4 | | process | run through our internal hub cost estimates right now we have very conservative estimates of how much labor it will take to run hub infrastructure we should look at these estimates and try to refine the numbers so that we can give reasonable prices what goes into labor initial conversations about what is needed for a hub initial deployment of the hub customization of the hub beyond default environments ongoing maintenance of hubs that may or may not be tied directly to a single hub dedicated maintenance for a given hub e g being asked to change the environment user support for a given hub e g being asked a question about the hub or a perceived outage what hub types to consider below i ll list a few types of hubs a brief description and a totally arbitrary estimation of days per month for development and operations we should discuss these days month and get to a number that is more realistic hub type user description dev days month ops days month education light this is the most basic hub we have simple data science environment with minimal resources for students small or basic class education full this is a more complex educational environment e g with more packages autograding etc complex or large class research light basic data science environment think the pydata stack but nothing extremely complex lab team with modest funds or that needs a minimal envt research pangeo research light some more complex and scalable infrastructure like dask kubernetes lab team with larger datasets and more complex needs | 1 |
153,921 | 19,708,657,423 | IssuesEvent | 2022-01-13 01:49:01 | artsking/linux-4.19.72_CVE-2020-14386 | https://api.github.com/repos/artsking/linux-4.19.72_CVE-2020-14386 | opened | WS-2021-0334 (High) detected in linux-yoctov5.4.51 | security vulnerability | ## WS-2021-0334 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_synproxy_core.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_synproxy_core.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Linux/Kernel in versions v5.13-rc1 to v5.13-rc6 is vulnerable to out of bounds when parsing TCP options
<p>Publish Date: 2021-05-31
<p>URL: <a href=https://github.com/gregkh/linux/commit/6defc77d48eff74075b80ad5925061b2fc010d98>WS-2021-0334</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1000919">https://osv.dev/vulnerability/UVI-2021-1000919</a></p>
<p>Release Date: 2021-05-31</p>
<p>Fix Resolution: v5.4.128</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2021-0334 (High) detected in linux-yoctov5.4.51 - ## WS-2021-0334 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_synproxy_core.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_synproxy_core.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Linux/Kernel in versions v5.13-rc1 to v5.13-rc6 is vulnerable to out of bounds when parsing TCP options
<p>Publish Date: 2021-05-31
<p>URL: <a href=https://github.com/gregkh/linux/commit/6defc77d48eff74075b80ad5925061b2fc010d98>WS-2021-0334</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1000919">https://osv.dev/vulnerability/UVI-2021-1000919</a></p>
<p>Release Date: 2021-05-31</p>
<p>Fix Resolution: v5.4.128</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | ws high detected in linux ws high severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in base branch master vulnerable source files net netfilter nf synproxy core c net netfilter nf synproxy core c vulnerability details linux kernel in versions to is vulnerable to out of bounds when parsing tcp options publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
49,538 | 13,453,957,611 | IssuesEvent | 2020-09-09 02:25:36 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | [Desktop] update install extension text to remove review language | OS/Desktop feature/extensions l10n needs-discussion needs-text-change security | ## Description
<!--Provide a brief description of the issue-->
Current text when installing extensions implies a Brave review.
Text should read as:
"Brave does not review extensions for security and safety. Only install this extension if you trust the developer."
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Go to Extension Web Store https://chrome.google.com/webstore/category/extensions?hl=en-US
2. Select an extension to 'Add to Brave'
3. Click 'Add to Brave' and notice dialog.
## Actual result:
<img width="487" alt="Screen Shot 2020-08-12 at 9 18 01 PM" src="https://user-images.githubusercontent.com/5951041/90094075-dd7e0f80-dce1-11ea-886c-6577389ad30a.png">
## Expected result:

## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
See str.
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release? yes
- Can you reproduce this issue with the beta channel? yes
- Can you reproduce this issue with the nightly channel? yes
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields? n/a
- Does the issue resolve itself when disabling Brave Rewards? n/a
- Is the issue reproducible on the latest version of Chrome? n/a
## Miscellaneous Information:
Related prior issues:
https://github.com/brave/brave-browser/issues/1408
https://github.com/brave/brave-browser/issues/3231
cc: @karenkliu @fmarier @pes10k
| True | [Desktop] update install extension text to remove review language - ## Description
<!--Provide a brief description of the issue-->
Current text when installing extensions implies a Brave review.
Text should read as:
"Brave does not review extensions for security and safety. Only install this extension if you trust the developer."
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Go to Extension Web Store https://chrome.google.com/webstore/category/extensions?hl=en-US
2. Select an extension to 'Add to Brave'
3. Click 'Add to Brave' and notice dialog.
## Actual result:
<img width="487" alt="Screen Shot 2020-08-12 at 9 18 01 PM" src="https://user-images.githubusercontent.com/5951041/90094075-dd7e0f80-dce1-11ea-886c-6577389ad30a.png">
## Expected result:

## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
See str.
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release? yes
- Can you reproduce this issue with the beta channel? yes
- Can you reproduce this issue with the nightly channel? yes
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields? n/a
- Does the issue resolve itself when disabling Brave Rewards? n/a
- Is the issue reproducible on the latest version of Chrome? n/a
## Miscellaneous Information:
Related prior issues:
https://github.com/brave/brave-browser/issues/1408
https://github.com/brave/brave-browser/issues/3231
cc: @karenkliu @fmarier @pes10k
| non_process | update install extension text to remove review language description current text when installing extensions implies a brave review text should read as brave does not review extensions for security and safety only install this extension if you trust the developer steps to reproduce go to extension web store select an extension to add to brave click add to brave and notice dialog actual result img width alt screen shot at pm src expected result reproduces how often see str brave version brave version info version channel information can you reproduce this issue with the current release yes can you reproduce this issue with the beta channel yes can you reproduce this issue with the nightly channel yes other additional information does the issue resolve itself when disabling brave shields n a does the issue resolve itself when disabling brave rewards n a is the issue reproducible on the latest version of chrome n a miscellaneous information related prior issues cc karenkliu fmarier | 0 |
138,219 | 11,195,223,284 | IssuesEvent | 2020-01-03 05:20:56 | jojapoppa/fedoragold-wallet-electron | https://api.github.com/repos/jojapoppa/fedoragold-wallet-electron | closed | Thin wallet timeout (Internal Node Error) | bug completed - in testing in next release | Thin wallet times out and displays "Internal Node Error", and yet often still sends the coins. | 1.0 | Thin wallet timeout (Internal Node Error) - Thin wallet times out and displays "Internal Node Error", and yet often still sends the coins. | non_process | thin wallet timeout internal node error thin wallet times out and displays internal node error and yet often still sends the coins | 0 |
524 | 2,997,193,201 | IssuesEvent | 2015-07-23 05:05:58 | mitchellh/packer | https://api.github.com/repos/mitchellh/packer | closed | unknown configuration key: "atlas_url" | bug post-processor/atlas | Hey guys,
`* unknown configuration key: "atlas_url"` is an error I'm getting while trying to use the atlas post-processor as documented [here](https://www.packer.io/docs/post-processors/atlas.html)
I'm using packer 0.8.2
[Debug output](https://gist.github.com/samdunne/a1cf4a11d6782ddd73ea)
[Reproducible Packer JSON](https://gist.github.com/samdunne/7511f1a06e3fa2e2a518)
Let me know if you need more information. | 1.0 | unknown configuration key: "atlas_url" - Hey guys,
`* unknown configuration key: "atlas_url"` is an error I'm getting while trying to use the atlas post-processor as documented [here](https://www.packer.io/docs/post-processors/atlas.html)
I'm using packer 0.8.2
[Debug output](https://gist.github.com/samdunne/a1cf4a11d6782ddd73ea)
[Reproducible Packer JSON](https://gist.github.com/samdunne/7511f1a06e3fa2e2a518)
Let me know if you need more information. | process | unknown configuration key atlas url hey guys unknown configuration key atlas url is an error i m getting while trying to use the atlas post processor as documented i m using packer let me know if you need more information | 1 |
80,853 | 10,212,688,926 | IssuesEvent | 2019-08-14 20:06:12 | opensource2fa/Server | https://api.github.com/repos/opensource2fa/Server | closed | Documentation | Essential documentation | Specific JSON packets to send back and forth
Build instructions
How it runs
Description
Checkout/download instructions
.gitignore
Names
Issue tracker | 1.0 | Documentation - Specific JSON packets to send back and forth
Build instructions
How it runs
Description
Checkout/download instructions
.gitignore
Names
Issue tracker | non_process | documentation specific json packets to send back and forth build instructions how it runs description checkout download instructions gitignore names issue tracker | 0 |
16,162 | 20,599,215,013 | IssuesEvent | 2022-03-06 01:19:38 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | opened | QgsProcessingUtils.combineFields() gets confused when there is a conflict with field names | Processing Bug | ### What is the bug or the crash?
When the index `2` is appended to a conflicting name, we should also check that such a new name does not exist in the `fieldsB` list. Otherwise, the algorithm gets confused.
This only occurs if we don't use a prefix parameter value.
This is much clearer in the code snippet below.
### Steps to reproduce the issue
Just run this code snippet in the QGIS Python Console.
At the end, we expect a combined list with 5 fields, but we get 4.
```
field1A = QgsField('ID')
field1B = QgsField('FK')
fields1 = QgsFields()
fields1.append(field1A)
fields1.append(field1B)
field2A = QgsField('ID')
field2B = QgsField('ID_2')
field2C = QgsField('COUNT')
fields2 = QgsFields()
fields2.append(field2A)
fields2.append(field2B)
fields2.append(field2C)
combined = QgsProcessingUtils.combineFields(fields1, fields2)
print(len(combined)) # Prints '4'
```
### Versions
Tested in QGIS master.
### Supported QGIS version
- [ ] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_ | 1.0 | QgsProcessingUtils.combineFields() gets confused when there is a conflict with field names - ### What is the bug or the crash?
When the index `2` is appended to a conflicting name, we should also check that such a new name does not exist in the `fieldsB` list. Otherwise, the algorithm gets confused.
This only occurs if we don't use a prefix parameter value.
This is much clearer in the code snippet below.
### Steps to reproduce the issue
Just run this code snippet in the QGIS Python Console.
At the end, we expect a combined list with 5 fields, but we get 4.
```
field1A = QgsField('ID')
field1B = QgsField('FK')
fields1 = QgsFields()
fields1.append(field1A)
fields1.append(field1B)
field2A = QgsField('ID')
field2B = QgsField('ID_2')
field2C = QgsField('COUNT')
fields2 = QgsFields()
fields2.append(field2A)
fields2.append(field2B)
fields2.append(field2C)
combined = QgsProcessingUtils.combineFields(fields1, fields2)
print(len(combined)) # Prints '4'
```
### Versions
Tested in QGIS master.
### Supported QGIS version
- [ ] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_ | process | qgsprocessingutils combinefields gets confused when there is a conflict with field names what is the bug or the crash when the index is appended to a conflicting name we should also check that such a new name does not exist in the fieldsb list otherwise the algorithm gets confused this only occurs if we don t use a prefix parameter value this is much clearer in the code snippet below steps to reproduce the issue just run this code snippet in the qgis python console at the end we expect a combined list with fields but we get qgsfield id qgsfield fk qgsfields append append qgsfield id qgsfield id qgsfield count qgsfields append append append combined qgsprocessingutils combinefields print len combined prints versions tested in qgis master supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response | 1 |
5,399 | 8,231,042,757 | IssuesEvent | 2018-09-07 14:45:03 | GoogleCloudPlatform/golang-samples | https://api.github.com/repos/GoogleCloudPlatform/golang-samples | closed | README: build badge is red | priority: p3 type: process | I think we should just remove it?
Or have it be accurate and point to Kokoro? | 1.0 | README: build badge is red - I think we should just remove it?
Or have it be accurate and point to Kokoro? | process | readme build badge is red i think we should just remove it or have it be accurate and point to kokoro | 1 |
1,037 | 3,508,649,827 | IssuesEvent | 2016-01-08 18:52:12 | symfony/symfony | https://api.github.com/repos/symfony/symfony | closed | Strange behaviour of \Symfony\Component\Process\Process::start on MacOS | Process | Hi, i'm trying to run custom command in background from controller.
Here's my simple code
$cwd = '/path/to/project';
$command = new Process(
'bin/console admin:settings_test', $cwd, null, null, 3600
);
$command->start();
When I run this, process isn't starting. I've tried to pass callback function to start – no effect.
So i've changed start method like this:
$this->process = proc_open($commandline, $descriptors, $this->processPipes->pipes, $this->cwd, $this->env, $this->options);
var_dump($this->process); die(); // this line was added
It's interesting, that i've got no response from var_dump function and my process finally started!
The most strange thing, is if I'll move var_dump and die calls to few lines down, I will see var_dump output and my command will not be started.
Am I doing something wrong?
So for now I should use my own custom Process class, extending vendor one to avoid this issue, with overriding almost the whole class. | 1.0 | Strange behaviour of \Symfony\Component\Process\Process::start on MacOS - Hi, i'm trying to run custom command in background from controller.
Here's my simple code
$cwd = '/path/to/project';
$command = new Process(
'bin/console admin:settings_test', $cwd, null, null, 3600
);
$command->start();
When I run this, process isn't starting. I've tried to pass callback function to start – no effect.
So i've changed start method like this:
$this->process = proc_open($commandline, $descriptors, $this->processPipes->pipes, $this->cwd, $this->env, $this->options);
var_dump($this->process); die(); // this line was added
It's interesting, that i've got no response from var_dump function and my process finally started!
The most strange thing, is if I'll move var_dump and die calls to few lines down, I will see var_dump output and my command will not be started.
Am I doing something wrong?
So for now I should use my own custom Process class, extending vendor one to avoid this issue, with overriding almost the whole class. | process | strange behaviour of symfony component process process start on macos hi i m trying to run custom command in background from controller here s my simple code cwd path to project command new process bin console admin settings test cwd null null command start when i run this process isn t starting i ve tried to pass callback function to start – no effect so i ve changed start method like this this process proc open commandline descriptors this processpipes pipes this cwd this env this options var dump this process die this line was added it s interesting that i ve got no response from var dump function and my process finally started the most strange thing is if i ll move var dump and die calls to few lines down i will see var dump output and my command will not be started am i doing something wrong so for now i should use my own custom process class extending vendor one to avoid this issue with overriding almost the whole class | 1 |
279,153 | 8,658,029,351 | IssuesEvent | 2018-11-27 23:12:21 | supergiant/control | https://api.github.com/repos/supergiant/control | closed | 2.0 UI: Add node only returns machine types if the user has a cloud account named 'aws' | High Priority | 
In addition, it looks like we have no information on AWS clusters to denote the AZ that cluster is in, so the code for this feature just uses the first AZ in the array that comes back
---
### More Info
**Reported by:** Joe Kane (joe@qbox.io)
**Source URL**: [http://localhost:4200/clusters/4a86dbca/add-node](http://localhost:4200/clusters/4a86dbca/add-node)
<table><tr><td><strong>Browser</strong></td><td>Chrome 70.0.3538.102</td></tr><tr><td><strong>Screen Size</strong></td><td>1366 x 768</td></tr><tr><td><strong>OS</strong></td><td>OS X 10.14.1</td></tr><tr><td><strong>Viewport Size</strong></td><td>1680 x 971</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@2x</td></tr></table> | 1.0 | 2.0 UI: Add node only returns machine types if the user has a cloud account named 'aws' - 
In addition, it looks like we have no information on AWS clusters to denote the AZ that cluster is in, so the code for this feature just uses the first AZ in the array that comes back
---
### More Info
**Reported by:** Joe Kane (joe@qbox.io)
**Source URL**: [http://localhost:4200/clusters/4a86dbca/add-node](http://localhost:4200/clusters/4a86dbca/add-node)
<table><tr><td><strong>Browser</strong></td><td>Chrome 70.0.3538.102</td></tr><tr><td><strong>Screen Size</strong></td><td>1366 x 768</td></tr><tr><td><strong>OS</strong></td><td>OS X 10.14.1</td></tr><tr><td><strong>Viewport Size</strong></td><td>1680 x 971</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@2x</td></tr></table> | non_process | ui add node only returns machine types if the user has a cloud account named aws in addition it looks like we have no information on aws clusters to denote the az that cluster is in so the code for this feature just uses the first az in the array that comes back more info reported by joe kane joe qbox io source url browser chrome screen size x os os x viewport size x zoom level pixel ratio | 0 |
18,496 | 24,551,005,922 | IssuesEvent | 2022-10-12 12:36:49 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [iOS] [Offline indicator] Participant is navigating to the Study lists screen when participant clicks on Review button in the following scenario | Bug P1 iOS Process: Fixed Process: Tested dev | Steps:
1. Signup or Sign in to the mobile
2. Enroll to the study
3. in SB, Update the consent for the enrolled participant
4. Click on the Enrolled study in the mobile app
5. Turn off the internet
6. Click on the Review and observe
AR: Participant is navigating the Study list screen and an offline error message is getting displayed
ER: Offline error message should get displayed | 2.0 | [iOS] [Offline indicator] Participant is navigating to the Study lists screen when participant clicks on Review button in the following scenario - Steps:
1. Signup or Sign in to the mobile
2. Enroll to the study
3. in SB, Update the consent for the enrolled participant
4. Click on the Enrolled study in the mobile app
5. Turn off the internet
6. Click on the Review and observe
AR: Participant is navigating the Study list screen and an offline error message is getting displayed
ER: Offline error message should get displayed | process | participant is navigating to the study lists screen when participant clicks on review button in the following scenario steps signup or sign in to the mobile enroll to the study in sb update the consent for the enrolled participant click on the enrolled study in the mobile app turn off the internet click on the review and observe ar participant is navigating the study list screen and an offline error message is getting displayed er offline error message should get displayed | 1 |
27,743 | 2,695,474,290 | IssuesEvent | 2015-04-02 06:12:38 | UnifiedViews/Plugin-DevEnv | https://api.github.com/repos/UnifiedViews/Plugin-DevEnv | closed | Tabs are not localizable | priority: High resolution: fixed severity: bug status: resolved | Support for tabs in DPU configurations (introduced in f90c61af749f7fc9a8c53cc0e7f58ee539511ce3, renamed in ba7d066c58a5e66e520e577f2e76dec9c80310dc, finaly merged into develop in 7c5d9443ad9163838f09ed5b43763189510a370c) are not localisable)
| 1.0 | Tabs are not localizable - Support for tabs in DPU configurations (introduced in f90c61af749f7fc9a8c53cc0e7f58ee539511ce3, renamed in ba7d066c58a5e66e520e577f2e76dec9c80310dc, finaly merged into develop in 7c5d9443ad9163838f09ed5b43763189510a370c) are not localisable)
| non_process | tabs are not localizable support for tabs in dpu configurations introduced in renamed in finaly merged into develop in are not localisable | 0 |
13,967 | 16,742,848,499 | IssuesEvent | 2021-06-11 12:06:24 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [Android] Resources are getting displayed in mobile app irrespective of scheduled date | Android Bug P1 Process: Fixed Process: Tested QA Process: Tested dev | Resources are getting displayed in the mobile app irrespective of the scheduled date
Eg:
**Study name:** Mobile App Open study(Original)
**Resources**
1. Resource with Custom schedule
2. Resource scheduled on 14/6/21 | 3.0 | [Android] Resources are getting displayed in mobile app irrespective of scheduled date - Resources are getting displayed in the mobile app irrespective of the scheduled date
Eg:
**Study name:** Mobile App Open study(Original)
**Resources**
1. Resource with Custom schedule
2. Resource scheduled on 14/6/21 | process | resources are getting displayed in mobile app irrespective of scheduled date resources are getting displayed in the mobile app irrespective of the scheduled date eg study name mobile app open study original resources resource with custom schedule resource scheduled on | 1 |
318,200 | 27,294,421,618 | IssuesEvent | 2023-02-23 19:04:22 | pulp/pulp_rpm | https://api.github.com/repos/pulp/pulp_rpm | opened | Revisit all API list calls to deal with the pagination | Task Tests Triage-Needed | A generic fixture will be added to pulpcore. Concrete implementation will be done on the plugin side.
In this task, we will revisit all calls, like: `rpm_package_api.list(repository_version=repository.latest_version_href).results`. De-pagination should be done in the background in case of multiple items are being returned from the API.
```
def get_upstream_distributions(self, labels=None):
if labels:
params = {"pulp_label_select": labels}
else:
params = {}
offset = 0
list_size = 100
while True:
distributions = self.distribution_ctx_cls(self.pulp_ctx).list(list_size, offset, params)
for distro in distributions:
yield distro
if len(distributions) < list_size:
break
offset += list_size
``` | 1.0 | Revisit all API list calls to deal with the pagination - A generic fixture will be added to pulpcore. Concrete implementation will be done on the plugin side.
In this task, we will revisit all calls, like: `rpm_package_api.list(repository_version=repository.latest_version_href).results`. De-pagination should be done in the background in case of multiple items are being returned from the API.
```
def get_upstream_distributions(self, labels=None):
if labels:
params = {"pulp_label_select": labels}
else:
params = {}
offset = 0
list_size = 100
while True:
distributions = self.distribution_ctx_cls(self.pulp_ctx).list(list_size, offset, params)
for distro in distributions:
yield distro
if len(distributions) < list_size:
break
offset += list_size
``` | non_process | revisit all api list calls to deal with the pagination a generic fixture will be added to pulpcore concrete implementation will be done on the plugin side in this task we will revisit all calls like rpm package api list repository version repository latest version href results de pagination should be done in the background in case of multiple items are being returned from the api def get upstream distributions self labels none if labels params pulp label select labels else params offset list size while true distributions self distribution ctx cls self pulp ctx list list size offset params for distro in distributions yield distro if len distributions list size break offset list size | 0 |
116,779 | 9,883,762,635 | IssuesEvent | 2019-06-24 20:16:07 | pachyderm/pachyderm | https://api.github.com/repos/pachyderm/pachyderm | closed | 'make lint' flakes in CI | testing | Example failure (went away after rerunning the misc tests):
```
Running misc test suite
# golang.org/x/tools/go/internal/gcimporter
../../../golang.org/x/tools/go/internal/gcimporter/iimport.go:540:10: undefined: types.NewInterface2
make: *** [lint] Error 2
```
I wonder if some of our test failures are due to inconsistency in the test environment | 1.0 | 'make lint' flakes in CI - Example failure (went away after rerunning the misc tests):
```
Running misc test suite
# golang.org/x/tools/go/internal/gcimporter
../../../golang.org/x/tools/go/internal/gcimporter/iimport.go:540:10: undefined: types.NewInterface2
make: *** [lint] Error 2
```
I wonder if some of our test failures are due to inconsistency in the test environment | non_process | make lint flakes in ci example failure went away after rerunning the misc tests running misc test suite golang org x tools go internal gcimporter golang org x tools go internal gcimporter iimport go undefined types make error i wonder if some of our test failures are due to inconsistency in the test environment | 0 |
1,583 | 4,175,201,847 | IssuesEvent | 2016-06-21 16:08:15 | kerubistan/kerub | https://api.github.com/repos/kerubistan/kerub | closed | iscsi share virtual storage | component:data processing enhancement priority: normal | Planner should be able to share virtual disks with iscsi if the VM can run on another host | 1.0 | iscsi share virtual storage - Planner should be able to share virtual disks with iscsi if the VM can run on another host | process | iscsi share virtual storage planner should be able to share virtual disks with iscsi if the vm can run on another host | 1 |
13,906 | 16,664,715,999 | IssuesEvent | 2021-06-07 00:09:47 | NixOS/nixpkgs | https://api.github.com/repos/NixOS/nixpkgs | closed | AMI release for 21.05 | 0.kind: packaging request 6.topic: release process | **Project description**
Upload the AMIs to amazon.
This has been requested via issues at least half of the time, so it doesn't seem to be part of the release process. We do upload them each time though, so not doing it in the release just wastes community energy with people trying to figure out who to contact etc.
- [x] add AMI upload to release process
- [ ] upload 21.05 AMIs
- [ ] same for aarch64 images #52779
Previous requests:
- 20.09 https://github.com/NixOS/nixpkgs/issues/72132
- 20.03 https://github.com/NixOS/nixpkgs/issues/85857
- 18.09 https://github.com/NixOS/nixpkgs/issues/48222
- 17.09 https://github.com/NixOS/nixpkgs/issues/29976
cc @jonringer @edolstra | 1.0 | AMI release for 21.05 - **Project description**
Upload the AMIs to amazon.
This has been requested via issues at least half of the time, so it doesn't seem to be part of the release process. We do upload them each time though, so not doing it in the release just wastes community energy with people trying to figure out who to contact etc.
- [x] add AMI upload to release process
- [ ] upload 21.05 AMIs
- [ ] same for aarch64 images #52779
Previous requests:
- 20.09 https://github.com/NixOS/nixpkgs/issues/72132
- 20.03 https://github.com/NixOS/nixpkgs/issues/85857
- 18.09 https://github.com/NixOS/nixpkgs/issues/48222
- 17.09 https://github.com/NixOS/nixpkgs/issues/29976
cc @jonringer @edolstra | process | ami release for project description upload the amis to amazon this has been requested via issues at least half of the time so it doesn t seem to be part of the release process we do upload them each time though so not doing it in the release just wastes community energy with people trying to figure out who to contact etc add ami upload to release process upload amis same for images previous requests cc jonringer edolstra | 1 |
656,596 | 21,768,219,202 | IssuesEvent | 2022-05-13 06:06:05 | PyCQA/pylint | https://api.github.com/repos/PyCQA/pylint | closed | Pylint does not respect ignores in `--recursive=y` mode | Bug 🪳 High priority | ### Bug description
Pylint does not respect the `--ignore`, `--ignore-paths`, or `--ignore-patterns` setting when running in recursive mode. This contradicts the documentation and seriously compromises the usefulness of recursive mode.
### Configuration
_No response_
### Command used
```shell
### .a/foo.py
# import re
### bar.py
# import re
pylint --recursive=y .
pylint --recursive=y --ignore=.a .
pylint --recursive=y --ignore-paths=.a .
pylint --recursive=y --ignore-patterns="^\.a" .
```
### Pylint output
All of these commands give the same output:
```
************* Module bar
bar.py:1:0: C0104: Disallowed name "bar" (disallowed-name)
bar.py:1:0: C0114: Missing module docstring (missing-module-docstring)
bar.py:1:0: W0611: Unused import re (unused-import)
************* Module foo
.a/foo.py:1:0: C0104: Disallowed name "foo" (disallowed-name)
.a/foo.py:1:0: C0114: Missing module docstring (missing-module-docstring)
.a/foo.py:1:0: W0611: Unused import re (unused-import)
```
### Expected behavior
`foo.py` should be ignored by all of the above commands, because it is in an ignored directory (even the first command with no ignore setting should skip it, since the default value of `ignore-patterns` is `"^\."`.
For reference, the docs for the various ignore settings from `pylint --help`:
```
--ignore=<file>[,<file>...]
Files or directories to be skipped. They should be
base names, not paths. [current: CVS]
--ignore-patterns=<pattern>[,<pattern>...]
Files or directories matching the regex patterns are
skipped. The regex matches against base names, not
paths. The default value ignores emacs file locks
[current: ^\.#]
--ignore-paths=<pattern>[,<pattern>...]
Add files or directories matching the regex patterns
to the ignore-list. The regex matches against paths
and can be in Posix or Windows format. [current: none]
```
### Pylint version
```shell
pylint 2.13.7
python 3.9.12
```
### OS / Environment
_No response_
### Additional dependencies
_No response_ | 1.0 | Pylint does not respect ignores in `--recursive=y` mode - ### Bug description
Pylint does not respect the `--ignore`, `--ignore-paths`, or `--ignore-patterns` setting when running in recursive mode. This contradicts the documentation and seriously compromises the usefulness of recursive mode.
### Configuration
_No response_
### Command used
```shell
### .a/foo.py
# import re
### bar.py
# import re
pylint --recursive=y .
pylint --recursive=y --ignore=.a .
pylint --recursive=y --ignore-paths=.a .
pylint --recursive=y --ignore-patterns="^\.a" .
```
### Pylint output
All of these commands give the same output:
```
************* Module bar
bar.py:1:0: C0104: Disallowed name "bar" (disallowed-name)
bar.py:1:0: C0114: Missing module docstring (missing-module-docstring)
bar.py:1:0: W0611: Unused import re (unused-import)
************* Module foo
.a/foo.py:1:0: C0104: Disallowed name "foo" (disallowed-name)
.a/foo.py:1:0: C0114: Missing module docstring (missing-module-docstring)
.a/foo.py:1:0: W0611: Unused import re (unused-import)
```
### Expected behavior
`foo.py` should be ignored by all of the above commands, because it is in an ignored directory (even the first command with no ignore setting should skip it, since the default value of `ignore-patterns` is `"^\."`.
For reference, the docs for the various ignore settings from `pylint --help`:
```
--ignore=<file>[,<file>...]
Files or directories to be skipped. They should be
base names, not paths. [current: CVS]
--ignore-patterns=<pattern>[,<pattern>...]
Files or directories matching the regex patterns are
skipped. The regex matches against base names, not
paths. The default value ignores emacs file locks
[current: ^\.#]
--ignore-paths=<pattern>[,<pattern>...]
Add files or directories matching the regex patterns
to the ignore-list. The regex matches against paths
and can be in Posix or Windows format. [current: none]
```
### Pylint version
```shell
pylint 2.13.7
python 3.9.12
```
### OS / Environment
_No response_
### Additional dependencies
_No response_ | non_process | pylint does not respect ignores in recursive y mode bug description pylint does not respect the ignore ignore paths or ignore patterns setting when running in recursive mode this contradicts the documentation and seriously compromises the usefulness of recursive mode configuration no response command used shell a foo py import re bar py import re pylint recursive y pylint recursive y ignore a pylint recursive y ignore paths a pylint recursive y ignore patterns a pylint output all of these commands give the same output module bar bar py disallowed name bar disallowed name bar py missing module docstring missing module docstring bar py unused import re unused import module foo a foo py disallowed name foo disallowed name a foo py missing module docstring missing module docstring a foo py unused import re unused import expected behavior foo py should be ignored by all of the above commands because it is in an ignored directory even the first command with no ignore setting should skip it since the default value of ignore patterns is for reference the docs for the various ignore settings from pylint help ignore files or directories to be skipped they should be base names not paths ignore patterns files or directories matching the regex patterns are skipped the regex matches against base names not paths the default value ignores emacs file locks ignore paths add files or directories matching the regex patterns to the ignore list the regex matches against paths and can be in posix or windows format pylint version shell pylint python os environment no response additional dependencies no response | 0 |
236,740 | 26,047,892,880 | IssuesEvent | 2022-12-22 15:52:56 | aws/eks-distro-build-tooling | https://api.github.com/repos/aws/eks-distro-build-tooling | closed | New Golang Security Announcement: [security] Go 1.19.4 and Go 1.18.9 are released | security golang on-call external | Go to [the Golang group](https://groups.google.com/g/golang-announce/search?q=%5Bsecurity%5D) to view the announcement.
Follow the on-call runbook to backport fixes to all supported Golang versions. | True | New Golang Security Announcement: [security] Go 1.19.4 and Go 1.18.9 are released - Go to [the Golang group](https://groups.google.com/g/golang-announce/search?q=%5Bsecurity%5D) to view the announcement.
Follow the on-call runbook to backport fixes to all supported Golang versions. | non_process | new golang security announcement go and go are released go to to view the announcement follow the on call runbook to backport fixes to all supported golang versions | 0 |
807,332 | 29,995,841,602 | IssuesEvent | 2023-06-26 05:24:13 | KingSupernova31/RulesGuru | https://api.github.com/repos/KingSupernova31/RulesGuru | opened | Talk to Kyle Ryc about their phone issues | bug medium priority | Canadian L2 Kyle Ryc was telling me about several issues they were having with RG on their phone. I or someone else needs to reach out to them, get a list of the issues, and fix them. | 1.0 | Talk to Kyle Ryc about their phone issues - Canadian L2 Kyle Ryc was telling me about several issues they were having with RG on their phone. I or someone else needs to reach out to them, get a list of the issues, and fix them. | non_process | talk to kyle ryc about their phone issues canadian kyle ryc was telling me about several issues they were having with rg on their phone i or someone else needs to reach out to them get a list of the issues and fix them | 0 |
81,996 | 23,640,645,754 | IssuesEvent | 2022-08-25 16:43:41 | dotnet/arcade | https://api.github.com/repos/dotnet/arcade | closed | Build failed: dotnet-arcade-validation-official/main #20220823.2 | Build Failed | Build [#20220823.2](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=1962728) failed
## :x: : internal / dotnet-arcade-validation-official failed
### Summary
**Finished** - Wed, 24 Aug 2022 02:15:32 GMT
**Duration** - 136 minutes
**Requested for** - DotNet Bot
**Reason** - batchedCI
### Details
#### Validate Publishing
- :x: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1962728/logs/362) - PowerShell exited with code '1'.
### Changes
- [f39537bc](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/017fb734-e4b4-4cc1-a90f-98a09ac25cd5/commit/f39537bc513b6be2dc2f0fd18dfc085e092ac128) - dotnet-maestro[bot] - Update dependencies from https://github.com/dotnet/arcade build 20220823.2 (#3314)
| 1.0 | Build failed: dotnet-arcade-validation-official/main #20220823.2 - Build [#20220823.2](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=1962728) failed
## :x: : internal / dotnet-arcade-validation-official failed
### Summary
**Finished** - Wed, 24 Aug 2022 02:15:32 GMT
**Duration** - 136 minutes
**Requested for** - DotNet Bot
**Reason** - batchedCI
### Details
#### Validate Publishing
- :x: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1962728/logs/362) - PowerShell exited with code '1'.
### Changes
- [f39537bc](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/017fb734-e4b4-4cc1-a90f-98a09ac25cd5/commit/f39537bc513b6be2dc2f0fd18dfc085e092ac128) - dotnet-maestro[bot] - Update dependencies from https://github.com/dotnet/arcade build 20220823.2 (#3314)
| non_process | build failed dotnet arcade validation official main build failed x internal dotnet arcade validation official failed summary finished wed aug gmt duration minutes requested for dotnet bot reason batchedci details validate publishing x powershell exited with code changes dotnet maestro update dependencies from build | 0 |
212,910 | 16,487,564,940 | IssuesEvent | 2021-05-24 20:27:58 | WormBase/wormcells-viz | https://api.github.com/repos/WormBase/wormcells-viz | closed | Restrict list of genes returned by the autocomplete to the set in the matrix | ready_to_test | Add a new api endpoint to retrieve the list of valid genes | 1.0 | Restrict list of genes returned by the autocomplete to the set in the matrix - Add a new api endpoint to retrieve the list of valid genes | non_process | restrict list of genes returned by the autocomplete to the set in the matrix add a new api endpoint to retrieve the list of valid genes | 0 |
75,453 | 20,820,673,872 | IssuesEvent | 2022-03-18 15:04:12 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | [CI] DockerYmlTestSuiteIT test {p0=/10_info/Info} failing | :Delivery/Build >test-failure Team:Delivery | **Build scan:**
https://gradle-enterprise.elastic.co/s/crba5pco4hr2q/tests/:distribution:docker:integTest/org.elasticsearch.docker.test.DockerYmlTestSuiteIT/test%20%7Bp0=%2F10_info%2FInfo%7D
**Reproduction line:**
`./gradlew ':distribution:docker:integTest' --tests "org.elasticsearch.docker.test.DockerYmlTestSuiteIT.test {p0=/10_info/Info}" -Dtests.seed=252CB7139FB98A25 -Dtests.locale=id -Dtests.timezone=ACT -Druntime.java=17`
**Applicable branches:**
master
**Reproduces locally?:**
No
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.docker.test.DockerYmlTestSuiteIT&tests.test=test%20%7Bp0%3D/10_info/Info%7D
**Failure excerpt:**
```
org.elasticsearch.client.ResponseException: method [DELETE], host [https://localhost:49155], URI [*,-.ds-ilm-history-*?expand_wildcards=open%2Cclosed%2Chidden], status line [HTTP/1.1 400 Bad Request]
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"index [.ds-.logs-deprecation.elasticsearch-default-2022.02.28-000001] is the write index for data stream [.logs-deprecation.elasticsearch-default] and cannot be deleted"}],"type":"illegal_argument_exception","reason":"index [.ds-.logs-deprecation.elasticsearch-default-2022.02.28-000001] is the write index for data stream [.logs-deprecation.elasticsearch-default] and cannot be deleted"},"status":400}
at __randomizedtesting.SeedInfo.seed([252CB7139FB98A25:AD7888C93145E7DD]:0)
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:346)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:312)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:287)
at org.elasticsearch.test.rest.ESRestTestCase.wipeAllIndices(ESRestTestCase.java:935)
at org.elasticsearch.test.rest.ESRestTestCase.wipeCluster(ESRestTestCase.java:672)
at org.elasticsearch.test.rest.ESRestTestCase.cleanUpCluster(ESRestTestCase.java:385)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:1004)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831)
at java.lang.Thread.run(Thread.java:833)
``` | 1.0 | [CI] DockerYmlTestSuiteIT test {p0=/10_info/Info} failing - **Build scan:**
https://gradle-enterprise.elastic.co/s/crba5pco4hr2q/tests/:distribution:docker:integTest/org.elasticsearch.docker.test.DockerYmlTestSuiteIT/test%20%7Bp0=%2F10_info%2FInfo%7D
**Reproduction line:**
`./gradlew ':distribution:docker:integTest' --tests "org.elasticsearch.docker.test.DockerYmlTestSuiteIT.test {p0=/10_info/Info}" -Dtests.seed=252CB7139FB98A25 -Dtests.locale=id -Dtests.timezone=ACT -Druntime.java=17`
**Applicable branches:**
master
**Reproduces locally?:**
No
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.docker.test.DockerYmlTestSuiteIT&tests.test=test%20%7Bp0%3D/10_info/Info%7D
**Failure excerpt:**
```
org.elasticsearch.client.ResponseException: method [DELETE], host [https://localhost:49155], URI [*,-.ds-ilm-history-*?expand_wildcards=open%2Cclosed%2Chidden], status line [HTTP/1.1 400 Bad Request]
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"index [.ds-.logs-deprecation.elasticsearch-default-2022.02.28-000001] is the write index for data stream [.logs-deprecation.elasticsearch-default] and cannot be deleted"}],"type":"illegal_argument_exception","reason":"index [.ds-.logs-deprecation.elasticsearch-default-2022.02.28-000001] is the write index for data stream [.logs-deprecation.elasticsearch-default] and cannot be deleted"},"status":400}
at __randomizedtesting.SeedInfo.seed([252CB7139FB98A25:AD7888C93145E7DD]:0)
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:346)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:312)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:287)
at org.elasticsearch.test.rest.ESRestTestCase.wipeAllIndices(ESRestTestCase.java:935)
at org.elasticsearch.test.rest.ESRestTestCase.wipeCluster(ESRestTestCase.java:672)
at org.elasticsearch.test.rest.ESRestTestCase.cleanUpCluster(ESRestTestCase.java:385)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:1004)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831)
at java.lang.Thread.run(Thread.java:833)
``` | non_process | dockerymltestsuiteit test info info failing build scan reproduction line gradlew distribution docker integtest tests org elasticsearch docker test dockerymltestsuiteit test info info dtests seed dtests locale id dtests timezone act druntime java applicable branches master reproduces locally no failure history failure excerpt org elasticsearch client responseexception method host uri status line error root cause is the write index for data stream and cannot be deleted type illegal argument exception reason index is the write index for data stream and cannot be deleted status at randomizedtesting seedinfo seed at org elasticsearch client restclient convertresponse restclient java at org elasticsearch client restclient performrequest restclient java at org elasticsearch client restclient performrequest restclient java at org elasticsearch test rest esresttestcase wipeallindices esresttestcase java at org elasticsearch test rest esresttestcase wipecluster esresttestcase java at org elasticsearch test rest esresttestcase cleanupcluster esresttestcase java at jdk internal reflect nativemethodaccessorimpl nativemethodaccessorimpl java at jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java | 0 |
17,514 | 23,327,810,478 | IssuesEvent | 2022-08-08 23:45:04 | apache/arrow-rs | https://api.github.com/repos/apache/arrow-rs | opened | Release Arrow `XXX` (next release after `20.0.0`) | development-process | Follow on from https://github.com/apache/arrow-rs/issues/2172
* Planned Release Candidate: 2022-08-19
* Planned Release and Publish to crates.io: 2022-08-22
Items:
- [] Prepare CHANGELOG and version
- [ ] Create release candidate
- [ ] Release candidate approved
- [ ] Release to crates.io
- [ ] Draft update to DataFusion:
See full list here:
https://github.com/apache/arrow-rs/compare/20.0.0...master
| 1.0 | Release Arrow `XXX` (next release after `20.0.0`) - Follow on from https://github.com/apache/arrow-rs/issues/2172
* Planned Release Candidate: 2022-08-19
* Planned Release and Publish to crates.io: 2022-08-22
Items:
- [] Prepare CHANGELOG and version
- [ ] Create release candidate
- [ ] Release candidate approved
- [ ] Release to crates.io
- [ ] Draft update to DataFusion:
See full list here:
https://github.com/apache/arrow-rs/compare/20.0.0...master
| process | release arrow xxx next release after follow on from planned release candidate planned release and publish to crates io items prepare changelog and version create release candidate release candidate approved release to crates io draft update to datafusion see full list here | 1 |
206,388 | 16,040,260,957 | IssuesEvent | 2021-04-22 06:53:48 | amzn/selling-partner-api-docs | https://api.github.com/repos/amzn/selling-partner-api-docs | opened | Questions about SP-API applications | documentation enhancement request | Dear Amazon,
I have three questions, please help answer them :
1. If I register an application. SPA, on Amazon UK site, can sellers in other regions authorize this application?
2. Can I use the same aws account to get IAM ARN if I Create different SPI applications?
3. How do we build multiple "OAuth URLs" for sellers in different regions if we can only fill in one auothrized "OAuth URL" while registering an application.
Looking forward to your reply | 1.0 | Questions about SP-API applications - Dear Amazon,
I have three questions, please help answer them :
1. If I register an application. SPA, on Amazon UK site, can sellers in other regions authorize this application?
2. Can I use the same aws account to get IAM ARN if I Create different SPI applications?
3. How do we build multiple "OAuth URLs" for sellers in different regions if we can only fill in one auothrized "OAuth URL" while registering an application.
Looking forward to your reply | non_process | questions about sp api applications dear amazon, i have three questions please help answer them : if i register an application spa on amazon uk site can sellers in other regions authorize this application can i use the same aws account to get iam arn if i create different spi applications how do we build multiple oauth urls for sellers in different regions if we can only fill in one auothrized oauth url while registering an application looking forward to your reply | 0 |
2,511 | 5,284,268,663 | IssuesEvent | 2017-02-07 23:43:37 | frc4571/FRC2017Robot | https://api.github.com/repos/frc4571/FRC2017Robot | closed | Determine distance from target based on camera inputs | vision-processing | Useful first steps could be found here:
http://wpilib.screenstepslive.com/s/4485/m/24194/l/288985-identifying-and-processing-the-targets
| 1.0 | Determine distance from target based on camera inputs - Useful first steps could be found here:
http://wpilib.screenstepslive.com/s/4485/m/24194/l/288985-identifying-and-processing-the-targets
| process | determine distance from target based on camera inputs useful first steps could be found here | 1 |
100,384 | 8,737,807,141 | IssuesEvent | 2018-12-12 00:03:07 | equella/Equella | https://api.github.com/repos/equella/Equella | closed | Turning off New UI doesn't work until manual refresh | Ready for Testing Unreleased bug newUI regression | **Describe the bug**
There is a regression in 2018.2 beta where if you turn off the New UI it has no effect until you go to a page other than settings, and press F5 (browser refresh).
In 6.6 Stable, after changing the setting I could click on 'Dashboard' and the New UI would be turned off.
**To Reproduce**
Steps to reproduce the behaviour (assuming you're on a server which already has the new UI enabled):
1. Go to Settings > UI
2. Click the toggle for 'Enable new UI' so that it is *off*
3. Click on the 'Dashboard' menu item - observe that you're still in the new UI
4. Press F5 to refresh your browser - observe that now you in the old UI
**Expected behaviour**
When I arrive at the dashboard in step 3 above I expect to see the system in the old UI.
**Platform:**
- OpenEquella Version: 2018.2-beta
- OS: Ubuntu server, Debian client
- Browser Firefox
**Additional context**
I've confirmed this works in 6.6-Stable.
| 1.0 | Turning off New UI doesn't work until manual refresh - **Describe the bug**
There is a regression in 2018.2 beta where if you turn off the New UI it has no effect until you go to a page other than settings, and press F5 (browser refresh).
In 6.6 Stable, after changing the setting I could click on 'Dashboard' and the New UI would be turned off.
**To Reproduce**
Steps to reproduce the behaviour (assuming you're on a server which already has the new UI enabled):
1. Go to Settings > UI
2. Click the toggle for 'Enable new UI' so that it is *off*
3. Click on the 'Dashboard' menu item - observe that you're still in the new UI
4. Press F5 to refresh your browser - observe that now you in the old UI
**Expected behaviour**
When I arrive at the dashboard in step 3 above I expect to see the system in the old UI.
**Platform:**
- OpenEquella Version: 2018.2-beta
- OS: Ubuntu server, Debian client
- Browser Firefox
**Additional context**
I've confirmed this works in 6.6-Stable.
| non_process | turning off new ui doesn t work until manual refresh describe the bug there is a regression in beta where if you turn off the new ui it has no effect until you go to a page other than settings and press browser refresh in stable after changing the setting i could click on dashboard and the new ui would be turned off to reproduce steps to reproduce the behaviour assuming you re on a server which already has the new ui enabled go to settings ui click the toggle for enable new ui so that it is off click on the dashboard menu item observe that you re still in the new ui press to refresh your browser observe that now you in the old ui expected behaviour when i arrive at the dashboard in step above i expect to see the system in the old ui platform openequella version beta os ubuntu server debian client browser firefox additional context i ve confirmed this works in stable | 0 |
593,770 | 18,016,447,900 | IssuesEvent | 2021-09-16 14:23:33 | EscolaLMS/Courses | https://api.github.com/repos/EscolaLMS/Courses | closed | poc. video hls with ffmpeg | priority high high priority | - [x] task for queue
- [x] save data in topic json attr
- [x] queue
- [x] tests
- [x] endpoint for current progress | 2.0 | poc. video hls with ffmpeg - - [x] task for queue
- [x] save data in topic json attr
- [x] queue
- [x] tests
- [x] endpoint for current progress | non_process | poc video hls with ffmpeg task for queue save data in topic json attr queue tests endpoint for current progress | 0 |
316,618 | 9,652,518,111 | IssuesEvent | 2019-05-18 17:52:33 | Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth | https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth | closed | [LOCALIZATION] | Son of %evil god% | :beetle: bug - localization :scroll: :grey_exclamation: priority medium | **Mod Version**
Master Branch
**Please explain your issue in as much detail as possible:**
Many legion/old gods/necromantic rulers are sons of order/light/life
**Upload screenshots of the problem localization:**
<details>

</details> | 1.0 | [LOCALIZATION] | Son of %evil god% - **Mod Version**
Master Branch
**Please explain your issue in as much detail as possible:**
Many legion/old gods/necromantic rulers are sons of order/light/life
**Upload screenshots of the problem localization:**
<details>

</details> | non_process | son of evil god mod version master branch please explain your issue in as much detail as possible many legion old gods necromantic rulers are sons of order light life upload screenshots of the problem localization | 0 |
305,113 | 9,359,421,144 | IssuesEvent | 2019-04-02 06:48:19 | facebook/create-react-app | https://api.github.com/repos/facebook/create-react-app | closed | Calls to 'console.log' are not allowed | priority: low (ignored issue template) | <tslint.json> It's not useful for git-submodule

I had already set the options for tslint

| 1.0 | Calls to 'console.log' are not allowed - <tslint.json> It's not useful for git-submodule

I had already set the options for tslint

| non_process | calls to console log are not allowed it s not useful for git submodule i had already set the options for tslint | 0 |
263,772 | 8,301,765,038 | IssuesEvent | 2018-09-21 12:35:36 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.galaxus.ch - see bug description | browser-firefox-mobile priority-normal | <!-- @browser: Firefox Mobile 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.1.0; Mobile; rv:64.0) Gecko/64.0 Firefox/64.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://www.galaxus.ch/
**Browser / Version**: Firefox Mobile 64.0
**Operating System**: Android 8.1.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: my account, pinned items, and cart links at top of screen do not work
**Steps to Reproduce**:
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.galaxus.ch - see bug description - <!-- @browser: Firefox Mobile 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.1.0; Mobile; rv:64.0) Gecko/64.0 Firefox/64.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://www.galaxus.ch/
**Browser / Version**: Firefox Mobile 64.0
**Operating System**: Android 8.1.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: my account, pinned items, and cart links at top of screen do not work
**Steps to Reproduce**:
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_process | see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description my account pinned items and cart links at top of screen do not work steps to reproduce from with ❤️ | 0 |
2,971 | 12,906,383,114 | IssuesEvent | 2020-07-15 01:28:27 | wekan/wekan | https://api.github.com/repos/wekan/wekan | reopened | Feature request: if-this-then-that-style support (i.e. recurring cards) | Feature:Automation | Being able to set rules initiating a specific action (i.e. move a card) in case that something else happens (i.e. if card is past due date) would make Wekan very suitable for several professional use-cases. Implementing this would top existing possibilities that, only Odoo partially covers (and which aren't really present in Trello). Typical use cases:
**Recreate a card yearly on 1st June:** i.e. company ABC needs to perform periodic maintainance on a machine to adhere to standards.
**Send mail if card is moved:** i.e. if the above card is recreated, and the safety manger of ABC moves it from TODO to DOING, send automatic maintenance request to company XYZ and a copy to the safety manager.
**Create a card if some other card past due date:** lets assume the recreated card isn't moved to "DONE" within 15 days. There's a problem that needs to be elevated higher up the food chain. So the rule might create a new card on ABC's production directors' board to figure out what's happening.
... usecases such as this can be quite many as you can surely imagine and could act upon external events (i.e. according to ABC's bank API, the company received $10.000.000 or more --> create a card: "party hard").
There are some ways to approach this issue:
1. implement a subset of features directly into Wekan
2. bind wekan with another opensource tool (https://github.com/huginn/huginn comes to my mind) and make these two play together smoother then butter (i.e. create rules from within wekan)
Using the 2nd approach make's things a bit more complex and limited by huginn (or similar), but would also help resolve integration tickets such as #1100 . In case someone would pick this up, I'm willing to build a VPS and offer it to the wekan team on which huginn (or anything else) would be preconfigured in case that would be needed/welcomed. | 1.0 | Feature request: if-this-then-that-style support (i.e. recurring cards) - Being able to set rules initiating a specific action (i.e. move a card) in case that something else happens (i.e. if card is past due date) would make Wekan very suitable for several professional use-cases. Implementing this would top existing possibilities that, only Odoo partially covers (and which aren't really present in Trello). Typical use cases:
**Recreate a card yearly on 1st June:** i.e. company ABC needs to perform periodic maintainance on a machine to adhere to standards.
**Send mail if card is moved:** i.e. if the above card is recreated, and the safety manger of ABC moves it from TODO to DOING, send automatic maintenance request to company XYZ and a copy to the safety manager.
**Create a card if some other card past due date:** lets assume the recreated card isn't moved to "DONE" within 15 days. There's a problem that needs to be elevated higher up the food chain. So the rule might create a new card on ABC's production directors' board to figure out what's happening.
... usecases such as this can be quite many as you can surely imagine and could act upon external events (i.e. according to ABC's bank API, the company received $10.000.000 or more --> create a card: "party hard").
There are some ways to approach this issue:
1. implement a subset of features directly into Wekan
2. bind wekan with another opensource tool (https://github.com/huginn/huginn comes to my mind) and make these two play together smoother then butter (i.e. create rules from within wekan)
Using the 2nd approach make's things a bit more complex and limited by huginn (or similar), but would also help resolve integration tickets such as #1100 . In case someone would pick this up, I'm willing to build a VPS and offer it to the wekan team on which huginn (or anything else) would be preconfigured in case that would be needed/welcomed. | non_process | feature request if this then that style support i e recurring cards being able to set rules initiating a specific action i e move a card in case that something else happens i e if card is past due date would make wekan very suitable for several professional use cases implementing this would top existing possibilities that only odoo partially covers and which aren t really present in trello typical use cases recreate a card yearly on june i e company abc needs to perform periodic maintainance on a machine to adhere to standards send mail if card is moved i e if the above card is recreated and the safety manger of abc moves it from todo to doing send automatic maintenance request to company xyz and a copy to the safety manager create a card if some other card past due date lets assume the recreated card isn t moved to done within days there s a problem that needs to be elevated higher up the food chain so the rule might create a new card on abc s production directors board to figure out what s happening usecases such as this can be quite many as you can surely imagine and could act upon external events i e according to abc s bank api the company received or more create a card party hard there are some ways to approach this issue implement a subset of features directly into wekan bind wekan with another opensource tool comes to my mind and make these two play together smoother then butter i e create rules from within wekan using the approach make s things a bit more complex and limited by huginn or similar but would also help resolve integration tickets such as in case someone would pick this up i m willing to build a vps and offer it to the wekan team on which huginn or anything else would be preconfigured in case that would be needed welcomed | 0 |
153,816 | 24,191,750,852 | IssuesEvent | 2022-09-23 18:20:36 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | a11y: textfield with maxLength and obscureText includes hint in its character count | a: text input framework f: material design a: accessibility customer: fun (g3) has reproducible steps P6 found in release: 1.20 | Internal: b/141386361
If you have a TextField with `obscureText: true` and a `maxLength`, the characters from the `hintText` will be counted in the announcements which announce how many characters the text field contains.
Tested only on TalkBack, haven't checked VoiceOver
Repro instructions:
Build and deploy the code below. When you first focus on the field, it reads "Editing, password, password, Edit box". I think it's an error for it to read password twice.
Enter a character into the text field, then screenreader navigate away. When you screenreader navigate back, it will read "Editing, password, 11 characters, Edit box" even though you only entered one character. Delete the character, navigate away, and navigate back. Now it reads "Editing, password, 8 characters, Edit box". Shouldn't that be 0 characters?
```
import 'package:flutter/material.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'textfield repro',
theme: ThemeData(primarySwatch: Colors.blue),
home: Scaffold(
body: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
Text('some text'),
TextField(
obscureText: true,
enableInteractiveSelection: true,
maxLength: 5,
decoration: InputDecoration(
hintText: 'password',
),
autocorrect: false,
onChanged: (_) {},
),
],
),
),
);
}
}
``` | 1.0 | a11y: textfield with maxLength and obscureText includes hint in its character count - Internal: b/141386361
If you have a TextField with `obscureText: true` and a `maxLength`, the characters from the `hintText` will be counted in the announcements which announce how many characters the text field contains.
Tested only on TalkBack, haven't checked VoiceOver
Repro instructions:
Build and deploy the code below. When you first focus on the field, it reads "Editing, password, password, Edit box". I think it's an error for it to read password twice.
Enter a character into the text field, then screenreader navigate away. When you screenreader navigate back, it will read "Editing, password, 11 characters, Edit box" even though you only entered one character. Delete the character, navigate away, and navigate back. Now it reads "Editing, password, 8 characters, Edit box". Shouldn't that be 0 characters?
```
import 'package:flutter/material.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'textfield repro',
theme: ThemeData(primarySwatch: Colors.blue),
home: Scaffold(
body: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
Text('some text'),
TextField(
obscureText: true,
enableInteractiveSelection: true,
maxLength: 5,
decoration: InputDecoration(
hintText: 'password',
),
autocorrect: false,
onChanged: (_) {},
),
],
),
),
);
}
}
``` | non_process | textfield with maxlength and obscuretext includes hint in its character count internal b if you have a textfield with obscuretext true and a maxlength the characters from the hinttext will be counted in the announcements which announce how many characters the text field contains tested only on talkback haven t checked voiceover repro instructions build and deploy the code below when you first focus on the field it reads editing password password edit box i think it s an error for it to read password twice enter a character into the text field then screenreader navigate away when you screenreader navigate back it will read editing password characters edit box even though you only entered one character delete the character navigate away and navigate back now it reads editing password characters edit box shouldn t that be characters import package flutter material dart void main runapp myapp class myapp extends statelesswidget this widget is the root of your application override widget build buildcontext context return materialapp title textfield repro theme themedata primaryswatch colors blue home scaffold body column mainaxisalignment mainaxisalignment center children text some text textfield obscuretext true enableinteractiveselection true maxlength decoration inputdecoration hinttext password autocorrect false onchanged | 0 |
14,050 | 16,855,186,551 | IssuesEvent | 2021-06-21 05:13:35 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | [Processing] "Warn before executing if parameter CRS's do not match" does not work | Bug Feedback Processing | In the General settings of Processing the `Warn before executing if parameter CRS's do not match` should warn if an algorithm involves many layers and they don't have the same CRS (at least this is my interpretation :) )
But it does not work.
Processing core algorithms handle an automatic on-the-fly conversion but custom scripts or plugin might not and this options is extremely useful.
If the behavior of this option is not the one that I've in mind than sorry for the noise | 1.0 | [Processing] "Warn before executing if parameter CRS's do not match" does not work - In the General settings of Processing the `Warn before executing if parameter CRS's do not match` should warn if an algorithm involves many layers and they don't have the same CRS (at least this is my interpretation :) )
But it does not work.
Processing core algorithms handle an automatic on-the-fly conversion but custom scripts or plugin might not and this options is extremely useful.
If the behavior of this option is not the one that I've in mind than sorry for the noise | process | warn before executing if parameter crs s do not match does not work in the general settings of processing the warn before executing if parameter crs s do not match should warn if an algorithm involves many layers and they don t have the same crs at least this is my interpretation but it does not work processing core algorithms handle an automatic on the fly conversion but custom scripts or plugin might not and this options is extremely useful if the behavior of this option is not the one that i ve in mind than sorry for the noise | 1 |
635,255 | 20,382,797,387 | IssuesEvent | 2022-02-22 01:14:39 | BHoM/BHoM_Engine | https://api.github.com/repos/BHoM/BHoM_Engine | closed | FindFragment doesn't work for interface types | type:bug type:feature priority:low | <!-- PLEASE ENSURE YOU REVIEW THE CONTENT OF EACH ISSUE CAREFULLY, INCLUDING SUBSEQUENT COMMENTS BY YOURSELF OR OTHERS. -->
<!-- IN PARTICULAR PLEASE ENSURE THAT SENSITIVE OR INAPPROPRIATE INFORMATION IS NOT UPLOADED -->
#### Description:
If the type `T` in `FindFragment<T>()` is an interface, FindFragment returns null even if a fragment implementing that interface is in the FragmentSet.
I couldn't find another method that cover that case (e.g. returning all fragment implementing that interface).
| 1.0 | FindFragment doesn't work for interface types - <!-- PLEASE ENSURE YOU REVIEW THE CONTENT OF EACH ISSUE CAREFULLY, INCLUDING SUBSEQUENT COMMENTS BY YOURSELF OR OTHERS. -->
<!-- IN PARTICULAR PLEASE ENSURE THAT SENSITIVE OR INAPPROPRIATE INFORMATION IS NOT UPLOADED -->
#### Description:
If the type `T` in `FindFragment<T>()` is an interface, FindFragment returns null even if a fragment implementing that interface is in the FragmentSet.
I couldn't find another method that cover that case (e.g. returning all fragment implementing that interface).
| non_process | findfragment doesn t work for interface types description if the type t in findfragment is an interface findfragment returns null even if a fragment implementing that interface is in the fragmentset i couldn t find another method that cover that case e g returning all fragment implementing that interface | 0 |
1,302 | 3,151,725,483 | IssuesEvent | 2015-09-16 09:47:49 | well-typed/hackage-security | https://api.github.com/repos/well-typed/hackage-security | opened | Weird transient "invalid hash" error | bug hackage-security | I don't know why this happened, and I cannot reproduce it; but in the course of installing dependencies for some project I got
```
Invalid hash for <repo>/package/string-conversions-0.4.tar.gz
```
halfway the installation. The error did not reappear when re-running `cabal install --only-dependencies` .
| True | Weird transient "invalid hash" error - I don't know why this happened, and I cannot reproduce it; but in the course of installing dependencies for some project I got
```
Invalid hash for <repo>/package/string-conversions-0.4.tar.gz
```
halfway the installation. The error did not reappear when re-running `cabal install --only-dependencies` .
| non_process | weird transient invalid hash error i don t know why this happened and i cannot reproduce it but in the course of installing dependencies for some project i got invalid hash for package string conversions tar gz halfway the installation the error did not reappear when re running cabal install only dependencies | 0 |
20,396 | 27,053,832,199 | IssuesEvent | 2023-02-13 14:57:08 | google/fhir-gateway | https://api.github.com/repos/google/fhir-gateway | closed | Formalize the release process | process P1:must | We need to have a clear recommendation on how to use the access-proxy core for users who develop their own `AccessChecker` plugin. As part of this exercise we should create our own release process, possibly including pushing binary artifacts to a binary repository. The users, should not need to fork the server code. | 1.0 | Formalize the release process - We need to have a clear recommendation on how to use the access-proxy core for users who develop their own `AccessChecker` plugin. As part of this exercise we should create our own release process, possibly including pushing binary artifacts to a binary repository. The users, should not need to fork the server code. | process | formalize the release process we need to have a clear recommendation on how to use the access proxy core for users who develop their own accesschecker plugin as part of this exercise we should create our own release process possibly including pushing binary artifacts to a binary repository the users should not need to fork the server code | 1 |
19,961 | 26,442,004,653 | IssuesEvent | 2023-01-16 01:53:32 | TeamAidemy/ds-paper-summaries | https://api.github.com/repos/TeamAidemy/ds-paper-summaries | opened | Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity | Natural language processing | Lu, Yao, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. “Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity.” In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 8086–98. Dublin, Ireland: Association for Computational Linguistics.
https://aclanthology.org/2022.acl-long.556/
- ACL 2022 の Outstanding Paper
- GPTなどの大規模言語モデルでは、プロンプトに解かせたいタスクの例を数個見せるだけでそのタスクに対応できるようになる "In-context learning" という手法が使える
- しかし、In-context learningではプロンプトに与える例の順番によって、タスク回答の精度が大きく変わってしまうという課題があった
- 本論文は「追加のラベルなしに」「自動的に」例提示の順番を決める手法を提案。
## Abstract
>When primed with only a handful of training samples, very large, pretrained language models such as GPT-3 have shown competitive results when compared to fully-supervised, fine-tuned, large, pretrained language models. We demonstrate that the order in which the samples are provided can make the difference between near state-of-the-art and random guess performance: essentially some permutations are “fantastic” and some not. We analyse this phenomenon in detail, establishing that: it is present across model sizes (even for the largest current models), it is not related to a specific subset of samples, and that a given good permutation for one model is not transferable to another. While one could use a development set to determine which permutations are performant, this would deviate from the true few-shot setting as it requires additional annotated data. Instead, we use the generative nature of language models to construct an artificial development set and based on entropy statistics of the candidate permutations on this set, we identify performant prompts. Our method yields a 13% relative improvement for GPT-family models across eleven different established text classification tasks.
(DeepL翻訳)
ほんの一握りの学習サンプルで呼び出された場合、GPT-3のような非常に大規模で事前学習済みの言語モデルは、完全教師あり、微調整された大規模で事前学習済みの言語モデルと比較して、競争力のある結果を示しています。我々は、サンプルの提供順序によって、最先端技術に近い性能とランダムな推測性能の差が生じることを実証しています。この現象を詳細に分析し、次のことを確認した:モデルサイズに関係なく存在すること(現在の最大モデルでさえ)、サンプルの特定のサブセットに関係しないこと、あるモデルにとって良い順列は他のモデルには移植できないこと。どの順列が良いかを決定するために開発セットを使用することもできますが、これは注釈付きデータを追加する必要があるため、真の少数精鋭の設定から外れてしまいます。その代わりに、我々は言語モデルの生成的性質を利用して人工的な開発セットを構築し、このセット上の順列候補のエントロピー統計に基づき、パフォーマンスの高いプロンプトを特定する。本手法は、11種類の確立されたテキスト分類タスクにおいて、GPTファミリーのモデルに対して13%の相対的な改善をもたらす。
## コード
https://github.com/chicagohai/active-example-selection
## 解決した課題/先行研究との比較
- GPTファミリーに代表されるような大規模な言語モデルにおいては、モデルのパラメータの微調整なしで、プロンプトとして対象タスクの例をいくつか提示すれば新しいタスクに対応できる "In-context Learning" という手法が使える (Brown et al., 2020).
- しかし、プロンプトに提示する例の順番によって、性能が大きく変わってしまうという問題があった。
- 同じデータセットでもこれくらいばらつく↓
[](https://gyazo.com/d98487119ef2e94862580a4c316729c0)
- この課題を解決するために、「追加のラベルなしに」「自動的に」例提示の順番を決める手法を提案。
## 技術・手法のポイント
- まず、様々なモデルやパラメータ数で実験を行い、何が起こっているのかを探求
- モデルが巨大になってもこの問題は解決しない
- 同じ順番で例を与えても、モデルによって精度が出る場合・出ない場合がある
- モデルが同じでも、パラメータ数が変わると、また精度が変わる
- In-context Learningがうまく行えていないときは、予測ラベルに偏りがあることがわかった (**Fig.6**)
[](https://gyazo.com/48fc61df15b21e764b48ab5856f4312c)
- **この偏りが評価指標と使えるのではないか?** と著者らは考えた。
- この仮説検証のため、以下のアプローチをとった
- (i) 学習例をランダムに選択、これらの順序並べ替え全てを候補プロンプトとして使用。
- (ii)すべての候補プロンプトを使用して言語モデルに in-context learningを行わせる。
- (iii) 以下の評価指標を用いランク付け。最適順序を特定する。
## 評価指標
- Global Entropy
- 極端に偏った予測をする候補プロンプトの識別。
- Local Entropy
- 入力に対しての回答の確度が、全ての入力に対して高すぎると、それはそれで怪しいのでは?という発想。
- 文章の分類タスクでGlobal Entropy基準で選んだ候補プロンプトは平均13%, Local Entropy基準で選んだものは平均9.6%の改善が見られた。
- 異なるモデルや異なるタスクにおいても同様のアプローチを取れば、一貫して改善が見られた。
## 残された課題・議論
- Liu et al., 2020では、プロンプトに与える例を適切に選べば順序は関係ないという結論が出されている。この矛盾(のように見える結論)の解釈は解決されているのだろうか?
## 重要な引用
- 大規模言語モデル
- Radford, Alec, Jeff Wu, Rewon Child, D. Luan, Dario Amodei, and Ilya Sutskever. 2019. “Language Models Are Unsupervised Multitask Learners.”
- GPT-2
- Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” 34th Conference on Neural Information Processing Systems.
- GPT-3
- In-context learningについて
- Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” 34th Conference on Neural Information Processing Systems.
- GPT-3の論文
- プロンプトの順序が精度に与える影響について
- Gao, Tianyu, Adam Fisch, and Danqi Chen. 2020. “Making Pre-Trained Language Models Better Few-Shot Learners.” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2012.15723.
- プロンプトに与える例を自動で決める方法
- Liu, Jiachang, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021. “What Makes Good In-Context Examples for GPT-3?” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2101.06804.
- この論文での結論は「順序は関係ない」だった。
- プロンプトの設計の工夫で精度をあげようという仕事
- Schick, Timo, and Hinrich Schütze. 2020. “It’s Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners.” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2009.07118.
- Gao, Tianyu, Adam Fisch, and Danqi Chen. 2020. “Making Pre-Trained Language Models Better Few-Shot Learners.” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2012.15723.
- Shin, Taylor, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. “AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts.” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2010.15980.
- Jiang, Zhengbao, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. “How Can We Know What Language Models Know?” Transactions of the Association for Computational Linguistics 8: 423–38. | 1.0 | Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity - Lu, Yao, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. “Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity.” In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 8086–98. Dublin, Ireland: Association for Computational Linguistics.
https://aclanthology.org/2022.acl-long.556/
- ACL 2022 の Outstanding Paper
- GPTなどの大規模言語モデルでは、プロンプトに解かせたいタスクの例を数個見せるだけでそのタスクに対応できるようになる "In-context learning" という手法が使える
- しかし、In-context learningではプロンプトに与える例の順番によって、タスク回答の精度が大きく変わってしまうという課題があった
- 本論文は「追加のラベルなしに」「自動的に」例提示の順番を決める手法を提案。
## Abstract
>When primed with only a handful of training samples, very large, pretrained language models such as GPT-3 have shown competitive results when compared to fully-supervised, fine-tuned, large, pretrained language models. We demonstrate that the order in which the samples are provided can make the difference between near state-of-the-art and random guess performance: essentially some permutations are “fantastic” and some not. We analyse this phenomenon in detail, establishing that: it is present across model sizes (even for the largest current models), it is not related to a specific subset of samples, and that a given good permutation for one model is not transferable to another. While one could use a development set to determine which permutations are performant, this would deviate from the true few-shot setting as it requires additional annotated data. Instead, we use the generative nature of language models to construct an artificial development set and based on entropy statistics of the candidate permutations on this set, we identify performant prompts. Our method yields a 13% relative improvement for GPT-family models across eleven different established text classification tasks.
(DeepL翻訳)
ほんの一握りの学習サンプルで呼び出された場合、GPT-3のような非常に大規模で事前学習済みの言語モデルは、完全教師あり、微調整された大規模で事前学習済みの言語モデルと比較して、競争力のある結果を示しています。我々は、サンプルの提供順序によって、最先端技術に近い性能とランダムな推測性能の差が生じることを実証しています。この現象を詳細に分析し、次のことを確認した:モデルサイズに関係なく存在すること(現在の最大モデルでさえ)、サンプルの特定のサブセットに関係しないこと、あるモデルにとって良い順列は他のモデルには移植できないこと。どの順列が良いかを決定するために開発セットを使用することもできますが、これは注釈付きデータを追加する必要があるため、真の少数精鋭の設定から外れてしまいます。その代わりに、我々は言語モデルの生成的性質を利用して人工的な開発セットを構築し、このセット上の順列候補のエントロピー統計に基づき、パフォーマンスの高いプロンプトを特定する。本手法は、11種類の確立されたテキスト分類タスクにおいて、GPTファミリーのモデルに対して13%の相対的な改善をもたらす。
## コード
https://github.com/chicagohai/active-example-selection
## 解決した課題/先行研究との比較
- GPTファミリーに代表されるような大規模な言語モデルにおいては、モデルのパラメータの微調整なしで、プロンプトとして対象タスクの例をいくつか提示すれば新しいタスクに対応できる "In-context Learning" という手法が使える (Brown et al., 2020).
- しかし、プロンプトに提示する例の順番によって、性能が大きく変わってしまうという問題があった。
- 同じデータセットでもこれくらいばらつく↓
[](https://gyazo.com/d98487119ef2e94862580a4c316729c0)
- この課題を解決するために、「追加のラベルなしに」「自動的に」例提示の順番を決める手法を提案。
## 技術・手法のポイント
- まず、様々なモデルやパラメータ数で実験を行い、何が起こっているのかを探求
- モデルが巨大になってもこの問題は解決しない
- 同じ順番で例を与えても、モデルによって精度が出る場合・出ない場合がある
- モデルが同じでも、パラメータ数が変わると、また精度が変わる
- In-context Learningがうまく行えていないときは、予測ラベルに偏りがあることがわかった (**Fig.6**)
[](https://gyazo.com/48fc61df15b21e764b48ab5856f4312c)
- **この偏りが評価指標と使えるのではないか?** と著者らは考えた。
- この仮説検証のため、以下のアプローチをとった
- (i) 学習例をランダムに選択、これらの順序並べ替え全てを候補プロンプトとして使用。
- (ii)すべての候補プロンプトを使用して言語モデルに in-context learningを行わせる。
- (iii) 以下の評価指標を用いランク付け。最適順序を特定する。
## 評価指標
- Global Entropy
- 極端に偏った予測をする候補プロンプトの識別。
- Local Entropy
- 入力に対しての回答の確度が、全ての入力に対して高すぎると、それはそれで怪しいのでは?という発想。
- 文章の分類タスクでGlobal Entropy基準で選んだ候補プロンプトは平均13%, Local Entropy基準で選んだものは平均9.6%の改善が見られた。
- 異なるモデルや異なるタスクにおいても同様のアプローチを取れば、一貫して改善が見られた。
## 残された課題・議論
- Liu et al., 2020では、プロンプトに与える例を適切に選べば順序は関係ないという結論が出されている。この矛盾(のように見える結論)の解釈は解決されているのだろうか?
## 重要な引用
- 大規模言語モデル
- Radford, Alec, Jeff Wu, Rewon Child, D. Luan, Dario Amodei, and Ilya Sutskever. 2019. “Language Models Are Unsupervised Multitask Learners.”
- GPT-2
- Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” 34th Conference on Neural Information Processing Systems.
- GPT-3
- In-context learningについて
- Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” 34th Conference on Neural Information Processing Systems.
- GPT-3の論文
- プロンプトの順序が精度に与える影響について
- Gao, Tianyu, Adam Fisch, and Danqi Chen. 2020. “Making Pre-Trained Language Models Better Few-Shot Learners.” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2012.15723.
- プロンプトに与える例を自動で決める方法
- Liu, Jiachang, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021. “What Makes Good In-Context Examples for GPT-3?” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2101.06804.
- この論文での結論は「順序は関係ない」だった。
- プロンプトの設計の工夫で精度をあげようという仕事
- Schick, Timo, and Hinrich Schütze. 2020. “It’s Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners.” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2009.07118.
- Gao, Tianyu, Adam Fisch, and Danqi Chen. 2020. “Making Pre-Trained Language Models Better Few-Shot Learners.” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2012.15723.
- Shin, Taylor, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. “AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts.” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2010.15980.
- Jiang, Zhengbao, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. “How Can We Know What Language Models Know?” Transactions of the Association for Computational Linguistics 8: 423–38. | process | fantastically ordered prompts and where to find them overcoming few shot prompt order sensitivity lu yao max bartolo alastair moore sebastian riedel and pontus stenetorp “fantastically ordered prompts and where to find them overcoming few shot prompt order sensitivity ” in proceedings of the annual meeting of the association for computational linguistics volume long papers – dublin ireland association for computational linguistics acl の outstanding paper gptなどの大規模言語モデルでは、プロンプトに解かせたいタスクの例を数個見せるだけでそのタスクに対応できるようになる in context learning という手法が使える しかし、in context learningではプロンプトに与える例の順番によって、タスク回答の精度が大きく変わってしまうという課題があった 本論文は「追加のラベルなしに」「自動的に」例提示の順番を決める手法を提案。 abstract when primed with only a handful of training samples very large pretrained language models such as gpt have shown competitive results when compared to fully supervised fine tuned large pretrained language models we demonstrate that the order in which the samples are provided can make the difference between near state of the art and random guess performance essentially some permutations are “fantastic” and some not we analyse this phenomenon in detail establishing that it is present across model sizes even for the largest current models it is not related to a specific subset of samples and that a given good permutation for one model is not transferable to another while one could use a development set to determine which permutations are performant this would deviate from the true few shot setting as it requires additional annotated data instead we use the generative nature of language models to construct an artificial development set and based on entropy statistics of the candidate permutations on this set we identify performant prompts our method yields a relative improvement for gpt family models across eleven different established text classification tasks deepl翻訳 ほんの一握りの学習サンプルで呼び出された場合、gpt 、完全教師あり、微調整された大規模で事前学習済みの言語モデルと比較して、競争力のある結果を示しています。我々は、サンプルの提供順序によって、最先端技術に近い性能とランダムな推測性能の差が生じることを実証しています。この現象を詳細に分析し、次のことを確認した:モデルサイズに関係なく存在すること(現在の最大モデルでさえ)、サンプルの特定のサブセットに関係しないこと、あるモデルにとって良い順列は他のモデルには移植できないこと。どの順列が良いかを決定するために開発セットを使用することもできますが、これは注釈付きデータを追加する必要があるため、真の少数精鋭の設定から外れてしまいます。その代わりに、我々は言語モデルの生成的性質を利用して人工的な開発セットを構築し、このセット上の順列候補のエントロピー統計に基づき、パフォーマンスの高いプロンプトを特定する。本手法は、 、 の相対的な改善をもたらす。 コード 解決した課題 先行研究との比較 gptファミリーに代表されるような大規模な言語モデルにおいては、モデルのパラメータの微調整なしで、プロンプトとして対象タスクの例をいくつか提示すれば新しいタスクに対応できる in context learning という手法が使える brown et al しかし、プロンプトに提示する例の順番によって、性能が大きく変わってしまうという問題があった。 同じデータセットでもこれくらいばらつく↓ この課題を解決するために、「追加のラベルなしに」「自動的に」例提示の順番を決める手法を提案。 技術・手法のポイント まず、様々なモデルやパラメータ数で実験を行い、何が起こっているのかを探求 モデルが巨大になってもこの問題は解決しない 同じ順番で例を与えても、モデルによって精度が出る場合・出ない場合がある モデルが同じでも、パラメータ数が変わると、また精度が変わる in context learningがうまく行えていないときは、予測ラベルに偏りがあることがわかった fig この偏りが評価指標と使えるのではないか? と著者らは考えた。 この仮説検証のため、以下のアプローチをとった i 学習例をランダムに選択、これらの順序並べ替え全てを候補プロンプトとして使用。 ii すべての候補プロンプトを使用して言語モデルに in context learningを行わせる。 iii 以下の評価指標を用いランク付け。最適順序を特定する。 評価指標 global entropy 極端に偏った予測をする候補プロンプトの識別。 local entropy 入力に対しての回答の確度が、全ての入力に対して高すぎると、それはそれで怪しいのでは?という発想。 文章の分類タスクでglobal local の改善が見られた。 異なるモデルや異なるタスクにおいても同様のアプローチを取れば、一貫して改善が見られた。 残された課題・議論 liu et al 、プロンプトに与える例を適切に選べば順序は関係ないという結論が出されている。この矛盾(のように見える結論)の解釈は解決されているのだろうか? 重要な引用 大規模言語モデル radford alec jeff wu rewon child d luan dario amodei and ilya sutskever “language models are unsupervised multitask learners ” gpt brown tom b benjamin mann nick ryder melanie subbiah jared kaplan prafulla dhariwal arvind neelakantan et al “language models are few shot learners ” conference on neural information processing systems gpt in context learningについて brown tom b benjamin mann nick ryder melanie subbiah jared kaplan prafulla dhariwal arvind neelakantan et al “language models are few shot learners ” conference on neural information processing systems gpt プロンプトの順序が精度に与える影響について gao tianyu adam fisch and danqi chen “making pre trained language models better few shot learners ” arxiv arxiv プロンプトに与える例を自動で決める方法 liu jiachang dinghan shen yizhe zhang bill dolan lawrence carin and weizhu chen “what makes good in context examples for gpt ” arxiv arxiv この論文での結論は「順序は関係ない」だった。 プロンプトの設計の工夫で精度をあげようという仕事 schick timo and hinrich schütze “it’s not just size that matters small language models are also few shot learners ” arxiv arxiv gao tianyu adam fisch and danqi chen “making pre trained language models better few shot learners ” arxiv arxiv shin taylor yasaman razeghi robert l logan iv eric wallace and sameer singh “autoprompt eliciting knowledge from language models with automatically generated prompts ” arxiv arxiv jiang zhengbao frank f xu jun araki and graham neubig “how can we know what language models know ” transactions of the association for computational linguistics – | 1 |
5,973 | 8,793,118,676 | IssuesEvent | 2018-12-21 18:38:51 | zammad/zammad | https://api.github.com/repos/zammad/zammad | opened | html_sanitizer goes into loop for specific content | bug mail processing verified | <!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 2.8
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
### Expected behavior:
* When fetching E-Mails -even if the E-Mail-Content is causing trouble- Zammad will at some point run into an processing timeout on that Mail, write it to the unprocessable_mail directory and continue working on other E-Mails in order to keep the service working proberly.
### Actual behavior:
* When fetching E-Mails, sometimes specific E-Mails may cause an endless loop, e.g. in the html_sanitizing process, which causes the scheduler to keep on working on one E-Mail.
### Steps to reproduce the behavior:
* have a special E-Mail that causes an endless loop e.g. the html_sanitizer altering the content
Yes I'm sure this is a bug and no feature request or a general question. | 1.0 | html_sanitizer goes into loop for specific content - <!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 2.8
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
### Expected behavior:
* When fetching E-Mails -even if the E-Mail-Content is causing trouble- Zammad will at some point run into an processing timeout on that Mail, write it to the unprocessable_mail directory and continue working on other E-Mails in order to keep the service working proberly.
### Actual behavior:
* When fetching E-Mails, sometimes specific E-Mails may cause an endless loop, e.g. in the html_sanitizing process, which causes the scheduler to keep on working on one E-Mail.
### Steps to reproduce the behavior:
* have a special E-Mail that causes an endless loop e.g. the html_sanitizer altering the content
Yes I'm sure this is a bug and no feature request or a general question. | process | html sanitizer goes into loop for specific content hi there thanks for filing an issue please ensure the following things before creating an issue thank you 🤓 since november we handle all requests except real bugs at our community board full explanation please post feature requests development questions technical questions on the board if you think you hit a bug please continue search existing issues and the changelog md for your issue there might be a solution already make sure to use the latest version of zammad if possible add the log production log file from your system attention make sure no confidential data is in it please write the issue in english don t remove the template otherwise we will close the issue without further comments ask questions about zammad configuration and usage at our mailinglist see note we always do our best unfortunately sometimes there are too many requests and we can t handle everything at once if you want to prioritize escalate your issue you can do so by means of a support contract see the upper textblock will be removed automatically when you submit your issue infos used zammad version installation method source package any operating system any database version any elasticsearch version any browser version any expected behavior when fetching e mails even if the e mail content is causing trouble zammad will at some point run into an processing timeout on that mail write it to the unprocessable mail directory and continue working on other e mails in order to keep the service working proberly actual behavior when fetching e mails sometimes specific e mails may cause an endless loop e g in the html sanitizing process which causes the scheduler to keep on working on one e mail steps to reproduce the behavior have a special e mail that causes an endless loop e g the html sanitizer altering the content yes i m sure this is a bug and no feature request or a general question | 1 |
8,428 | 11,594,494,015 | IssuesEvent | 2020-02-24 15:25:34 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | Obsolete 'GO:0052195 movement on or near other organism involved in symbiotic interaction' and children | multi-species process obsoletion | I will remove ' on or near other organism' from the following terms:
GO:0052195 movement on or near other organism involved in symbiotic interaction'
GO:0052127 movement on or near host
GO:0075230 spore movement on or near host
GO:0075231 modulation of spore movement on or near host
GO:0075233 negative regulation of spore movement on or near host
GO:0075232 positive regulation of spore movement on or near host
GO:0075234 zoospore movement on or near host
GO:0075235 modulation of zoospore movement on or near host
GO:0075237 negative regulation of zoospore movement on or near host
GO:0075236 positive regulation of zoospore movement on or near host | 1.0 | Obsolete 'GO:0052195 movement on or near other organism involved in symbiotic interaction' and children - I will remove ' on or near other organism' from the following terms:
GO:0052195 movement on or near other organism involved in symbiotic interaction'
GO:0052127 movement on or near host
GO:0075230 spore movement on or near host
GO:0075231 modulation of spore movement on or near host
GO:0075233 negative regulation of spore movement on or near host
GO:0075232 positive regulation of spore movement on or near host
GO:0075234 zoospore movement on or near host
GO:0075235 modulation of zoospore movement on or near host
GO:0075237 negative regulation of zoospore movement on or near host
GO:0075236 positive regulation of zoospore movement on or near host | process | obsolete go movement on or near other organism involved in symbiotic interaction and children i will remove on or near other organism from the following terms go movement on or near other organism involved in symbiotic interaction go movement on or near host go spore movement on or near host go modulation of spore movement on or near host go negative regulation of spore movement on or near host go positive regulation of spore movement on or near host go zoospore movement on or near host go modulation of zoospore movement on or near host go negative regulation of zoospore movement on or near host go positive regulation of zoospore movement on or near host | 1 |
2,409 | 5,195,610,681 | IssuesEvent | 2017-01-23 09:59:50 | opentrials/opentrials | https://api.github.com/repos/opentrials/opentrials | closed | Handle clinicaltrials.gov's gender "All" | 4. Ready for Review bug Processors | ClinicalTrials.gov used to use the term `Both` when a trial recruited participants of both genders, but they changed it to `All`. We have to update our processor accordingly. | 1.0 | Handle clinicaltrials.gov's gender "All" - ClinicalTrials.gov used to use the term `Both` when a trial recruited participants of both genders, but they changed it to `All`. We have to update our processor accordingly. | process | handle clinicaltrials gov s gender all clinicaltrials gov used to use the term both when a trial recruited participants of both genders but they changed it to all we have to update our processor accordingly | 1 |
12,327 | 14,589,626,142 | IssuesEvent | 2020-12-19 02:58:04 | pods-framework/pods | https://api.github.com/repos/pods-framework/pods | closed | Multi-Site Media Compatibility | Focus: Other Compatibility Keyword: Puntable | ## Issue Overview
We are using Pods custom post types on a WordPress Multi-Site Network. We are also using a plugin called [Network Media Library](https://github.com/humanmade/network-media-library) that forces all media uploaded from any site to be stored under the main site (blog ID 1). All sites can still access this media, it's just stored in the main site folder and recorded in the main site wp_posts table.
If I am editing a custom Pod post on a site other than the main site (Blog 2 for example), and I use a media field to add a photo to the post, I get a "Invalid Post ID" error after trying to save the custom post. All the information saves besides the media, and the error does not occur if no media is selected - thus the media is the issue.
My assumption is that the Pod plugin is trying to look up the Post ID of the image in the Blog 2 posts table, but the Post ID does not exists there - it exists in the main site Posts table.
I understand why the plugin does this and it makes sense in 99% of cases. I'm looking for advice on how I may alter the plugin to force media fields to point to blog ID 1.
## Expected Behavior
Image record is stored in wp_posts (Blog 1 table)
Create new Pod post in Blog 2
Add photo to Pod post in Blog 2 (photo is stored in Blog 1)
Photo is requested from wp_posts (Blog 1 table)
Post is successfully saved, no "Invalid Post ID" error occurs
## Current Behavior
Image record is stored in wp_posts (Blog 1 table)
Create new Pod post in Blog 2
Add photo to Pod post in Blog 2 (photo is stored in Blog 1)
Photo is requested from **wp_2_posts (Blog 2 table)**
Photo post ID does not exist in wp_2_posts
"Invalid Post ID" error occurs
## Steps to Reproduce (for bugs)
1. Create new Multi-Site network
2. Install Network Media Plugin
3. Install Pod plugin
4. Configure Network Media Plugin to force all media storage to Blog ID 1
5. Create a new blog on the network (Blog ID 2)
6. Create a custom Pod post type with Media Input (on blog 2)
7. Attempt to create and save new custom Pod post on Blog 2, using media/photo that is stored on Blog 1
8. Upon clicking save, should be redirected to error page that says "Invalid Post ID"
## Possible Solution
It's such a specific use case that it probably doesn't need to be accommodated in the plugin.
I would greatly appreciate advice on how I can personally modify the plugin code to force media to always be requested from the Main blog (Blog ID 1). Something like switch_to_blog(1). Just need to know where to implement that in plugin code.
| True | Multi-Site Media Compatibility - ## Issue Overview
We are using Pods custom post types on a WordPress Multi-Site Network. We are also using a plugin called [Network Media Library](https://github.com/humanmade/network-media-library) that forces all media uploaded from any site to be stored under the main site (blog ID 1). All sites can still access this media, it's just stored in the main site folder and recorded in the main site wp_posts table.
If I am editing a custom Pod post on a site other than the main site (Blog 2 for example), and I use a media field to add a photo to the post, I get a "Invalid Post ID" error after trying to save the custom post. All the information saves besides the media, and the error does not occur if no media is selected - thus the media is the issue.
My assumption is that the Pod plugin is trying to look up the Post ID of the image in the Blog 2 posts table, but the Post ID does not exists there - it exists in the main site Posts table.
I understand why the plugin does this and it makes sense in 99% of cases. I'm looking for advice on how I may alter the plugin to force media fields to point to blog ID 1.
## Expected Behavior
Image record is stored in wp_posts (Blog 1 table)
Create new Pod post in Blog 2
Add photo to Pod post in Blog 2 (photo is stored in Blog 1)
Photo is requested from wp_posts (Blog 1 table)
Post is successfully saved, no "Invalid Post ID" error occurs
## Current Behavior
Image record is stored in wp_posts (Blog 1 table)
Create new Pod post in Blog 2
Add photo to Pod post in Blog 2 (photo is stored in Blog 1)
Photo is requested from **wp_2_posts (Blog 2 table)**
Photo post ID does not exist in wp_2_posts
"Invalid Post ID" error occurs
## Steps to Reproduce (for bugs)
1. Create new Multi-Site network
2. Install Network Media Plugin
3. Install Pod plugin
4. Configure Network Media Plugin to force all media storage to Blog ID 1
5. Create a new blog on the network (Blog ID 2)
6. Create a custom Pod post type with Media Input (on blog 2)
7. Attempt to create and save new custom Pod post on Blog 2, using media/photo that is stored on Blog 1
8. Upon clicking save, should be redirected to error page that says "Invalid Post ID"
## Possible Solution
It's such a specific use case that it probably doesn't need to be accommodated in the plugin.
I would greatly appreciate advice on how I can personally modify the plugin code to force media to always be requested from the Main blog (Blog ID 1). Something like switch_to_blog(1). Just need to know where to implement that in plugin code.
| non_process | multi site media compatibility issue overview we are using pods custom post types on a wordpress multi site network we are also using a plugin called that forces all media uploaded from any site to be stored under the main site blog id all sites can still access this media it s just stored in the main site folder and recorded in the main site wp posts table if i am editing a custom pod post on a site other than the main site blog for example and i use a media field to add a photo to the post i get a invalid post id error after trying to save the custom post all the information saves besides the media and the error does not occur if no media is selected thus the media is the issue my assumption is that the pod plugin is trying to look up the post id of the image in the blog posts table but the post id does not exists there it exists in the main site posts table i understand why the plugin does this and it makes sense in of cases i m looking for advice on how i may alter the plugin to force media fields to point to blog id expected behavior image record is stored in wp posts blog table create new pod post in blog add photo to pod post in blog photo is stored in blog photo is requested from wp posts blog table post is successfully saved no invalid post id error occurs current behavior image record is stored in wp posts blog table create new pod post in blog add photo to pod post in blog photo is stored in blog photo is requested from wp posts blog table photo post id does not exist in wp posts invalid post id error occurs steps to reproduce for bugs create new multi site network install network media plugin install pod plugin configure network media plugin to force all media storage to blog id create a new blog on the network blog id create a custom pod post type with media input on blog attempt to create and save new custom pod post on blog using media photo that is stored on blog upon clicking save should be redirected to error page that says invalid post id possible solution it s such a specific use case that it probably doesn t need to be accommodated in the plugin i would greatly appreciate advice on how i can personally modify the plugin code to force media to always be requested from the main blog blog id something like switch to blog just need to know where to implement that in plugin code | 0 |
19,127 | 25,183,386,363 | IssuesEvent | 2022-11-11 15:37:33 | opensearch-project/data-prepper | https://api.github.com/repos/opensearch-project/data-prepper | opened | Extract values from Grok with the correct type | enhancement plugin - processor | **Is your feature request related to a problem? Please describe.**
The `grok` processor currently creates all Event values as strings. For example, when grokking on an Apache HTTP log, all `response` values are strings. This prevents a pipeline author from creating conditional routing expressions which perform comparisons such as `/response < 500`.
**Describe the solution you'd like**
The `grok` processor can have two options to help pipeline authors.
1. Manual configuration of pattern types.
2. Automatic conversion of pattern types for pre-defined patterns.
*Manual configuration*
Provide a configuration that allows the `grok` processor to convert specific patterns. This new configuration - `conversions` - would take a map of patterns to destination types.
For example:
```
grok:
conversions:
INT: integer
NUMBER: decimal
MY_CUSTOM_NUMBER: integer
```
*Automatic configuration*
Provide a setting that allows the `grok` processor to automatically convert specific patterns which it has pre-included. The `grok` processor has some default patterns like `INT`. Most pipeline authors probably want these to automatically get the correct type. The `grok` processor can automatically convert these known patterns.
This would be a change of behavior. So, I propose that this configure be disabled by default, but in a future major version we would enable it.
Thus, to use it in Data Prepper 2.0.
```
grok:
disable_automatic_conversion: false
```
But, perhaps in Data Prepper 3.0, the default value here becomes `false`. So pipeline authors no longer have to specify it.
**Describe alternatives you've considered (Optional)**
Ask pipeline authors to use a casting processor. I think this is still valuable and will create an issue for it. But, it would be nice to have this defined in grok.
| 1.0 | Extract values from Grok with the correct type - **Is your feature request related to a problem? Please describe.**
The `grok` processor currently creates all Event values as strings. For example, when grokking on an Apache HTTP log, all `response` values are strings. This prevents a pipeline author from creating conditional routing expressions which perform comparisons such as `/response < 500`.
**Describe the solution you'd like**
The `grok` processor can have two options to help pipeline authors.
1. Manual configuration of pattern types.
2. Automatic conversion of pattern types for pre-defined patterns.
*Manual configuration*
Provide a configuration that allows the `grok` processor to convert specific patterns. This new configuration - `conversions` - would take a map of patterns to destination types.
For example:
```
grok:
conversions:
INT: integer
NUMBER: decimal
MY_CUSTOM_NUMBER: integer
```
*Automatic configuration*
Provide a setting that allows the `grok` processor to automatically convert specific patterns which it has pre-included. The `grok` processor has some default patterns like `INT`. Most pipeline authors probably want these to automatically get the correct type. The `grok` processor can automatically convert these known patterns.
This would be a change of behavior. So, I propose that this configure be disabled by default, but in a future major version we would enable it.
Thus, to use it in Data Prepper 2.0.
```
grok:
disable_automatic_conversion: false
```
But, perhaps in Data Prepper 3.0, the default value here becomes `false`. So pipeline authors no longer have to specify it.
**Describe alternatives you've considered (Optional)**
Ask pipeline authors to use a casting processor. I think this is still valuable and will create an issue for it. But, it would be nice to have this defined in grok.
| process | extract values from grok with the correct type is your feature request related to a problem please describe the grok processor currently creates all event values as strings for example when grokking on an apache http log all response values are strings this prevents a pipeline author from creating conditional routing expressions which perform comparisons such as response describe the solution you d like the grok processor can have two options to help pipeline authors manual configuration of pattern types automatic conversion of pattern types for pre defined patterns manual configuration provide a configuration that allows the grok processor to convert specific patterns this new configuration conversions would take a map of patterns to destination types for example grok conversions int integer number decimal my custom number integer automatic configuration provide a setting that allows the grok processor to automatically convert specific patterns which it has pre included the grok processor has some default patterns like int most pipeline authors probably want these to automatically get the correct type the grok processor can automatically convert these known patterns this would be a change of behavior so i propose that this configure be disabled by default but in a future major version we would enable it thus to use it in data prepper grok disable automatic conversion false but perhaps in data prepper the default value here becomes false so pipeline authors no longer have to specify it describe alternatives you ve considered optional ask pipeline authors to use a casting processor i think this is still valuable and will create an issue for it but it would be nice to have this defined in grok | 1 |
7,792 | 10,948,796,662 | IssuesEvent | 2019-11-26 09:36:34 | ESMValGroup/ESMValCore | https://api.github.com/repos/ESMValGroup/ESMValCore | opened | Mask_landsea unable to find fx files at diagnostic level | preprocessor | **Describe the bug**
The mask_landsea routine expects the name of the fx files to start by the `var_name` (sftlf_fx_ ... .nc). This is because of the way the keys are defined in a dictionary inside the routine.
If the masking is performed at the diagnostic level, it is not able to find the fx files because they get renamed during the preprocessing according to the output_file convention in the config-developer ( project_... .nc). So it always ends up applying the Natural Earth files.
As a reference, in https://github.com/ESMValGroup/ESMValTool/pull/1288/ , I currently have to link the fx files so that the name starts by the var_name in order to be able to apply the mask. But that workaround looks a bit strange. Could the routine be generalised to accept the fx files even after they get renamed?
| 1.0 | Mask_landsea unable to find fx files at diagnostic level - **Describe the bug**
The mask_landsea routine expects the name of the fx files to start by the `var_name` (sftlf_fx_ ... .nc). This is because of the way the keys are defined in a dictionary inside the routine.
If the masking is performed at the diagnostic level, it is not able to find the fx files because they get renamed during the preprocessing according to the output_file convention in the config-developer ( project_... .nc). So it always ends up applying the Natural Earth files.
As a reference, in https://github.com/ESMValGroup/ESMValTool/pull/1288/ , I currently have to link the fx files so that the name starts by the var_name in order to be able to apply the mask. But that workaround looks a bit strange. Could the routine be generalised to accept the fx files even after they get renamed?
| process | mask landsea unable to find fx files at diagnostic level describe the bug the mask landsea routine expects the name of the fx files to start by the var name sftlf fx nc this is because of the way the keys are defined in a dictionary inside the routine if the masking is performed at the diagnostic level it is not able to find the fx files because they get renamed during the preprocessing according to the output file convention in the config developer project nc so it always ends up applying the natural earth files as a reference in i currently have to link the fx files so that the name starts by the var name in order to be able to apply the mask but that workaround looks a bit strange could the routine be generalised to accept the fx files even after they get renamed | 1 |
2,831 | 5,785,831,221 | IssuesEvent | 2017-05-01 06:44:51 | AllenFang/react-bootstrap-table | https://api.github.com/repos/AllenFang/react-bootstrap-table | closed | DefaultValue is set to '' when the value is false (for custom editor) | enhancement inprocess | I have a custom editor - the values it accepts are boolean - true/false.
In TableEditColumn.js, line #176, there is this line
```javasctipt
defaultValue: fieldValue || '',
```
When the fieldValue happens to be false, defaultValue will get set to ''.
This line should probably be
```javascript
defaultValue : fieldValue === undefined ? '' : fieldValue
``` | 1.0 | DefaultValue is set to '' when the value is false (for custom editor) - I have a custom editor - the values it accepts are boolean - true/false.
In TableEditColumn.js, line #176, there is this line
```javasctipt
defaultValue: fieldValue || '',
```
When the fieldValue happens to be false, defaultValue will get set to ''.
This line should probably be
```javascript
defaultValue : fieldValue === undefined ? '' : fieldValue
``` | process | defaultvalue is set to when the value is false for custom editor i have a custom editor the values it accepts are boolean true false in tableeditcolumn js line there is this line javasctipt defaultvalue fieldvalue when the fieldvalue happens to be false defaultvalue will get set to this line should probably be javascript defaultvalue fieldvalue undefined fieldvalue | 1 |
414,684 | 27,999,092,181 | IssuesEvent | 2023-03-27 10:27:09 | apollographql/router | https://api.github.com/repos/apollographql/router | opened | Coprocessor: subgraph stage: document the need for an `all` key | documentation | [The Coprocessor subgraph stage documentation is off](https://www.apollographql.com/docs/router/customizations/coprocessor/#typical-configuration), the subgraph request and response stage keys differ from the router stage's, and it needs to be under the `all` key. eg:
```yaml
coprocessor:
subgraph:
request:
headers: true
body: true
response:
headers: true
body: true
```
needs to be
```yaml
coprocessor:
subgraph:
all:
request:
headers: true
body: true
response:
headers: true
body: true
``` | 1.0 | Coprocessor: subgraph stage: document the need for an `all` key - [The Coprocessor subgraph stage documentation is off](https://www.apollographql.com/docs/router/customizations/coprocessor/#typical-configuration), the subgraph request and response stage keys differ from the router stage's, and it needs to be under the `all` key. eg:
```yaml
coprocessor:
subgraph:
request:
headers: true
body: true
response:
headers: true
body: true
```
needs to be
```yaml
coprocessor:
subgraph:
all:
request:
headers: true
body: true
response:
headers: true
body: true
``` | non_process | coprocessor subgraph stage document the need for an all key the subgraph request and response stage keys differ from the router stage s and it needs to be under the all key eg yaml coprocessor subgraph request headers true body true response headers true body true needs to be yaml coprocessor subgraph all request headers true body true response headers true body true | 0 |
3,662 | 6,694,649,054 | IssuesEvent | 2017-10-10 03:25:58 | york-region-tpss/stp | https://api.github.com/repos/york-region-tpss/stp | opened | Watering Assignment - Restore Previous Comments and On-hold Numbers | enhancement process workflow | For a new assignment, pull default number from the latest assignment for on-holds and comments, location notes. | 1.0 | Watering Assignment - Restore Previous Comments and On-hold Numbers - For a new assignment, pull default number from the latest assignment for on-holds and comments, location notes. | process | watering assignment restore previous comments and on hold numbers for a new assignment pull default number from the latest assignment for on holds and comments location notes | 1 |
226,779 | 7,522,981,341 | IssuesEvent | 2018-04-12 22:27:54 | vmware/vic | https://api.github.com/repos/vmware/vic | closed | Update vic-machine inspect to support VM-Host Affinity | area/apis area/cli component/vic-machine kind/feature priority/p2 team/lifecycle | The inspect CLI (including inspect config) and API should be updated to return the configured DRS VM Group.
~This work should include returning the name of the configured DRS VM Group, even when that name is automatically determined (i.e., between the completion of #7559 and #7567), to minimize churn.~ Holding off on this, in case #7567 never happens.
This should include end-to-end testing of both the CLI and API, to ensure that the correct value is returned when a group is configured and a sane response is returned when no group is configured. | 1.0 | Update vic-machine inspect to support VM-Host Affinity - The inspect CLI (including inspect config) and API should be updated to return the configured DRS VM Group.
~This work should include returning the name of the configured DRS VM Group, even when that name is automatically determined (i.e., between the completion of #7559 and #7567), to minimize churn.~ Holding off on this, in case #7567 never happens.
This should include end-to-end testing of both the CLI and API, to ensure that the correct value is returned when a group is configured and a sane response is returned when no group is configured. | non_process | update vic machine inspect to support vm host affinity the inspect cli including inspect config and api should be updated to return the configured drs vm group this work should include returning the name of the configured drs vm group even when that name is automatically determined i e between the completion of and to minimize churn holding off on this in case never happens this should include end to end testing of both the cli and api to ensure that the correct value is returned when a group is configured and a sane response is returned when no group is configured | 0 |
99,934 | 4,074,601,386 | IssuesEvent | 2016-05-28 15:26:11 | chartjs/Chart.js | https://api.github.com/repos/chartjs/Chart.js | closed | Calling .destroy() on a animated graph does not stop the graph animation | Category: Bug Help wanted Priority: p1 Version: 2.x | I receive datasets asynchronously and plot them as soon as I can, thus destroying and creating a new chart each time. This lead me to discover that destroying a Chart object through .destroy() does not seem to prevent the animation manager from continuing to animate it.
Here is a [JSBin example](http://jsbin.com/piviri/edit?html,js,output) where the animation is set to last for 8 seconds but the graph gets destroyed after only one. As a visual aid, the page's colour is changed when .destroy() is called. | 1.0 | Calling .destroy() on a animated graph does not stop the graph animation - I receive datasets asynchronously and plot them as soon as I can, thus destroying and creating a new chart each time. This lead me to discover that destroying a Chart object through .destroy() does not seem to prevent the animation manager from continuing to animate it.
Here is a [JSBin example](http://jsbin.com/piviri/edit?html,js,output) where the animation is set to last for 8 seconds but the graph gets destroyed after only one. As a visual aid, the page's colour is changed when .destroy() is called. | non_process | calling destroy on a animated graph does not stop the graph animation i receive datasets asynchronously and plot them as soon as i can thus destroying and creating a new chart each time this lead me to discover that destroying a chart object through destroy does not seem to prevent the animation manager from continuing to animate it here is a where the animation is set to last for seconds but the graph gets destroyed after only one as a visual aid the page s colour is changed when destroy is called | 0 |
3,333 | 6,459,671,118 | IssuesEvent | 2017-08-16 00:36:15 | w3c/payment-method-id | https://api.github.com/repos/w3c/payment-method-id | closed | CR checklist | Process aid | For all Transition Requests, to advance a specification to a new maturity level other than Note, the Working Group:
* [x] must [record the group's decision to request advancement](https://lists.w3.org/Archives/Public/public-payments-wg/2017Jul/0020.html).
* [ ] must obtain Director approval.
* [x] must provide public documentation of all substantive [changes to the technical report](https://github.com/w3c/webpayments-method-identifiers/compare/gh-pages@%7B2016-05-31%7D...gh-pages) since the previous publication.
* [x] must formally address [all issues](https://github.com/w3c/webpayments-method-identifiers/issues?q=is%3Aissue+is%3Aclosed) raised about the document since the previous maturity level.
* [x] must provide public documentation of any Formal Objections (**None**)
* [x] should provide public documentation of [changes](https://github.com/w3c/webpayments-method-identifiers/compare/gh-pages@%7B2016-05-31%7D...gh-pages) that are not substantive.
* [x] should report which, if any, of the Working Group's requirements for this document have changed since the previous step. (**No requirements have changed**)
* [x] should report any changes in dependencies with other groups. (**No dependencies have changed**)
* [x] should provide information about implementations known to the Working Group.
* Chrome (Android and beginning on desktop), Edge, Facebook, Samsung Internet Browser
* See also [Caniuse on Payment Request API](https://caniuse.com/#feat=payment-request)
To publish a Candidate recommendation, in addition to meeting the general requirements for advancement a Working Group:
* [x] must show that the specification has met all Working Group requirements, or explain why the requirements have changed or been deferred, (*No requirements have changed*)
* [Key requirements](https://www.w3.org/2016/09/20-wpwg-minutes#item04) have been met: distributed naming possible (constrained URL syntax), use of URLs to fetch manifest files, small set of standardized short strings for convenience.
* [x] must document changes to dependencies during the development of the specification, . (**No dependencies have changed**)
* [x] must [document](https://w3c.github.io/payment-method-id/reports/implementation.html) how adequate implementation experience will be demonstrated,
* [x] must specify the deadline for comments, which must be at least four weeks after publication, and should be longer for complex documents, (**2017-10-31**)
* [x] must show that the specification has received wide review, and
* Reviews by [TAG](https://github.com/w3ctag/design-reviews/issues/152), [PING](https://lists.w3.org/Archives/Public/public-payments-wg/2017Mar/0089.html), [I18N Core](https://www.w3.org/Search/Mail/Public/search?keywords=i18n&hdr-1-name=subject&hdr-1-query=&index-grp=Public_FULL&index-type=t&type-index=public-payments-wg), and input from W3C management
* [x] may identify features in the document as "at risk". These features may be removed before advancement to Proposed Recommendation without a requirement to publish a new Candidate Recommendation. (**No features at risk**) | 1.0 | CR checklist - For all Transition Requests, to advance a specification to a new maturity level other than Note, the Working Group:
* [x] must [record the group's decision to request advancement](https://lists.w3.org/Archives/Public/public-payments-wg/2017Jul/0020.html).
* [ ] must obtain Director approval.
* [x] must provide public documentation of all substantive [changes to the technical report](https://github.com/w3c/webpayments-method-identifiers/compare/gh-pages@%7B2016-05-31%7D...gh-pages) since the previous publication.
* [x] must formally address [all issues](https://github.com/w3c/webpayments-method-identifiers/issues?q=is%3Aissue+is%3Aclosed) raised about the document since the previous maturity level.
* [x] must provide public documentation of any Formal Objections (**None**)
* [x] should provide public documentation of [changes](https://github.com/w3c/webpayments-method-identifiers/compare/gh-pages@%7B2016-05-31%7D...gh-pages) that are not substantive.
* [x] should report which, if any, of the Working Group's requirements for this document have changed since the previous step. (**No requirements have changed**)
* [x] should report any changes in dependencies with other groups. (**No dependencies have changed**)
* [x] should provide information about implementations known to the Working Group.
* Chrome (Android and beginning on desktop), Edge, Facebook, Samsung Internet Browser
* See also [Caniuse on Payment Request API](https://caniuse.com/#feat=payment-request)
To publish a Candidate recommendation, in addition to meeting the general requirements for advancement a Working Group:
* [x] must show that the specification has met all Working Group requirements, or explain why the requirements have changed or been deferred, (*No requirements have changed*)
* [Key requirements](https://www.w3.org/2016/09/20-wpwg-minutes#item04) have been met: distributed naming possible (constrained URL syntax), use of URLs to fetch manifest files, small set of standardized short strings for convenience.
* [x] must document changes to dependencies during the development of the specification, . (**No dependencies have changed**)
* [x] must [document](https://w3c.github.io/payment-method-id/reports/implementation.html) how adequate implementation experience will be demonstrated,
* [x] must specify the deadline for comments, which must be at least four weeks after publication, and should be longer for complex documents, (**2017-10-31**)
* [x] must show that the specification has received wide review, and
* Reviews by [TAG](https://github.com/w3ctag/design-reviews/issues/152), [PING](https://lists.w3.org/Archives/Public/public-payments-wg/2017Mar/0089.html), [I18N Core](https://www.w3.org/Search/Mail/Public/search?keywords=i18n&hdr-1-name=subject&hdr-1-query=&index-grp=Public_FULL&index-type=t&type-index=public-payments-wg), and input from W3C management
* [x] may identify features in the document as "at risk". These features may be removed before advancement to Proposed Recommendation without a requirement to publish a new Candidate Recommendation. (**No features at risk**) | process | cr checklist for all transition requests to advance a specification to a new maturity level other than note the working group must must obtain director approval must provide public documentation of all substantive since the previous publication must formally address raised about the document since the previous maturity level must provide public documentation of any formal objections none should provide public documentation of that are not substantive should report which if any of the working group s requirements for this document have changed since the previous step no requirements have changed should report any changes in dependencies with other groups no dependencies have changed should provide information about implementations known to the working group chrome android and beginning on desktop edge facebook samsung internet browser see also to publish a candidate recommendation in addition to meeting the general requirements for advancement a working group must show that the specification has met all working group requirements or explain why the requirements have changed or been deferred no requirements have changed have been met distributed naming possible constrained url syntax use of urls to fetch manifest files small set of standardized short strings for convenience must document changes to dependencies during the development of the specification no dependencies have changed must how adequate implementation experience will be demonstrated must specify the deadline for comments which must be at least four weeks after publication and should be longer for complex documents must show that the specification has received wide review and reviews by and input from management may identify features in the document as at risk these features may be removed before advancement to proposed recommendation without a requirement to publish a new candidate recommendation no features at risk | 1 |
19,843 | 26,244,388,562 | IssuesEvent | 2023-01-05 14:10:24 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | Using 'multiprocessing’ in PyTorch on Windows10 got error-``RuntimeError: Couldn’t open shared file mapping: <torch_13684_4004974554>, error code: <0>'' | module: windows module: multiprocessing triaged | ## 🐛 Bug
Hi,
I am currently running a PyTorch code on Windows10 using PyCharm. This code firstly utilised `DataLoader` function (`num_workers’=4) to load training data:
`train_loader = DataLoader(train_dset, batch_size, shuffle=True,
num_workers=4, collate_fn=trim_collate)`
Then, in training process, it utilised a `for’ loop to load training data and train the model:
`for i, (v, norm_bb, q, target, _, _, bb, spa_adj_matrix,
sem_adj_matrix) in enumerate(train_loader):`
**Error:** I got the following error messages when running above `for’ loop:
`0%| | 0/6934 [00:00<?, ?it/s]Traceback (most recent call last):
File "E:\PyTorch_env\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "E:\PyTorch_env\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "E:\PyTorch_env\lib\site-packages\torch\multiprocessing\reductions.py", line 286, in reduce_storage
metadata = storage._share_filename_()
RuntimeError: Couldn't open shared file mapping: <torch_13684_4004974554>, error code: <0>
python-BaseException
Traceback (most recent call last):
File "E:\PyTorch_env\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
python-BaseException
0%| | 0/6934 [00:07<?, ?it/s]`
It seems that there are some issues with the `multiprocessing` function on Windows system. I have tried to put all the codes (including `def train()`) under `if__name__ == '__main__':`, but it didn't work-the same error message still existed.
Could you please let me know if there are any possible solutions for this?
The environment settings:
1. Windows10, PyCharm
2. PyTorch v1.0.1, torchvision v0.2.2, Python 3.7.11
3. One GPU node
Many thanks!
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm | 1.0 | Using 'multiprocessing’ in PyTorch on Windows10 got error-``RuntimeError: Couldn’t open shared file mapping: <torch_13684_4004974554>, error code: <0>'' - ## 🐛 Bug
Hi,
I am currently running a PyTorch code on Windows10 using PyCharm. This code firstly utilised `DataLoader` function (`num_workers’=4) to load training data:
`train_loader = DataLoader(train_dset, batch_size, shuffle=True,
num_workers=4, collate_fn=trim_collate)`
Then, in training process, it utilised a `for’ loop to load training data and train the model:
`for i, (v, norm_bb, q, target, _, _, bb, spa_adj_matrix,
sem_adj_matrix) in enumerate(train_loader):`
**Error:** I got the following error messages when running above `for’ loop:
`0%| | 0/6934 [00:00<?, ?it/s]Traceback (most recent call last):
File "E:\PyTorch_env\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "E:\PyTorch_env\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "E:\PyTorch_env\lib\site-packages\torch\multiprocessing\reductions.py", line 286, in reduce_storage
metadata = storage._share_filename_()
RuntimeError: Couldn't open shared file mapping: <torch_13684_4004974554>, error code: <0>
python-BaseException
Traceback (most recent call last):
File "E:\PyTorch_env\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
python-BaseException
0%| | 0/6934 [00:07<?, ?it/s]`
It seems that there are some issues with the `multiprocessing` function on Windows system. I have tried to put all the codes (including `def train()`) under `if__name__ == '__main__':`, but it didn't work-the same error message still existed.
Could you please let me know if there are any possible solutions for this?
The environment settings:
1. Windows10, PyCharm
2. PyTorch v1.0.1, torchvision v0.2.2, Python 3.7.11
3. One GPU node
Many thanks!
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm | process | using multiprocessing’ in pytorch on got error runtimeerror couldn’t open shared file mapping error code 🐛 bug hi i am currently running a pytorch code on using pycharm this code firstly utilised dataloader function num workers’ to load training data train loader dataloader train dset batch size shuffle true num workers collate fn trim collate then in training process it utilised a for’ loop to load training data and train the model for i v norm bb q target bb spa adj matrix sem adj matrix in enumerate train loader error i got the following error messages when running above for’ loop traceback most recent call last file e pytorch env lib multiprocessing popen spawn py line in init reduction dump process obj to child file e pytorch env lib multiprocessing reduction py line in dump forkingpickler file protocol dump obj file e pytorch env lib site packages torch multiprocessing reductions py line in reduce storage metadata storage share filename runtimeerror couldn t open shared file mapping error code python baseexception traceback most recent call last file e pytorch env lib multiprocessing spawn py line in main self reduction pickle load from parent eoferror ran out of input python baseexception it seems that there are some issues with the multiprocessing function on windows system i have tried to put all the codes including def train under if name main but it didn t work the same error message still existed could you please let me know if there are any possible solutions for this the environment settings pycharm pytorch torchvision python one gpu node many thanks cc mszhanyi nbcsm | 1 |
20,650 | 27,326,174,814 | IssuesEvent | 2023-02-25 03:20:22 | cse442-at-ub/project_s23-team-infinity | https://api.github.com/repos/cse442-at-ub/project_s23-team-infinity | opened | As a user who wants be able to create an account on the calendar, I want the application to remember my account so I can't log back in using my username and password. | Processing Task Sprint 1 | 1. Go to calender sign up page.
2. Sign up with username and password.
3. Sign out after creating account and close application.
4. Open application and sign in using username and password.
5. Make sure account exist. | 1.0 | As a user who wants be able to create an account on the calendar, I want the application to remember my account so I can't log back in using my username and password. - 1. Go to calender sign up page.
2. Sign up with username and password.
3. Sign out after creating account and close application.
4. Open application and sign in using username and password.
5. Make sure account exist. | process | as a user who wants be able to create an account on the calendar i want the application to remember my account so i can t log back in using my username and password go to calender sign up page sign up with username and password sign out after creating account and close application open application and sign in using username and password make sure account exist | 1 |
129,078 | 27,389,394,708 | IssuesEvent | 2023-02-28 15:20:39 | WoWManiaUK/Redemption | https://api.github.com/repos/WoWManiaUK/Redemption | closed | [NPC] Brazie Getz - Missing | Fixed on PTR - Tester Confirmed Code Change | **Links:**
https://wowpedia.fandom.com/wiki/Brazie_Getz
https://www.wow-mania.com/armory/?npc=37904
**What is Happening:**
This NPC is not spawning after Alliance kill Deathbringer Saurfang
**What Should happen:**
Brazie Getz is a [gnome](https://wowpedia.fandom.com/wiki/Gnome) [general goods](https://wowpedia.fandom.com/wiki/General_goods) vendor found at [Deathbringer's Rise](https://wowpedia.fandom.com/wiki/Deathbringer%27s_Rise) in [Icecrown Citadel](https://wowpedia.fandom.com/wiki/Icecrown_Citadel_(instance)) after the Alliance defeats [Deathbringer Saurfang](https://wowpedia.fandom.com/wiki/Deathbringer_Saurfang).
| 1.0 | [NPC] Brazie Getz - Missing - **Links:**
https://wowpedia.fandom.com/wiki/Brazie_Getz
https://www.wow-mania.com/armory/?npc=37904
**What is Happening:**
This NPC is not spawning after Alliance kill Deathbringer Saurfang
**What Should happen:**
Brazie Getz is a [gnome](https://wowpedia.fandom.com/wiki/Gnome) [general goods](https://wowpedia.fandom.com/wiki/General_goods) vendor found at [Deathbringer's Rise](https://wowpedia.fandom.com/wiki/Deathbringer%27s_Rise) in [Icecrown Citadel](https://wowpedia.fandom.com/wiki/Icecrown_Citadel_(instance)) after the Alliance defeats [Deathbringer Saurfang](https://wowpedia.fandom.com/wiki/Deathbringer_Saurfang).
| non_process | brazie getz missing links what is happening this npc is not spawning after alliance kill deathbringer saurfang what should happen brazie getz is a vendor found at in after the alliance defeats | 0 |
101,470 | 31,162,546,992 | IssuesEvent | 2023-08-16 17:04:42 | RTXteam/RTX-KG2 | https://api.github.com/repos/RTXteam/RTX-KG2 | closed | apparent bug in kg_json_to_tsv.py | verify this fix in next kg2 build high priority | In `kg_json_to_tsv.py`, this line of code doesn't seem right:
https://github.com/RTXteam/RTX-KG2/blob/1514437f5b81f1b457458a16d1775e8d081222a5/kg_json_to_tsv.py#L297
The construction `not "None"`, i.e., the predicate for the `if` statement, always evaluates to `False`. I think maybe what was intended was:
```
value = (edge['qualified_predicate'] if edge['qualified_predicate'] is not "None" else edge['source_predicate'])
``` | 1.0 | apparent bug in kg_json_to_tsv.py - In `kg_json_to_tsv.py`, this line of code doesn't seem right:
https://github.com/RTXteam/RTX-KG2/blob/1514437f5b81f1b457458a16d1775e8d081222a5/kg_json_to_tsv.py#L297
The construction `not "None"`, i.e., the predicate for the `if` statement, always evaluates to `False`. I think maybe what was intended was:
```
value = (edge['qualified_predicate'] if edge['qualified_predicate'] is not "None" else edge['source_predicate'])
``` | non_process | apparent bug in kg json to tsv py in kg json to tsv py this line of code doesn t seem right the construction not none i e the predicate for the if statement always evaluates to false i think maybe what was intended was value edge if edge is not none else edge | 0 |
39,674 | 2,858,303,232 | IssuesEvent | 2015-06-03 01:08:24 | Ombridride/minetest-minetestforfun-server | https://api.github.com/repos/Ombridride/minetest-minetestforfun-server | closed | Problem with glasses | Modding Priority@Medium | # Texture problem
The mod *framedglass* has a problem with it's texture, if you don't have activated *connected_glass* in your minetest graphical configuration (or minetest.conf), you see ugly grey nodes wich replaced the original texture...
### Without *connected_glass*

### With *connected_glass*

The problem isn't here with another mod like the default glasses or with an another mod like *stained_glass* mod
# Connected option missing
The "clean_glass" doesn't have the connected option, we need to add it ! | 1.0 | Problem with glasses - # Texture problem
The mod *framedglass* has a problem with it's texture, if you don't have activated *connected_glass* in your minetest graphical configuration (or minetest.conf), you see ugly grey nodes wich replaced the original texture...
### Without *connected_glass*

### With *connected_glass*

The problem isn't here with another mod like the default glasses or with an another mod like *stained_glass* mod
# Connected option missing
The "clean_glass" doesn't have the connected option, we need to add it ! | non_process | problem with glasses texture problem the mod framedglass has a problem with it s texture if you don t have activated connected glass in your minetest graphical configuration or minetest conf you see ugly grey nodes wich replaced the original texture without connected glass with connected glass the problem isn t here with another mod like the default glasses or with an another mod like stained glass mod connected option missing the clean glass doesn t have the connected option we need to add it | 0 |
7,272 | 10,425,815,674 | IssuesEvent | 2019-09-16 16:09:38 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | Merge terms for plant defense responses | multi-species process | Hello,
Continuing the work on the multi-species processes, aiming to remove terms that represent processes for which there is no evidence exists, I would like to merge the following terms:
TERM | MERGE INTO
-- | --
GO:0052445 modulation by organism of defense-related salicylic acid-mediated signal transduction pathway in other organism involved in symbiotic interaction | GO:0052081 modulation by symbiont of defense-related host salicylic acid-mediated signal transduction pathway
GO:0052252 negative regulation by organism of defense-related salicylic acid-mediated signal transduction pathway of other organism involved in symbiotic interaction | GO:0052003 negative regulation by symbiont of defense-related host salicylic acid-mediated signal transduction pathway
GO:0052270 positive regulation by organism of defense-related salicylic acid-mediated signal transduction pathway in other organism involved in symbiotic interaction | GO:0052072 positive regulation by symbiont of defense-related host salicylic acid-mediated signal transduction pathway
GO:0052443 modulation by organism of defense-related jasmonic acid-mediated signal transduction pathway in other organism involved in symbiotic interaction | GO:0052082 modulation by symbiont of defense-related host jasmonic acid-mediated signal transduction pathway
GO:0052267 negative regulation by organism of defense-related jasmonic acid-mediated signal transduction pathway in other organism involved in symbiotic interaction | GO:0052069 negative regulation by symbiont of defense-related host jasmonic acid-mediated signal transduction pathway
GO:0052271 positive regulation by organism of defense-related jasmonic acid-mediated signal transduction pathway in other organism involved in symbiotic interaction | GO:0052073 positive regulation by symbiont of defense-related host jasmonic acid-mediated signal transduction pathway
GO:0052279 modulation by organism of ethylene-mediated defense response of other organism involved in symbiotic interaction | GO:0052084 modulation by symbiont of host ethylene-mediated defense response
GO:0052254 negative regulation by organism of ethylene-mediated defense response of other organism involved in symbiotic interaction | GO:0052005 negative regulation by symbiont of host ethylene-mediated defense response
GO:0052274 positive regulation by organism of ethylene-mediated defense response of other organism involved in symbiotic interaction | GO:0052076 positive regulation by symbiont of host ethylene-mediated defense response
GO:0052268 negative regulation by organism of defense-related ethylene-mediated signal transduction pathway in other organism involved in symbiotic interaction | GO:0052005 negative regulation by symbiont of host ethylene-mediated defense response
GO:0052269 positive regulation by organism of defense-related ethylene-mediated signal transduction pathway in other organism involved in symbiotic interaction | GO:0052076 positive regulation by symbiont of host ethylene-mediated defense response
The reason is that these are plant-specific defense responses, and AFAIK there is no evidence that processes such as 'positive regulation by host of symbiont ethylene-mediated defense response - I dont think plants are ever symbionts ??
@tberardini Is this a reasonable proposal ?
Thanks, Pascale | 1.0 | Merge terms for plant defense responses - Hello,
Continuing the work on the multi-species processes, aiming to remove terms that represent processes for which there is no evidence exists, I would like to merge the following terms:
TERM | MERGE INTO
-- | --
GO:0052445 modulation by organism of defense-related salicylic acid-mediated signal transduction pathway in other organism involved in symbiotic interaction | GO:0052081 modulation by symbiont of defense-related host salicylic acid-mediated signal transduction pathway
GO:0052252 negative regulation by organism of defense-related salicylic acid-mediated signal transduction pathway of other organism involved in symbiotic interaction | GO:0052003 negative regulation by symbiont of defense-related host salicylic acid-mediated signal transduction pathway
GO:0052270 positive regulation by organism of defense-related salicylic acid-mediated signal transduction pathway in other organism involved in symbiotic interaction | GO:0052072 positive regulation by symbiont of defense-related host salicylic acid-mediated signal transduction pathway
GO:0052443 modulation by organism of defense-related jasmonic acid-mediated signal transduction pathway in other organism involved in symbiotic interaction | GO:0052082 modulation by symbiont of defense-related host jasmonic acid-mediated signal transduction pathway
GO:0052267 negative regulation by organism of defense-related jasmonic acid-mediated signal transduction pathway in other organism involved in symbiotic interaction | GO:0052069 negative regulation by symbiont of defense-related host jasmonic acid-mediated signal transduction pathway
GO:0052271 positive regulation by organism of defense-related jasmonic acid-mediated signal transduction pathway in other organism involved in symbiotic interaction | GO:0052073 positive regulation by symbiont of defense-related host jasmonic acid-mediated signal transduction pathway
GO:0052279 modulation by organism of ethylene-mediated defense response of other organism involved in symbiotic interaction | GO:0052084 modulation by symbiont of host ethylene-mediated defense response
GO:0052254 negative regulation by organism of ethylene-mediated defense response of other organism involved in symbiotic interaction | GO:0052005 negative regulation by symbiont of host ethylene-mediated defense response
GO:0052274 positive regulation by organism of ethylene-mediated defense response of other organism involved in symbiotic interaction | GO:0052076 positive regulation by symbiont of host ethylene-mediated defense response
GO:0052268 negative regulation by organism of defense-related ethylene-mediated signal transduction pathway in other organism involved in symbiotic interaction | GO:0052005 negative regulation by symbiont of host ethylene-mediated defense response
GO:0052269 positive regulation by organism of defense-related ethylene-mediated signal transduction pathway in other organism involved in symbiotic interaction | GO:0052076 positive regulation by symbiont of host ethylene-mediated defense response
The reason is that these are plant-specific defense responses, and AFAIK there is no evidence that processes such as 'positive regulation by host of symbiont ethylene-mediated defense response - I dont think plants are ever symbionts ??
@tberardini Is this a reasonable proposal ?
Thanks, Pascale | process | merge terms for plant defense responses hello continuing the work on the multi species processes aiming to remove terms that represent processes for which there is no evidence exists i would like to merge the following terms term merge into go modulation by organism of defense related salicylic acid mediated signal transduction pathway in other organism involved in symbiotic interaction go modulation by symbiont of defense related host salicylic acid mediated signal transduction pathway go negative regulation by organism of defense related salicylic acid mediated signal transduction pathway of other organism involved in symbiotic interaction go negative regulation by symbiont of defense related host salicylic acid mediated signal transduction pathway go positive regulation by organism of defense related salicylic acid mediated signal transduction pathway in other organism involved in symbiotic interaction go positive regulation by symbiont of defense related host salicylic acid mediated signal transduction pathway go modulation by organism of defense related jasmonic acid mediated signal transduction pathway in other organism involved in symbiotic interaction go modulation by symbiont of defense related host jasmonic acid mediated signal transduction pathway go negative regulation by organism of defense related jasmonic acid mediated signal transduction pathway in other organism involved in symbiotic interaction go negative regulation by symbiont of defense related host jasmonic acid mediated signal transduction pathway go positive regulation by organism of defense related jasmonic acid mediated signal transduction pathway in other organism involved in symbiotic interaction go positive regulation by symbiont of defense related host jasmonic acid mediated signal transduction pathway go modulation by organism of ethylene mediated defense response of other organism involved in symbiotic interaction go modulation by symbiont of host ethylene mediated defense response go negative regulation by organism of ethylene mediated defense response of other organism involved in symbiotic interaction go negative regulation by symbiont of host ethylene mediated defense response go positive regulation by organism of ethylene mediated defense response of other organism involved in symbiotic interaction go positive regulation by symbiont of host ethylene mediated defense response go negative regulation by organism of defense related ethylene mediated signal transduction pathway in other organism involved in symbiotic interaction go negative regulation by symbiont of host ethylene mediated defense response go positive regulation by organism of defense related ethylene mediated signal transduction pathway in other organism involved in symbiotic interaction go positive regulation by symbiont of host ethylene mediated defense response the reason is that these are plant specific defense responses and afaik there is no evidence that processes such as positive regulation by host of symbiont ethylene mediated defense response i dont think plants are ever symbionts tberardini is this a reasonable proposal thanks pascale | 1 |
330,154 | 10,035,486,148 | IssuesEvent | 2019-07-18 08:28:07 | JorySchiebroek/brofish | https://api.github.com/repos/JorySchiebroek/brofish | closed | Wrong dependencies and package versions installed | PRIORITY bug | Running `npm install` shows a lot of dependency errors.
Most importantly, when running `ng serve` a bunch of errors show up:
```shell
joryschiebroek@Jorys-MacBook-Pro brofish (development) $ ng serve
10% building 4/4 modules 0 activeℹ 「wds」: Project is running at http://localhost:4200/webpack-dev-server/
ℹ 「wds」: webpack output is served from /
ℹ 「wds」: 404s will fallback to //index.html
chunk {main} main.js, main.js.map (main) 2.03 kB [initial] [rendered]
chunk {polyfills} polyfills.js, polyfills.js.map (polyfills) 122 kB [initial] [rendered]
chunk {polyfills-es5} polyfills-es5.js, polyfills-es5.js.map (polyfills-es5) 392 kB [initial] [rendered]
chunk {runtime} runtime.js, runtime.js.map (runtime) 6.09 kB [entry] [rendered]
chunk {styles} styles.js, styles.js.map (styles) 1020 kB [initial] [rendered]
chunk {vendor} vendor.js, vendor.js.map (vendor) 338 kB [initial] [rendered]
Date: 2019-07-16T22:36:35.701Z - Hash: e358c1d6f78527fef3e4 - Time: 12287ms
ERROR in error TS1149: File name '/Users/joryschiebroek/Projects/angular/brofish/src/app/services/achievements/achievements.ts' differs from already included file name '/Users/joryschiebroek/Projects/angular/brofish/src/app/services/achievements/ACHIEVEMENTS.ts' only in casing.
../node_modules/@angular/flex-layout/core/typings/match-media/mock/mock-match-media.d.ts:25:15 - error TS2416: Property '_registry' in type 'MockMatchMedia' is not assignable to the same property in base type 'MatchMedia'.
Type 'Map<string, MockMediaQueryList>' is not assignable to type 'Map<string, MediaQueryList>'.
Type 'MockMediaQueryList' is missing the following properties from type 'MediaQueryList': onchange, addEventListener, removeEventListener, dispatchEvent
25 protected _registry: Map<string, MockMediaQueryList>;
~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/mock/mock-match-media.d.ts:63:22 - error TS2420: Class 'MockMediaQueryList' incorrectly implements interface 'MediaQueryList'.
Type 'MockMediaQueryList' is missing the following properties from type 'MediaQueryList': onchange, addEventListener, removeEventListener, dispatchEvent
63 export declare class MockMediaQueryList implements MediaQueryList {
~~~~~~~~~~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/mock/mock-match-media.d.ts:80:27 - error TS2304: Cannot find name 'MediaQueryListListener'.
80 addListener(listener: MediaQueryListListener): void;
~~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/mock/mock-match-media.d.ts:82:23 - error TS2304: Cannot find name 'MediaQueryListListener'.
82 removeListener(_: MediaQueryListListener): void;
~~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/server-match-media.d.ts:11:22 - error TS2420: Class 'ServerMediaQueryList' incorrectly implements interface 'MediaQueryList'.
Type 'ServerMediaQueryList' is missing the following properties from type 'MediaQueryList': onchange, addEventListener, removeEventListener, dispatchEvent
11 export declare class ServerMediaQueryList implements MediaQueryList {
~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/server-match-media.d.ts:28:27 - error TS2304: Cannot find name 'MediaQueryListListener'.
28 addListener(listener: MediaQueryListListener): void;
~~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/server-match-media.d.ts:30:23 - error TS2304: Cannot find name 'MediaQueryListListener'.
30 removeListener(_: MediaQueryListListener): void;
~~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/server-match-media.d.ts:42:15 - error TS2416: Property '_registry' in type 'ServerMatchMedia' is not assignable to the same property in base type 'MatchMedia'.
Type 'Map<string, ServerMediaQueryList>' is not assignable to type 'Map<string, MediaQueryList>'.
Type 'ServerMediaQueryList' is missing the following properties from type 'MediaQueryList': onchange, addEventListener, removeEventListener, dispatchEvent
42 protected _registry: Map<string, ServerMediaQueryList>;
~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/server-match-media.d.ts:54:15 - error TS2416: Property '_buildMQL' in type 'ServerMatchMedia' is not assignable to the same property in base type 'MatchMedia'.
Type '(query: string) => ServerMediaQueryList' is not assignable to type '(query: string) => MediaQueryList'.
Type 'ServerMediaQueryList' is not assignable to type 'MediaQueryList'.
54 protected _buildMQL(query: string): ServerMediaQueryList;
~~~~~~~~~
app/components/dashboard/add/add-dialog.component.ts:26:6 - error TS2554: Expected 2 arguments, but got 1.
26 @ViewChild(MatStepper) stepper: MatStepper;
~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439 }): any;
~~~~~
An argument for 'opts' was not provided.
app/components/dashboard/dashboard.component.ts:38:6 - error TS2554: Expected 2 arguments, but got 1.
38 @ViewChild('profileList') profileList: any;
~~~~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439m }): any;
~~~~~
An argument for 'opts' was not provided.
app/components/dashboard/progress/progress.component.ts:16:6 - error TS2554: Expected 2 arguments, but got 1.
16 @ViewChild('chart') chart: ChartComponent;
~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439 }): any;
~~~~~
An argument for 'opts' was not provided.
app/components/database/database.component.ts:46:6 - error TS2554: Expected 2 arguments, but got 1.
46 @ViewChild('menuTrigger') menuTrigger: MatMenuTrigger;
~~~~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439 }): any;
~~~~~
An argument for 'opts' was not provided.
app/components/database/database.component.ts:47:6 - error TS2554: Expected 2 arguments, but got 1.
47 @ViewChild(MatPaginator) paginator: MatPaginator;
~~~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439 }): any;
~~~~~
An argument for 'opts' was not provided.
app/components/database/database.component.ts:48:6 - error TS2554: Expected 2 arguments, but got 1.
48 @ViewChild(MatSort) sort: MatSort;
~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439 }): any;
~~~~~
An argument for 'opts' was not provided.
app/components/login/login.component.ts:160:6 - error TS2554: Expected 2 arguments, but got 1.
160 @ViewChild(MatStepper) stepper: MatStepper;
~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439 }): any;
~~~~~
An argument for 'opts' was not provided.
app/components/settings/personal/add-license/add-license.dialog.ts:14:6 - error TS2554: Expected 2 arguments, but got 1.
14 @ViewChild(MatStepper) stepper: MatStepper;
~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439 }): any;
~~~~~
An argument for 'opts' was not provided.
** Angular Live Development Server is listening on localhost:4200, open your browser on http://localhost:4200/ **
ℹ 「wdm」: Failed to compile.
/Users/joryschiebroek/Projects/angular/brofish/node_modules/watchpack/node_modules/chokidar/lib/fsevents-handler.js:28
return (new fsevents(path)).on('fsevent', callback).start();
^
TypeError: fsevents is not a constructor
at createFSEventsInstance (/Users/joryschiebroek/Projects/angular/brofish/node_modules/watchpack/node_modules/chokidar/lib/fsevents-handler.js:28:11)
at setFSEventsListener (/Users/joryschiebroek/Projects/angular/brofish/node_modules/watchpack/node_modules/chokidar/lib/fsevents-handler.js:82:16)
at FSWatcher.FsEventsHandler._watchWithFsEvents (/Users/joryschiebroek/Projects/angular/brofish/node_modules/watchpack/node_modules/chokidar/lib/fsevents-handler.js:252:16)
at FSWatcher.<anonymous> (/Users/joryschiebroek/Projects/angular/brofish/node_modules/watchpack/node_modules/chokidar/lib/fsevents-handler.js:386:25)
at LOOP (fs.js:1565:14)
at process._tickCallback (internal/process/next_tick.js:61:11)
joryschiebroek@Jorys-MacBook-Pro brofish (development) $
``` | 1.0 | Wrong dependencies and package versions installed - Running `npm install` shows a lot of dependency errors.
Most importantly, when running `ng serve` a bunch of errors show up:
```shell
joryschiebroek@Jorys-MacBook-Pro brofish (development) $ ng serve
10% building 4/4 modules 0 activeℹ 「wds」: Project is running at http://localhost:4200/webpack-dev-server/
ℹ 「wds」: webpack output is served from /
ℹ 「wds」: 404s will fallback to //index.html
chunk {main} main.js, main.js.map (main) 2.03 kB [initial] [rendered]
chunk {polyfills} polyfills.js, polyfills.js.map (polyfills) 122 kB [initial] [rendered]
chunk {polyfills-es5} polyfills-es5.js, polyfills-es5.js.map (polyfills-es5) 392 kB [initial] [rendered]
chunk {runtime} runtime.js, runtime.js.map (runtime) 6.09 kB [entry] [rendered]
chunk {styles} styles.js, styles.js.map (styles) 1020 kB [initial] [rendered]
chunk {vendor} vendor.js, vendor.js.map (vendor) 338 kB [initial] [rendered]
Date: 2019-07-16T22:36:35.701Z - Hash: e358c1d6f78527fef3e4 - Time: 12287ms
ERROR in error TS1149: File name '/Users/joryschiebroek/Projects/angular/brofish/src/app/services/achievements/achievements.ts' differs from already included file name '/Users/joryschiebroek/Projects/angular/brofish/src/app/services/achievements/ACHIEVEMENTS.ts' only in casing.
../node_modules/@angular/flex-layout/core/typings/match-media/mock/mock-match-media.d.ts:25:15 - error TS2416: Property '_registry' in type 'MockMatchMedia' is not assignable to the same property in base type 'MatchMedia'.
Type 'Map<string, MockMediaQueryList>' is not assignable to type 'Map<string, MediaQueryList>'.
Type 'MockMediaQueryList' is missing the following properties from type 'MediaQueryList': onchange, addEventListener, removeEventListener, dispatchEvent
25 protected _registry: Map<string, MockMediaQueryList>;
~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/mock/mock-match-media.d.ts:63:22 - error TS2420: Class 'MockMediaQueryList' incorrectly implements interface 'MediaQueryList'.
Type 'MockMediaQueryList' is missing the following properties from type 'MediaQueryList': onchange, addEventListener, removeEventListener, dispatchEvent
63 export declare class MockMediaQueryList implements MediaQueryList {
~~~~~~~~~~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/mock/mock-match-media.d.ts:80:27 - error TS2304: Cannot find name 'MediaQueryListListener'.
80 addListener(listener: MediaQueryListListener): void;
~~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/mock/mock-match-media.d.ts:82:23 - error TS2304: Cannot find name 'MediaQueryListListener'.
82 removeListener(_: MediaQueryListListener): void;
~~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/server-match-media.d.ts:11:22 - error TS2420: Class 'ServerMediaQueryList' incorrectly implements interface 'MediaQueryList'.
Type 'ServerMediaQueryList' is missing the following properties from type 'MediaQueryList': onchange, addEventListener, removeEventListener, dispatchEvent
11 export declare class ServerMediaQueryList implements MediaQueryList {
~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/server-match-media.d.ts:28:27 - error TS2304: Cannot find name 'MediaQueryListListener'.
28 addListener(listener: MediaQueryListListener): void;
~~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/server-match-media.d.ts:30:23 - error TS2304: Cannot find name 'MediaQueryListListener'.
30 removeListener(_: MediaQueryListListener): void;
~~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/server-match-media.d.ts:42:15 - error TS2416: Property '_registry' in type 'ServerMatchMedia' is not assignable to the same property in base type 'MatchMedia'.
Type 'Map<string, ServerMediaQueryList>' is not assignable to type 'Map<string, MediaQueryList>'.
Type 'ServerMediaQueryList' is missing the following properties from type 'MediaQueryList': onchange, addEventListener, removeEventListener, dispatchEvent
42 protected _registry: Map<string, ServerMediaQueryList>;
~~~~~~~~~
../node_modules/@angular/flex-layout/core/typings/match-media/server-match-media.d.ts:54:15 - error TS2416: Property '_buildMQL' in type 'ServerMatchMedia' is not assignable to the same property in base type 'MatchMedia'.
Type '(query: string) => ServerMediaQueryList' is not assignable to type '(query: string) => MediaQueryList'.
Type 'ServerMediaQueryList' is not assignable to type 'MediaQueryList'.
54 protected _buildMQL(query: string): ServerMediaQueryList;
~~~~~~~~~
app/components/dashboard/add/add-dialog.component.ts:26:6 - error TS2554: Expected 2 arguments, but got 1.
26 @ViewChild(MatStepper) stepper: MatStepper;
~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439 }): any;
~~~~~
An argument for 'opts' was not provided.
app/components/dashboard/dashboard.component.ts:38:6 - error TS2554: Expected 2 arguments, but got 1.
38 @ViewChild('profileList') profileList: any;
~~~~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439m }): any;
~~~~~
An argument for 'opts' was not provided.
app/components/dashboard/progress/progress.component.ts:16:6 - error TS2554: Expected 2 arguments, but got 1.
16 @ViewChild('chart') chart: ChartComponent;
~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439 }): any;
~~~~~
An argument for 'opts' was not provided.
app/components/database/database.component.ts:46:6 - error TS2554: Expected 2 arguments, but got 1.
46 @ViewChild('menuTrigger') menuTrigger: MatMenuTrigger;
~~~~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439 }): any;
~~~~~
An argument for 'opts' was not provided.
app/components/database/database.component.ts:47:6 - error TS2554: Expected 2 arguments, but got 1.
47 @ViewChild(MatPaginator) paginator: MatPaginator;
~~~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439 }): any;
~~~~~
An argument for 'opts' was not provided.
app/components/database/database.component.ts:48:6 - error TS2554: Expected 2 arguments, but got 1.
48 @ViewChild(MatSort) sort: MatSort;
~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439 }): any;
~~~~~
An argument for 'opts' was not provided.
app/components/login/login.component.ts:160:6 - error TS2554: Expected 2 arguments, but got 1.
160 @ViewChild(MatStepper) stepper: MatStepper;
~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439 }): any;
~~~~~
An argument for 'opts' was not provided.
app/components/settings/personal/add-license/add-license.dialog.ts:14:6 - error TS2554: Expected 2 arguments, but got 1.
14 @ViewChild(MatStepper) stepper: MatStepper;
~~~~~~~~~~~~~~~~~~~~~
../node_modules/@angular/core/core.d.ts:8436:47
8436 (selector: Type<any> | Function | string, opts: {
~~~~~~~
8437 read?: any;
~~~~~~~~~~~~~~~~~~~
8438 static: boolean;
~~~~~~~~~~~~~~~~~~~~~~~~
8439 }): any;
~~~~~
An argument for 'opts' was not provided.
** Angular Live Development Server is listening on localhost:4200, open your browser on http://localhost:4200/ **
ℹ 「wdm」: Failed to compile.
/Users/joryschiebroek/Projects/angular/brofish/node_modules/watchpack/node_modules/chokidar/lib/fsevents-handler.js:28
return (new fsevents(path)).on('fsevent', callback).start();
^
TypeError: fsevents is not a constructor
at createFSEventsInstance (/Users/joryschiebroek/Projects/angular/brofish/node_modules/watchpack/node_modules/chokidar/lib/fsevents-handler.js:28:11)
at setFSEventsListener (/Users/joryschiebroek/Projects/angular/brofish/node_modules/watchpack/node_modules/chokidar/lib/fsevents-handler.js:82:16)
at FSWatcher.FsEventsHandler._watchWithFsEvents (/Users/joryschiebroek/Projects/angular/brofish/node_modules/watchpack/node_modules/chokidar/lib/fsevents-handler.js:252:16)
at FSWatcher.<anonymous> (/Users/joryschiebroek/Projects/angular/brofish/node_modules/watchpack/node_modules/chokidar/lib/fsevents-handler.js:386:25)
at LOOP (fs.js:1565:14)
at process._tickCallback (internal/process/next_tick.js:61:11)
joryschiebroek@Jorys-MacBook-Pro brofish (development) $
``` | non_process | wrong dependencies and package versions installed running npm install shows a lot of dependency errors most importantly when running ng serve a bunch of errors show up shell joryschiebroek jorys macbook pro brofish development ng serve building modules activeℹ 「wds」 project is running at ℹ 「wds」 webpack output is served from ℹ 「wds」 will fallback to index html chunk main main js main js map main kb chunk polyfills polyfills js polyfills js map polyfills kb chunk polyfills polyfills js polyfills js map polyfills kb chunk runtime runtime js runtime js map runtime kb chunk styles styles js styles js map styles kb chunk vendor vendor js vendor js map vendor kb date hash time error in error file name users joryschiebroek projects angular brofish src app services achievements achievements ts differs from already included file name users joryschiebroek projects angular brofish src app services achievements achievements ts only in casing node modules angular flex layout core typings match media mock mock match media d ts error property registry in type mockmatchmedia is not assignable to the same property in base type matchmedia type map is not assignable to type map type mockmediaquerylist is missing the following properties from type mediaquerylist onchange addeventlistener removeeventlistener dispatchevent protected registry map node modules angular flex layout core typings match media mock mock match media d ts error class mockmediaquerylist incorrectly implements interface mediaquerylist type mockmediaquerylist is missing the following properties from type mediaquerylist onchange addeventlistener removeeventlistener dispatchevent export declare class mockmediaquerylist implements mediaquerylist node modules angular flex layout core typings match media mock mock match media d ts error cannot find name mediaquerylistlistener addlistener listener mediaquerylistlistener void node modules angular flex layout core typings match media mock mock match media d ts error cannot find name mediaquerylistlistener removelistener mediaquerylistlistener void node modules angular flex layout core typings match media server match media d ts error class servermediaquerylist incorrectly implements interface mediaquerylist type servermediaquerylist is missing the following properties from type mediaquerylist onchange addeventlistener removeeventlistener dispatchevent export declare class servermediaquerylist implements mediaquerylist node modules angular flex layout core typings match media server match media d ts error cannot find name mediaquerylistlistener addlistener listener mediaquerylistlistener void node modules angular flex layout core typings match media server match media d ts error cannot find name mediaquerylistlistener removelistener mediaquerylistlistener void node modules angular flex layout core typings match media server match media d ts error property registry in type servermatchmedia is not assignable to the same property in base type matchmedia type map is not assignable to type map type servermediaquerylist is missing the following properties from type mediaquerylist onchange addeventlistener removeeventlistener dispatchevent protected registry map node modules angular flex layout core typings match media server match media d ts error property buildmql in type servermatchmedia is not assignable to the same property in base type matchmedia type query string servermediaquerylist is not assignable to type query string mediaquerylist type servermediaquerylist is not assignable to type mediaquerylist protected buildmql query string servermediaquerylist app components dashboard add add dialog component ts error expected arguments but got viewchild matstepper stepper matstepper node modules angular core core d ts selector type function string opts read any static boolean any an argument for opts was not provided app components dashboard dashboard component ts error expected arguments but got viewchild profilelist profilelist any node modules angular core core d ts selector type function string opts read any static boolean any an argument for opts was not provided app components dashboard progress progress component ts error expected arguments but got viewchild chart chart chartcomponent node modules angular core core d ts selector type function string opts read any static boolean any an argument for opts was not provided app components database database component ts error expected arguments but got viewchild menutrigger menutrigger matmenutrigger node modules angular core core d ts selector type function string opts read any static boolean any an argument for opts was not provided app components database database component ts error expected arguments but got viewchild matpaginator paginator matpaginator node modules angular core core d ts selector type function string opts read any static boolean any an argument for opts was not provided app components database database component ts error expected arguments but got viewchild matsort sort matsort node modules angular core core d ts selector type function string opts read any static boolean any an argument for opts was not provided app components login login component ts error expected arguments but got viewchild matstepper stepper matstepper node modules angular core core d ts selector type function string opts read any static boolean any an argument for opts was not provided app components settings personal add license add license dialog ts error expected arguments but got viewchild matstepper stepper matstepper node modules angular core core d ts selector type function string opts read any static boolean any an argument for opts was not provided angular live development server is listening on localhost open your browser on ℹ 「wdm」 failed to compile users joryschiebroek projects angular brofish node modules watchpack node modules chokidar lib fsevents handler js return new fsevents path on fsevent callback start typeerror fsevents is not a constructor at createfseventsinstance users joryschiebroek projects angular brofish node modules watchpack node modules chokidar lib fsevents handler js at setfseventslistener users joryschiebroek projects angular brofish node modules watchpack node modules chokidar lib fsevents handler js at fswatcher fseventshandler watchwithfsevents users joryschiebroek projects angular brofish node modules watchpack node modules chokidar lib fsevents handler js at fswatcher users joryschiebroek projects angular brofish node modules watchpack node modules chokidar lib fsevents handler js at loop fs js at process tickcallback internal process next tick js joryschiebroek jorys macbook pro brofish development | 0 |
282,241 | 24,459,348,404 | IssuesEvent | 2022-10-07 09:41:31 | elastic/e2e-testing | https://api.github.com/repos/elastic/e2e-testing | opened | Flaky Test [Initializing / End-To-End Tests / fleet_debian_10_amd64_fleet_mode / Revoking the enrollment token for the agent – Fleet Mode] | flaky-test ci-reported | ## Flaky Test
* **Test Name:** `Initializing / End-To-End Tests / fleet_debian_10_amd64_fleet_mode / Revoking the enrollment token for the agent – Fleet Mode`
* **Artifact Link:** https://beats-ci.elastic.co/blue/organizations/jenkins/e2e-tests%2Fe2e-testing-mbp%2F7.17/detail/7.17/1253/
* **PR:** None
* **Commit:** 30c169fe337554f77e3a3d2c910a27714669fc56
### Error details
```
Step the enrollment token is revoked
```
| 1.0 | Flaky Test [Initializing / End-To-End Tests / fleet_debian_10_amd64_fleet_mode / Revoking the enrollment token for the agent – Fleet Mode] - ## Flaky Test
* **Test Name:** `Initializing / End-To-End Tests / fleet_debian_10_amd64_fleet_mode / Revoking the enrollment token for the agent – Fleet Mode`
* **Artifact Link:** https://beats-ci.elastic.co/blue/organizations/jenkins/e2e-tests%2Fe2e-testing-mbp%2F7.17/detail/7.17/1253/
* **PR:** None
* **Commit:** 30c169fe337554f77e3a3d2c910a27714669fc56
### Error details
```
Step the enrollment token is revoked
```
| non_process | flaky test flaky test test name initializing end to end tests fleet debian fleet mode revoking the enrollment token for the agent – fleet mode artifact link pr none commit error details step the enrollment token is revoked | 0 |
435,784 | 30,519,295,455 | IssuesEvent | 2023-07-19 06:52:18 | ansys/pyfluent | https://api.github.com/repos/ansys/pyfluent | opened | Update flobject.py (remove dunders changes 2) | documentation | ### Description of the modifications
Include latest remove dunders changes.
### Useful links and references
_No response_ | 1.0 | Update flobject.py (remove dunders changes 2) - ### Description of the modifications
Include latest remove dunders changes.
### Useful links and references
_No response_ | non_process | update flobject py remove dunders changes description of the modifications include latest remove dunders changes useful links and references no response | 0 |
15,404 | 19,596,016,174 | IssuesEvent | 2022-01-05 17:56:38 | jgraley/inferno-cpp2v | https://api.github.com/repos/jgraley/inferno-cpp2v | closed | `OffEndLink` fixes | Constraint Processing | - `OffEndLink` -> `OffEndXLink`
In the future, this will be a legit value in solutions (see [comment](https://github.com/jgraley/inferno-cpp2v/issues/467#issuecomment-1005435563)).
So: check that `OffEndXLink` only comes out of successors in the knowledge, and add " != OffEndXLink" clauses to symbolics that use successor (needs #468).
Then: don't check for it in `SymbolicConstraint::Test()`
But: this will go wrong until #469 is done - use this as a proving handle for that issue. | 1.0 | `OffEndLink` fixes - - `OffEndLink` -> `OffEndXLink`
In the future, this will be a legit value in solutions (see [comment](https://github.com/jgraley/inferno-cpp2v/issues/467#issuecomment-1005435563)).
So: check that `OffEndXLink` only comes out of successors in the knowledge, and add " != OffEndXLink" clauses to symbolics that use successor (needs #468).
Then: don't check for it in `SymbolicConstraint::Test()`
But: this will go wrong until #469 is done - use this as a proving handle for that issue. | process | offendlink fixes offendlink offendxlink in the future this will be a legit value in solutions see so check that offendxlink only comes out of successors in the knowledge and add offendxlink clauses to symbolics that use successor needs then don t check for it in symbolicconstraint test but this will go wrong until is done use this as a proving handle for that issue | 1 |
14,119 | 17,016,209,835 | IssuesEvent | 2021-07-02 12:25:51 | damb/scdetect | https://api.github.com/repos/damb/scdetect | closed | Handle changing sampling rates | enhancement processing | Currently, concrete implementations of `WaveformProcessor` do not handle changing sampling rates (w.r.t. records).
OT: Note that the corresponding Seiscomp core implementation i.e. [Seiscomp::Processing::WaveformProcessor](https://github.com/SeisComP/common/blob/85be802a3f7d983a255fbf26d3cbb387085ad4d1/libs/seiscomp/processing/waveformprocessor.h#L42) doesn't handle changing sampling rates, too (at least for the time being). | 1.0 | Handle changing sampling rates - Currently, concrete implementations of `WaveformProcessor` do not handle changing sampling rates (w.r.t. records).
OT: Note that the corresponding Seiscomp core implementation i.e. [Seiscomp::Processing::WaveformProcessor](https://github.com/SeisComP/common/blob/85be802a3f7d983a255fbf26d3cbb387085ad4d1/libs/seiscomp/processing/waveformprocessor.h#L42) doesn't handle changing sampling rates, too (at least for the time being). | process | handle changing sampling rates currently concrete implementations of waveformprocessor do not handle changing sampling rates w r t records ot note that the corresponding seiscomp core implementation i e doesn t handle changing sampling rates too at least for the time being | 1 |
74,534 | 25,156,436,373 | IssuesEvent | 2022-11-10 13:54:02 | DependencyTrack/dependency-track | https://api.github.com/repos/DependencyTrack/dependency-track | opened | Mysterious "No value specified for parameter X " JDODataStoreException/PSQLException | defect in triage | ### Current Behavior
Mysterious "No value specified for parameter X " JDODataStoreException/PSQLException
I have now seen 3 occasions of the below problem, maybe someone recognizes this.
We're running DT 4.6.2 in docker in ECS with postgres RDS instances as datastore and a volume for the DT /data directory.
Every week updates are installed on the Docker host OS (EC2, Debian 11). This usually triggers an EC2 restart, and thus a docker restart.
The containers come up succesfully and users can browse DT from the UI. Also most API calls work fine, but not all of them.
Example 1: Everything seems to be working, except /api/v1/lookup?name=bla&version=xyz:
```
2022-11-02 21:06:23,000 ERROR [GlobalExceptionHandler] Uncaught internal server error
javax.jdo.JDODataStoreException: Iteration request failed : SELECT 'alpine.model.Team' AS "DN_TYPE",,"A1"."ID""A1"."ID","A1"."NAME""A1"."NAME" AS "NUCORDER0",,"A1"."UUID","A1"."UUID", FROM FROM "APIKEYS_TEAMS" "A0" INNER JOIN "TEAM" "A1" ON "A0"."TEAM_ID" = "A1"."ID" WHERE INNER JOIN "TEAM" "A1" ON "A0"."TEAM_ID" = "A1"."ID" WHERE EXISTS (SELECT 'alpine.model.ApiKey''alpine.model.ApiKey' AS "DN_TYPE", AS "DN_TYPE","A0_SUB"."ID""A0_SUB"."ID" AS "DN_APPID" FROM "APIKEY" "A0_SUB""APIKEY" "A0_SUB" WHERE "A0_SUB"."APIKEY" = ? AND "A0"."APIKEY_ID" = "A0_SUB"."ID""A0_SUB"."APIKEY" = ? AND "A0"."APIKEY_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"EXISTS (SELECT 'alpine.model.ApiKey''alpine.model.ApiKey' AS "DN_TYPE", AS "DN_TYPE","A0_SUB"."ID""A0_SUB"."ID" AS "DN_APPID" FROM "APIKEY" "A0_SUB""APIKEY" "A0_SUB" WHERE "A0_SUB"."APIKEY" = ? AND "A0"."APIKEY_ID" = "A0_SUB"."ID""A0_SUB"."APIKEY" = ? AND "A0"."APIKEY_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"
at org.datanucleus.api.jdo.JDOAdapter.getJDOExceptionForNucleusException(JDOAdapter.java:605)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:456)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:276)
at alpine.persistence.AlpineQueryManager.getApiKey(AlpineQueryManager.java:102)
at alpine.server.auth.ApiKeyAuthenticationService.authenticate(ApiKeyAuthenticationService.java:67)
...
Caused by: org.postgresql.util.PSQLException: No value specified for parameter 2.
at org.postgresql.core.v3.SimpleParameterList.checkAllParametersSet(SimpleParameterList.java:284)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:340)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:496)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:413)
```
This code is in the Alpine framework and about retrieving an APIKey. However the query contains only 1 parameter:
https://github.com/stevespringett/Alpine/blob/157958bfe7fa5ad95e653667f7ce3282aed23af1/alpine-infra/src/main/java/alpine/persistence/AlpineQueryManager.java#L100-L104
Which should be translated by datanucleus into 2 paramaters in the SQL query seen above.
So it's a mystery how parameter 1 can be set, but paramter 2 can be empty.
I spent some time digging around, but couldn't explain this.
So decided to take a professional approach: restart docker (so both containers).
This solved the problem.
Example 2: Everything seems to be working, except /api/v1/project/<uuid>:
```
curl -X 'GET' \
'https://dependencytrack-eindhoven.isaac.nl/api/v1/project/yy' \
-H 'accept: application/json' \
-H 'X-Api-Key: xx'
Uncaught internal server error
```
Exception
```
2022-11-08 00:00:02,614 [] ERROR [alpine.server.resources.GlobalExceptionHandler] Uncaught internal server error
javax.jdo.JDODataStoreException: Iteration request failed : SELECT 'alpine.model.Team''alpine.model.Team' AS "DN_TYPE","A1"."ID""A1"."ID","A1"."NAME""A1"."NAME" AS "NUCORDER0",,"A1"."UUID","A0"
."PROJECT_ID""A0"."PROJECT_ID" FROM FROM "PROJECT_ACCESS_TEAMS" "A0" INNER JOIN "TEAM" "A1" ON "A0"."TEAM_ID" = "A1"."ID" WHERE INNER JOIN "TEAM" "A1" ON "A0"."TEAM_ID" = "A1"."ID" WHERE EXIS
TS (SELECT 'org.dependencytrack.model.Project''org.dependencytrack.model.Project' AS "DN_TYPE" AS "DN_TYPE",,"A0_SUB"."ID" AS "DN_APPID" FROM FROM "PROJECT" "A0_SUB""PROJECT" "A0_SUB" WHERE "A
0_SUB"."UUID" = ? AND "A0"."PROJECT_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"EXISTS (SELECT 'org.dependencytrack.model.Project''org.dependencytrack.model.Project' AS "DN_TYPE" AS "DN_TYPE",,"A0
_SUB"."ID" AS "DN_APPID" FROM FROM "PROJECT" "A0_SUB""PROJECT" "A0_SUB" WHERE "A0_SUB"."UUID" = ? AND "A0"."PROJECT_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"
at org.datanucleus.api.jdo.JDOAdapter.getJDOExceptionForNucleusException(JDOAdapter.java:605)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:456)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:276)
at alpine.persistence.AbstractAlpineQueryManager.getObjectByUuid(AbstractAlpineQueryManager.java:549)
at alpine.persistence.AbstractAlpineQueryManager.getObjectByUuid(AbstractAlpineQueryManager.java:563)
at org.dependencytrack.resources.v1.ProjectResource.getProject(ProjectResource.java:113)
...
Caused by: org.postgresql.util.PSQLException: No value specified for parameter 2.
at org.postgresql.core.v3.SimpleParameterList.checkAllParametersSet(SimpleParameterList.java:284)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:340)
```
Again this doesn't make sense as there is only 1 paramater uuid, and again parameter 2 is considered empty?
Restarted docker, problem solved.
Example 3: Everything seems to be working, except /api/v1/projects/lookup?name=valentijn&version=master:
```
curl -X 'GET' \
'https://deptrack/api/v1/project/lookup?name=some%2Fmiddleware+%28java%29&version=master' \
-H 'accept: application/json' \
-H 'X-Api-Key: xx'
Uncaught internal server error
```
Exception
```
2022-11-10 09:19:35,979 [] ERROR [alpine.server.resources.GlobalExceptionHandler] Uncaught internal server error
javax.jdo.JDODataStoreException: Iteration request failed : SELECT 'org.dependencytrack.model.Tag' AS "DN_TYPE","A1"."ID","A1"."NAME" AS "NUCORDER0","A0"."PROJECT_ID" FROM "PROJECTS_TAGS" "
A0" INNER JOIN "TAG" "A1" ON "A0"."TAG_ID" = "A1"."ID" WHERE EXISTS (SELECT 'org.dependencytrack.model.Project' AS "DN_TYPE","A0_SUB"."ID" AS "DN_APPID" FROM "PROJECT" "A0_SUB" WHERE "A0_SU
B"."NAME" = ? AND "A0_SUB"."VERSION" = ? AND "A0"."PROJECT_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"'org.dependencytrack.model.Tag' AS "DN_TYPE","A1"."ID","A1"."NAME" AS "NUCORDER0","A0"."P
ROJECT_ID","A1"."NAME" AS "NUCORDER0" FROM "PROJECTS_TAGS" "A0" INNER JOIN "TAG" "A1" ON "A0"."TAG_ID" = "A1"."ID" WHERE EXISTS (SELECT 'org.dependencytrack.model.Project' AS "DN_TYPE","A0_
SUB"."ID" AS "DN_APPID" FROM "PROJECT" "A0_SUB" WHERE "A0_SUB"."NAME" = ? AND "A0_SUB"."VERSION" = ? AND "A0"."PROJECT_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"
at org.datanucleus.api.jdo.JDOAdapter.getJDOExceptionForNucleusException(JDOAdapter.java:605)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:456)
at org.datanucleus.api.jdo.JDOQuery.executeWithMap(JDOQuery.java:331)
at org.dependencytrack.persistence.ProjectQueryManager.getProject(ProjectQueryManager.java:198)
at org.dependencytrack.persistence.QueryManager.getProject(QueryManager.java:340)
at org.dependencytrack.resources.v1.ProjectResource.getProject(ProjectResource.java:145)
...
Caused by: org.postgresql.util.PSQLException: No value specified for parameter 3.
at org.postgresql.core.v3.SimpleParameterList.checkAllParametersSet(SimpleParameterList.java:284)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:340)
```
Parameter 3 is the version. Which is set and should be present in the query as it's mandatory. If I remove the version from the url, it works.
DT says project cannot be found. But I haven't checked if that is the actual query returning nothing, or an early exit because the version is mandatory.
Anyway, I already knew a restart would probably fix it, so I gathered some logs and restarted docker again.
Anyone has any ideas / suggestions?
I realize this might not be a bug in DT nor in Alpine, but maybe others are seeing the same or have some suggestions.
It is not a resourcing issue as this instance is mostly idle with only 1 nightly job to upload 100 BOMs. And a nightly job that retrieves all projects to load them into backstage.
It seems that once the instance is running OK, the problem stays away and it only occurs after a restart.
Valentijn
### Steps to Reproduce
see above
### Expected Behavior
see above
### Dependency-Track Version
4.6.2
### Dependency-Track Distribution
Container Image
### Database Server
PostgreSQL
### Database Server Version
13.7
### Browser
N/A
### Checklist
- [X] I have read and understand the [contributing guidelines](https://github.com/DependencyTrack/dependency-track/blob/master/CONTRIBUTING.md#filing-issues)
- [X] I have checked the [existing issues](https://github.com/DependencyTrack/dependency-track/issues) for whether this defect was already reported | 1.0 | Mysterious "No value specified for parameter X " JDODataStoreException/PSQLException - ### Current Behavior
Mysterious "No value specified for parameter X " JDODataStoreException/PSQLException
I have now seen 3 occasions of the below problem, maybe someone recognizes this.
We're running DT 4.6.2 in docker in ECS with postgres RDS instances as datastore and a volume for the DT /data directory.
Every week updates are installed on the Docker host OS (EC2, Debian 11). This usually triggers an EC2 restart, and thus a docker restart.
The containers come up succesfully and users can browse DT from the UI. Also most API calls work fine, but not all of them.
Example 1: Everything seems to be working, except /api/v1/lookup?name=bla&version=xyz:
```
2022-11-02 21:06:23,000 ERROR [GlobalExceptionHandler] Uncaught internal server error
javax.jdo.JDODataStoreException: Iteration request failed : SELECT 'alpine.model.Team' AS "DN_TYPE",,"A1"."ID""A1"."ID","A1"."NAME""A1"."NAME" AS "NUCORDER0",,"A1"."UUID","A1"."UUID", FROM FROM "APIKEYS_TEAMS" "A0" INNER JOIN "TEAM" "A1" ON "A0"."TEAM_ID" = "A1"."ID" WHERE INNER JOIN "TEAM" "A1" ON "A0"."TEAM_ID" = "A1"."ID" WHERE EXISTS (SELECT 'alpine.model.ApiKey''alpine.model.ApiKey' AS "DN_TYPE", AS "DN_TYPE","A0_SUB"."ID""A0_SUB"."ID" AS "DN_APPID" FROM "APIKEY" "A0_SUB""APIKEY" "A0_SUB" WHERE "A0_SUB"."APIKEY" = ? AND "A0"."APIKEY_ID" = "A0_SUB"."ID""A0_SUB"."APIKEY" = ? AND "A0"."APIKEY_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"EXISTS (SELECT 'alpine.model.ApiKey''alpine.model.ApiKey' AS "DN_TYPE", AS "DN_TYPE","A0_SUB"."ID""A0_SUB"."ID" AS "DN_APPID" FROM "APIKEY" "A0_SUB""APIKEY" "A0_SUB" WHERE "A0_SUB"."APIKEY" = ? AND "A0"."APIKEY_ID" = "A0_SUB"."ID""A0_SUB"."APIKEY" = ? AND "A0"."APIKEY_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"
at org.datanucleus.api.jdo.JDOAdapter.getJDOExceptionForNucleusException(JDOAdapter.java:605)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:456)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:276)
at alpine.persistence.AlpineQueryManager.getApiKey(AlpineQueryManager.java:102)
at alpine.server.auth.ApiKeyAuthenticationService.authenticate(ApiKeyAuthenticationService.java:67)
...
Caused by: org.postgresql.util.PSQLException: No value specified for parameter 2.
at org.postgresql.core.v3.SimpleParameterList.checkAllParametersSet(SimpleParameterList.java:284)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:340)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:496)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:413)
```
This code is in the Alpine framework and about retrieving an APIKey. However the query contains only 1 parameter:
https://github.com/stevespringett/Alpine/blob/157958bfe7fa5ad95e653667f7ce3282aed23af1/alpine-infra/src/main/java/alpine/persistence/AlpineQueryManager.java#L100-L104
Which should be translated by datanucleus into 2 paramaters in the SQL query seen above.
So it's a mystery how parameter 1 can be set, but paramter 2 can be empty.
I spent some time digging around, but couldn't explain this.
So decided to take a professional approach: restart docker (so both containers).
This solved the problem.
Example 2: Everything seems to be working, except /api/v1/project/<uuid>:
```
curl -X 'GET' \
'https://dependencytrack-eindhoven.isaac.nl/api/v1/project/yy' \
-H 'accept: application/json' \
-H 'X-Api-Key: xx'
Uncaught internal server error
```
Exception
```
2022-11-08 00:00:02,614 [] ERROR [alpine.server.resources.GlobalExceptionHandler] Uncaught internal server error
javax.jdo.JDODataStoreException: Iteration request failed : SELECT 'alpine.model.Team''alpine.model.Team' AS "DN_TYPE","A1"."ID""A1"."ID","A1"."NAME""A1"."NAME" AS "NUCORDER0",,"A1"."UUID","A0"
."PROJECT_ID""A0"."PROJECT_ID" FROM FROM "PROJECT_ACCESS_TEAMS" "A0" INNER JOIN "TEAM" "A1" ON "A0"."TEAM_ID" = "A1"."ID" WHERE INNER JOIN "TEAM" "A1" ON "A0"."TEAM_ID" = "A1"."ID" WHERE EXIS
TS (SELECT 'org.dependencytrack.model.Project''org.dependencytrack.model.Project' AS "DN_TYPE" AS "DN_TYPE",,"A0_SUB"."ID" AS "DN_APPID" FROM FROM "PROJECT" "A0_SUB""PROJECT" "A0_SUB" WHERE "A
0_SUB"."UUID" = ? AND "A0"."PROJECT_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"EXISTS (SELECT 'org.dependencytrack.model.Project''org.dependencytrack.model.Project' AS "DN_TYPE" AS "DN_TYPE",,"A0
_SUB"."ID" AS "DN_APPID" FROM FROM "PROJECT" "A0_SUB""PROJECT" "A0_SUB" WHERE "A0_SUB"."UUID" = ? AND "A0"."PROJECT_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"
at org.datanucleus.api.jdo.JDOAdapter.getJDOExceptionForNucleusException(JDOAdapter.java:605)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:456)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:276)
at alpine.persistence.AbstractAlpineQueryManager.getObjectByUuid(AbstractAlpineQueryManager.java:549)
at alpine.persistence.AbstractAlpineQueryManager.getObjectByUuid(AbstractAlpineQueryManager.java:563)
at org.dependencytrack.resources.v1.ProjectResource.getProject(ProjectResource.java:113)
...
Caused by: org.postgresql.util.PSQLException: No value specified for parameter 2.
at org.postgresql.core.v3.SimpleParameterList.checkAllParametersSet(SimpleParameterList.java:284)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:340)
```
Again this doesn't make sense as there is only 1 paramater uuid, and again parameter 2 is considered empty?
Restarted docker, problem solved.
Example 3: Everything seems to be working, except /api/v1/projects/lookup?name=valentijn&version=master:
```
curl -X 'GET' \
'https://deptrack/api/v1/project/lookup?name=some%2Fmiddleware+%28java%29&version=master' \
-H 'accept: application/json' \
-H 'X-Api-Key: xx'
Uncaught internal server error
```
Exception
```
2022-11-10 09:19:35,979 [] ERROR [alpine.server.resources.GlobalExceptionHandler] Uncaught internal server error
javax.jdo.JDODataStoreException: Iteration request failed : SELECT 'org.dependencytrack.model.Tag' AS "DN_TYPE","A1"."ID","A1"."NAME" AS "NUCORDER0","A0"."PROJECT_ID" FROM "PROJECTS_TAGS" "
A0" INNER JOIN "TAG" "A1" ON "A0"."TAG_ID" = "A1"."ID" WHERE EXISTS (SELECT 'org.dependencytrack.model.Project' AS "DN_TYPE","A0_SUB"."ID" AS "DN_APPID" FROM "PROJECT" "A0_SUB" WHERE "A0_SU
B"."NAME" = ? AND "A0_SUB"."VERSION" = ? AND "A0"."PROJECT_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"'org.dependencytrack.model.Tag' AS "DN_TYPE","A1"."ID","A1"."NAME" AS "NUCORDER0","A0"."P
ROJECT_ID","A1"."NAME" AS "NUCORDER0" FROM "PROJECTS_TAGS" "A0" INNER JOIN "TAG" "A1" ON "A0"."TAG_ID" = "A1"."ID" WHERE EXISTS (SELECT 'org.dependencytrack.model.Project' AS "DN_TYPE","A0_
SUB"."ID" AS "DN_APPID" FROM "PROJECT" "A0_SUB" WHERE "A0_SUB"."NAME" = ? AND "A0_SUB"."VERSION" = ? AND "A0"."PROJECT_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"
at org.datanucleus.api.jdo.JDOAdapter.getJDOExceptionForNucleusException(JDOAdapter.java:605)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:456)
at org.datanucleus.api.jdo.JDOQuery.executeWithMap(JDOQuery.java:331)
at org.dependencytrack.persistence.ProjectQueryManager.getProject(ProjectQueryManager.java:198)
at org.dependencytrack.persistence.QueryManager.getProject(QueryManager.java:340)
at org.dependencytrack.resources.v1.ProjectResource.getProject(ProjectResource.java:145)
...
Caused by: org.postgresql.util.PSQLException: No value specified for parameter 3.
at org.postgresql.core.v3.SimpleParameterList.checkAllParametersSet(SimpleParameterList.java:284)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:340)
```
Parameter 3 is the version. Which is set and should be present in the query as it's mandatory. If I remove the version from the url, it works.
DT says project cannot be found. But I haven't checked if that is the actual query returning nothing, or an early exit because the version is mandatory.
Anyway, I already knew a restart would probably fix it, so I gathered some logs and restarted docker again.
Anyone has any ideas / suggestions?
I realize this might not be a bug in DT nor in Alpine, but maybe others are seeing the same or have some suggestions.
It is not a resourcing issue as this instance is mostly idle with only 1 nightly job to upload 100 BOMs. And a nightly job that retrieves all projects to load them into backstage.
It seems that once the instance is running OK, the problem stays away and it only occurs after a restart.
Valentijn
### Steps to Reproduce
see above
### Expected Behavior
see above
### Dependency-Track Version
4.6.2
### Dependency-Track Distribution
Container Image
### Database Server
PostgreSQL
### Database Server Version
13.7
### Browser
N/A
### Checklist
- [X] I have read and understand the [contributing guidelines](https://github.com/DependencyTrack/dependency-track/blob/master/CONTRIBUTING.md#filing-issues)
- [X] I have checked the [existing issues](https://github.com/DependencyTrack/dependency-track/issues) for whether this defect was already reported | non_process | mysterious no value specified for parameter x jdodatastoreexception psqlexception current behavior mysterious no value specified for parameter x jdodatastoreexception psqlexception i have now seen occasions of the below problem maybe someone recognizes this we re running dt in docker in ecs with postgres rds instances as datastore and a volume for the dt data directory every week updates are installed on the docker host os debian this usually triggers an restart and thus a docker restart the containers come up succesfully and users can browse dt from the ui also most api calls work fine but not all of them example everything seems to be working except api lookup name bla version xyz error uncaught internal server error javax jdo jdodatastoreexception iteration request failed select alpine model team as dn type id id name name as uuid uuid from from apikeys teams inner join team on team id id where inner join team on team id id where exists select alpine model apikey alpine model apikey as dn type as dn type sub id sub id as dn appid from apikey sub apikey sub where sub apikey and apikey id sub id sub apikey and apikey id sub id order by exists select alpine model apikey alpine model apikey as dn type as dn type sub id sub id as dn appid from apikey sub apikey sub where sub apikey and apikey id sub id sub apikey and apikey id sub id order by at org datanucleus api jdo jdoadapter getjdoexceptionfornucleusexception jdoadapter java at org datanucleus api jdo jdoquery executeinternal jdoquery java at org datanucleus api jdo jdoquery execute jdoquery java at alpine persistence alpinequerymanager getapikey alpinequerymanager java at alpine server auth apikeyauthenticationservice authenticate apikeyauthenticationservice java caused by org postgresql util psqlexception no value specified for parameter at org postgresql core simpleparameterlist checkallparametersset simpleparameterlist java at org postgresql core queryexecutorimpl execute queryexecutorimpl java at org postgresql jdbc pgstatement executeinternal pgstatement java at org postgresql jdbc pgstatement execute pgstatement java this code is in the alpine framework and about retrieving an apikey however the query contains only parameter which should be translated by datanucleus into paramaters in the sql query seen above so it s a mystery how parameter can be set but paramter can be empty i spent some time digging around but couldn t explain this so decided to take a professional approach restart docker so both containers this solved the problem example everything seems to be working except api project curl x get h accept application json h x api key xx uncaught internal server error exception error uncaught internal server error javax jdo jdodatastoreexception iteration request failed select alpine model team alpine model team as dn type id id name name as uuid project id project id from from project access teams inner join team on team id id where inner join team on team id id where exis ts select org dependencytrack model project org dependencytrack model project as dn type as dn type sub id as dn appid from from project sub project sub where a sub uuid and project id sub id order by exists select org dependencytrack model project org dependencytrack model project as dn type as dn type sub id as dn appid from from project sub project sub where sub uuid and project id sub id order by at org datanucleus api jdo jdoadapter getjdoexceptionfornucleusexception jdoadapter java at org datanucleus api jdo jdoquery executeinternal jdoquery java at org datanucleus api jdo jdoquery execute jdoquery java at alpine persistence abstractalpinequerymanager getobjectbyuuid abstractalpinequerymanager java at alpine persistence abstractalpinequerymanager getobjectbyuuid abstractalpinequerymanager java at org dependencytrack resources projectresource getproject projectresource java caused by org postgresql util psqlexception no value specified for parameter at org postgresql core simpleparameterlist checkallparametersset simpleparameterlist java at org postgresql core queryexecutorimpl execute queryexecutorimpl java again this doesn t make sense as there is only paramater uuid and again parameter is considered empty restarted docker problem solved example everything seems to be working except api projects lookup name valentijn version master curl x get h accept application json h x api key xx uncaught internal server error exception error uncaught internal server error javax jdo jdodatastoreexception iteration request failed select org dependencytrack model tag as dn type id name as project id from projects tags inner join tag on tag id id where exists select org dependencytrack model project as dn type sub id as dn appid from project sub where su b name and sub version and project id sub id order by org dependencytrack model tag as dn type id name as p roject id name as from projects tags inner join tag on tag id id where exists select org dependencytrack model project as dn type sub id as dn appid from project sub where sub name and sub version and project id sub id order by at org datanucleus api jdo jdoadapter getjdoexceptionfornucleusexception jdoadapter java at org datanucleus api jdo jdoquery executeinternal jdoquery java at org datanucleus api jdo jdoquery executewithmap jdoquery java at org dependencytrack persistence projectquerymanager getproject projectquerymanager java at org dependencytrack persistence querymanager getproject querymanager java at org dependencytrack resources projectresource getproject projectresource java caused by org postgresql util psqlexception no value specified for parameter at org postgresql core simpleparameterlist checkallparametersset simpleparameterlist java at org postgresql core queryexecutorimpl execute queryexecutorimpl java parameter is the version which is set and should be present in the query as it s mandatory if i remove the version from the url it works dt says project cannot be found but i haven t checked if that is the actual query returning nothing or an early exit because the version is mandatory anyway i already knew a restart would probably fix it so i gathered some logs and restarted docker again anyone has any ideas suggestions i realize this might not be a bug in dt nor in alpine but maybe others are seeing the same or have some suggestions it is not a resourcing issue as this instance is mostly idle with only nightly job to upload boms and a nightly job that retrieves all projects to load them into backstage it seems that once the instance is running ok the problem stays away and it only occurs after a restart valentijn steps to reproduce see above expected behavior see above dependency track version dependency track distribution container image database server postgresql database server version browser n a checklist i have read and understand the i have checked the for whether this defect was already reported | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.