| Unnamed: 0 (int64, 0-832k) | id (float64, 2.49B-32.1B) | type (1 class) | created_at (string, len 19) | repo (string, len 5-112) | repo_url (string, len 34-141) | action (3 classes) | title (string, len 1-1k) | labels (string, len 4-1.38k) | body (string, len 1-262k) | index (16 classes) | text_combine (string, len 96-262k) | label (2 classes) | text (string, len 96-252k) | binary_label (int64, 0/1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
447,290 | 12,887,574,345 | IssuesEvent | 2020-07-13 11:29:40 | crestic-urca/remotelabz | https://api.github.com/repos/crestic-urca/remotelabz | closed | Start VM does not work on a stopped machine | normal priority | In GitLab by @fnolot on Feb 19, 2011, 19:00
If a user stops a VM with a halt, or shuts it down under Windows,
right-clicking Start does nothing. Reboot, on the other hand, does restart
the VM.
Start should do the same.
*(from redmine: issue id 13, created on 2011-02-19)* | 1.0 | Start VM does not work on a stopped machine - In GitLab by @fnolot on Feb 19, 2011, 19:00
If a user stops a VM with a halt, or shuts it down under Windows,
right-clicking Start does nothing. Reboot, on the other hand, does restart
the VM.
Start should do the same.
*(from redmine: issue id 13, created on 2011-02-19)* | priority | start vm does not work on stopped machine in gitlab by fnolot on feb if a user stops a vm with a halt or shuts it down under windows right clicking start does nothing reboot on the other hand does restart the vm start should do the same from redmine issue id created on | 1 |
89,894 | 25,920,670,477 | IssuesEvent | 2022-12-15 21:37:05 | brtnfld/cgnsjira | https://api.github.com/repos/brtnfld/cgnsjira | opened | [CGNS-254] CGNS has several problems building and running on Mac M1. Bugs identified in this report. | bug To Do Build Critical |
> This issue has been migrated from the Forge. Read the [original ticket here](https://cgnsorg.atlassian.net/browse/CGNS-254).
- _**Reporter:**_ None
- _**Created at:**_ Sun, 27 Jun 2021 08:42:08 -0500
<p>I installed `cgns` on a Mac M1 Mini using `homebrew`. The library had errors, including a difficult-to-understand error in the subroutine `cg_conn_write_f` and a "missing" subroutine in the cgns library (`cgns_f`). I installed `cgns`<br/>
from the GitHub repository and built the library on my Mac, with similar issues. Additionally, when I built `cgns` and tested it using `make test`, some of the tests failed. I found the following errors in the distribution:</p>
<p>1. In the header file `fortran_macros.h`, `STR_PLEN` needs to be redefined for the Mac M1 Mini. Fortran appends hidden default-integer arguments, giving the lengths of character arguments, to the passed arguments of a subroutine. In the subroutine `cg_conn_write_f`, there are two character strings. On the Mac Mini, these hidden integer variables are `long`, not `int`. It appears from some debug statements in `cg_conn_write_f` that this is a problem that has been previously worked on but not solved.</p>
<p>2. The subroutine `cg_conn_write_f` was "missing" from the cgns library. This subroutine is a C subroutine, not a Fortran subroutine, but the interface in `cgns_f.F90` had the `BIND` statement commented out. I uncommented it, and this fixed the problem. </p>
<p>3. Not all the `utilities` built correctly. I added `binaryio.o`, which is needed by `p3dfout.o`, to `utilities/Makefile.unix`, which corrected the problem. All `make test` passed.</p>
<p>Environment variables<br/>
=====================</p>
<p>This section shows environment variables used to build `cgns` on Mac M1 Mini.</p>
<p>~~~<br/>
FC=gfortran<br/>
FCFLAGS=-I/opt/homebrew/opt/libx11/include -fallow-argument-mismatch -std=gnu<br/>
LIBS=-L/opt/homebrew/Cellar/tcl-tk/8.6.11_1/lib/ -L/opt/homebrew/opt/libx11/lib<br/>
CPPFLAGS=-I/opt/homebrew/include/<br/>
CFLAGS=-I/opt/homebrew/include/<br/>
~~~</p>
<p>Config flags<br/>
============</p>
<p>This section shows config flags used to build `cgns` on Mac M1 Mini.</p>
<p>~~~<br/>
./configure --enable-gcc --with-fortran=LOWERCASE_ --enable-cgnstools --enable-64bit<br/>
~~~</p>
<p>Edits to code<br/>
=============</p>
<p>This section shows edits used to fix build.</p>
<p>src/cgnstools/utilities/Makefile.unix<br/>
---------------------------------</p>
<p>~~~<br/>
src % diff cgnstools/utilities/Makefile.unix ../../CGNS-master/src/cgnstools/utilities/Makefile.unix <br/>
82c82<br/>
< p3dfout.$(O) binaryio.$(O)<br/>
---<br/>
> p3dfout.$(O)<br/>
84c84<br/>
< getargs.$(O) p3dfout.$(O) binaryio.$(O) $(LDLIST)<br/>
---<br/>
> getargs.$(O) p3dfout.$(O) $(LDLIST)<br/>
~~~</p>
<p>src/fortran_macros.h<br/>
--------------------</p>
<p>~~~<br/>
src % diff fortran_macros.h ../../CGNS-master/src/fortran_macros.h <br/>
141c141<br/>
< # define STR_PLEN(str) , long CONCATENATE(Len,str)<br/>
---<br/>
> # define STR_PLEN(str) , int CONCATENATE(Len,str)<br/>
~~~</p>
<p>src/cgns_f.F90<br/>
-------------</p>
<p>~~~<br/>
src % diff cgns_f.F90 ../../CGNS-master/src/cgns_f.F90<br/>
3021c3021<br/>
< SUBROUTINE cg_state_size_f(size, ier) BIND(C, NAME="cg_state_size_f")<br/>
---<br/>
> SUBROUTINE cg_state_size_f(size, ier) !BIND(C, NAME="cg_state_size_f")<br/>
~~~</p>
| 1.0 | [CGNS-254] CGNS has several problems building and running on Mac M1. Bugs identified in this report. -
> This issue has been migrated from the Forge. Read the [original ticket here](https://cgnsorg.atlassian.net/browse/CGNS-254).
- _**Reporter:**_ None
- _**Created at:**_ Sun, 27 Jun 2021 08:42:08 -0500
<p>I installed `cgns` on a Mac M1 Mini using `homebrew`. The library had errors, including a difficult-to-understand error in the subroutine `cg_conn_write_f` and a "missing" subroutine in the cgns library (`cgns_f`). I installed `cgns`<br/>
from the GitHub repository and built the library on my Mac, with similar issues. Additionally, when I built `cgns` and tested it using `make test`, some of the tests failed. I found the following errors in the distribution:</p>
<p>1. In the header file `fortran_macros.h`, `STR_PLEN` needs to be redefined for the Mac M1 Mini. Fortran appends hidden default-integer arguments, giving the lengths of character arguments, to the passed arguments of a subroutine. In the subroutine `cg_conn_write_f`, there are two character strings. On the Mac Mini, these hidden integer variables are `long`, not `int`. It appears from some debug statements in `cg_conn_write_f` that this is a problem that has been previously worked on but not solved.</p>
<p>2. The subroutine `cg_conn_write_f` was "missing" from the cgns library. This subroutine is a C subroutine, not a Fortran subroutine, but the interface in `cgns_f.F90` had the `BIND` statement commented out. I uncommented it, and this fixed the problem. </p>
<p>3. Not all the `utilities` built correctly. I added `binaryio.o`, which is needed by `p3dfout.o`, to `utilities/Makefile.unix`, which corrected the problem. All `make test` passed.</p>
<p>Environment variables<br/>
=====================</p>
<p>This section shows environment variables used to build `cgns` on Mac M1 Mini.</p>
<p>~~~<br/>
FC=gfortran<br/>
FCFLAGS=-I/opt/homebrew/opt/libx11/include -fallow-argument-mismatch -std=gnu<br/>
LIBS=-L/opt/homebrew/Cellar/tcl-tk/8.6.11_1/lib/ -L/opt/homebrew/opt/libx11/lib<br/>
CPPFLAGS=-I/opt/homebrew/include/<br/>
CFLAGS=-I/opt/homebrew/include/<br/>
~~~</p>
<p>Config flags<br/>
============</p>
<p>This section shows config flags used to build `cgns` on Mac M1 Mini.</p>
<p>~~~<br/>
./configure --enable-gcc --with-fortran=LOWERCASE_ --enable-cgnstools --enable-64bit<br/>
~~~</p>
<p>Edits to code<br/>
=============</p>
<p>This section shows edits used to fix build.</p>
<p>src/cgnstools/utilities/Makefile.unix<br/>
---------------------------------</p>
<p>~~~<br/>
src % diff cgnstools/utilities/Makefile.unix ../../CGNS-master/src/cgnstools/utilities/Makefile.unix <br/>
82c82<br/>
< p3dfout.$(O) binaryio.$(O)<br/>
---<br/>
> p3dfout.$(O)<br/>
84c84<br/>
< getargs.$(O) p3dfout.$(O) binaryio.$(O) $(LDLIST)<br/>
---<br/>
> getargs.$(O) p3dfout.$(O) $(LDLIST)<br/>
~~~</p>
<p>src/fortran_macros.h<br/>
--------------------</p>
<p>~~~<br/>
src % diff fortran_macros.h ../../CGNS-master/src/fortran_macros.h <br/>
141c141<br/>
< # define STR_PLEN(str) , long CONCATENATE(Len,str)<br/>
---<br/>
> # define STR_PLEN(str) , int CONCATENATE(Len,str)<br/>
~~~</p>
<p>src/cgns_f.F90<br/>
-------------</p>
<p>~~~<br/>
src % diff cgns_f.F90 ../../CGNS-master/src/cgns_f.F90<br/>
3021c3021<br/>
< SUBROUTINE cg_state_size_f(size, ier) BIND(C, NAME="cg_state_size_f")<br/>
---<br/>
> SUBROUTINE cg_state_size_f(size, ier) !BIND(C, NAME="cg_state_size_f")<br/>
~~~</p>
| non_priority | cgns has several problems building and running on mac bugs identified in this report this issue has been migrated from the forge read the reporter none created at sun jun i installed cgns on a mac mini using homebrew the library had errors including a difficult to understand error in the subroutine cg conn write f and a missing subroutine in the cgns library cgns f i installed cgns from the github repository and made the library on my mac and had similar issues additionally whene i built cgns and tested it using make test some of the tests failed i found the following errors in the distribution in the header file fortran macros h str plen needs to be redefined for the mac mini fortran appends to subroutine passed arguments hidden default integer giveing the length of character arguments in the subroutine cg conn write f there are two character strings on the mac mini these hidden integer variables are long not int it appears from some debug statements in cg conn write f that this is a problem that has been previously worked on but not solved the subroutine cg conn write f was missing from the cgns library this subroutine is a c subroutine not a fortran subroutine but the interface in cgns f had the bind statement commented out i uncommented and this fixed the problem not all the utilities built correctly i added binaryio o which is needed by o to utilities makefile unix which corrected the problem all make test passed environment variables this section shows environment variables used to build cgns on mac mini fc gfortran fcflags i opt homebrew opt include fallow argument mismatch std gnu libs l opt homebrew cellar tcl tk lib l opt homebrew opt lib cppflags i opt homebrew include cflags i opt homebrew include config flags this section shows config flags used to build cgns on mac mini configure enable gcc with fortran lowercase enable cgnstools enable edits to code this section shows edits used to fix build src cgnstools utilities makefile unix src 
diff cgnstools utilities makefile unix cgns master src cgnstools utilities makefile unix o getargs o o ldlist src fortran macros h src diff fortran macros h cgns master src fortran macros h define str plen str int concatenate len str src cgns f src diff cgns f cgns master src cgns f subroutine cg state size f size ier bind c name cg state size f | 0 |
118,167 | 25,265,390,485 | IssuesEvent | 2022-11-16 03:54:57 | Azure/autorest.python | https://api.github.com/repos/Azure/autorest.python | closed | LRO basics from CADL | DPG DPG/RLC v2.0b2 Epic: Parity with DPG 1.0 WS: Code Generation | The first level of support for LRO in CADL is to consider the presence of `@Azure.Core.pollingOperation` as a boolean switch to generate LRO code, like `x-ms-long-running-operation: true` did in Swagger:
```
@Azure.Core.pollingOperation(Authoring.createProjectStatusMonitor)
createProject is Azure.Core.LongRunningResourceCreateWithServiceProvidedName<Project>;
```
([full example](https://cadlplayground.z22.web.core.windows.net/cadl-azure/?c=aW1wb3J0ICJAY2FkbC1sYW5nL3ZlcnNpb25pbmciOwrJIGF6dXJlLXRvb2xzL8UsxhFjb3JlIjsKCkBzZXJ2aWNlVGl0bGUoIlRleHQgYXV0aG9yxEgpCi8vQFbJWC7HY2VkKFPGN8cccykKbmFtZXNwYWNlIMREQchDOwoKZW51bSDPMCB7CiAgdjE6ICIyMDIxLTDEAyIsxBQyxhQyLTA5LTEyIiwKfQoKQENhZGwuUmVzdC5yZXNvdXJjZSgicHJvamVjdHMiKQptb2RlbCBQxhHFW0BrZXnEB3Zpc2liaWxpdHkoInJlYWQiKQogIGlkOiBzdOYAniAgZGVzY3JpcHRpb24%2FyhgKICDkANLLEn0KCmludGVyZuQA5OkA4MVuY3JlYXRl5wCAU3RhdHVzTW9uaXRvciBpcyBDdXN0b21Db3JlLlBvbGxpbmdPcGVyYcRxPMc0PsVxQEHkAZsuxSpwzyooyXIu2m7kAN3NHsR%2Fy1VMb25nUnVu5AHLUucBSEPFL1dpdGjnAaJQcm92aWRlZE5hbWXuAJ9kZWxl2FrIT0TFJMs4ICBsaXN0xxBz1zZMaXN0zTRkZXBsb3nea0Fj5QFCCiAgIOgB%2B%2BQCQSDlAZLEAXNsb3TkAMbsAe0gIH3GJccyCiAgxXFnZegApNduUmVhZOsAo30K6wLx6gHhxXzrAq5wYXJlbnTIRChU5AGh9QLNb%2BgB6OQCzyAg5gLR6QH95gHlyEc8VD7vAKVGb3VuZMY%2FLs815wEO5wMQICDJaUntAwV95QKA5QCaSHR0cC5yb3V08gCXb3Ag8QLCVOYA%2FvAAk%2B0BOPkAycc85QFR)) | 1.0 | LRO basics from CADL - First level of support for LRO in CADL, is to consider the presence of `@Azure.Core.pollingOperation` as a boolean switch to generate LRO code, like `x-ms-long-running-operation: true` was doing in Swagger:
```
@Azure.Core.pollingOperation(Authoring.createProjectStatusMonitor)
createProject is Azure.Core.LongRunningResourceCreateWithServiceProvidedName<Project>;
```
([full example](https://cadlplayground.z22.web.core.windows.net/cadl-azure/?c=aW1wb3J0ICJAY2FkbC1sYW5nL3ZlcnNpb25pbmciOwrJIGF6dXJlLXRvb2xzL8UsxhFjb3JlIjsKCkBzZXJ2aWNlVGl0bGUoIlRleHQgYXV0aG9yxEgpCi8vQFbJWC7HY2VkKFPGN8cccykKbmFtZXNwYWNlIMREQchDOwoKZW51bSDPMCB7CiAgdjE6ICIyMDIxLTDEAyIsxBQyxhQyLTA5LTEyIiwKfQoKQENhZGwuUmVzdC5yZXNvdXJjZSgicHJvamVjdHMiKQptb2RlbCBQxhHFW0BrZXnEB3Zpc2liaWxpdHkoInJlYWQiKQogIGlkOiBzdOYAniAgZGVzY3JpcHRpb24%2FyhgKICDkANLLEn0KCmludGVyZuQA5OkA4MVuY3JlYXRl5wCAU3RhdHVzTW9uaXRvciBpcyBDdXN0b21Db3JlLlBvbGxpbmdPcGVyYcRxPMc0PsVxQEHkAZsuxSpwzyooyXIu2m7kAN3NHsR%2Fy1VMb25nUnVu5AHLUucBSEPFL1dpdGjnAaJQcm92aWRlZE5hbWXuAJ9kZWxl2FrIT0TFJMs4ICBsaXN0xxBz1zZMaXN0zTRkZXBsb3nea0Fj5QFCCiAgIOgB%2B%2BQCQSDlAZLEAXNsb3TkAMbsAe0gIH3GJccyCiAgxXFnZegApNduUmVhZOsAo30K6wLx6gHhxXzrAq5wYXJlbnTIRChU5AGh9QLNb%2BgB6OQCzyAg5gLR6QH95gHlyEc8VD7vAKVGb3VuZMY%2FLs815wEO5wMQICDJaUntAwV95QKA5QCaSHR0cC5yb3V08gCXb3Ag8QLCVOYA%2FvAAk%2B0BOPkAycc85QFR)) | non_priority | lro basics from cadl first level of support for lro in cadl is to consider the presence of azure core pollingoperation as a boolean switch to generate lro code like x ms long running operation true was doing in swagger azure core pollingoperation authoring createprojectstatusmonitor createproject is azure core longrunningresourcecreatewithserviceprovidedname | 0 |
100,618 | 11,200,485,496 | IssuesEvent | 2020-01-03 21:56:31 | tomapper/tomapper | https://api.github.com/repos/tomapper/tomapper | opened | Come up with a library concept | documentation help wanted | ### Languages
- Java 1.7+
### Description
The library makes it easy to convert one class to another using annotations or stream APIs.
### TODO
- Create an algorithm of work
- Create prototype | 1.0 | Come up with a library concept - ### Languages
- Java 1.7+
### Description
The library makes it easy to convert one class to another using annotations or stream APIs.
### TODO
- Create an algorithm of work
- Create prototype | non_priority | come up with a library concept languages java description the library makes it easy to convert one class to another using annotations or stream apis todo create an algorithm of work create prototype | 0 |
253,550 | 8,057,567,141 | IssuesEvent | 2018-08-02 15:43:56 | SUSE/DeepSea | https://api.github.com/repos/SUSE/DeepSea | closed | Exercise stage.5 ( removal ) in functional/system tests | QA enhancement priority | ### Description of Issue/Question
After hitting a bug in stage.5 it's time to create a test to exercise the code in stage.5
Potential issues:
- Teuthology only deploys _one_ node which can't be removed without bringing down the cluster.
| 1.0 | Exercise stage.5 ( removal ) in functional/system tests - ### Description of Issue/Question
After hitting a bug in stage.5 it's time to create a test to exercise the code in stage.5
Potential issues:
- Teuthology only deploys _one_ node which can't be removed without bringing down the cluster.
| priority | exercise stage removal in functional system tests description of issue question after hitting a bug in stage it s time to create a test to exercise the code in stage potential issues teuthology only deploys one node which can t be removed without bringing down the cluster | 1 |
202,703 | 7,051,503,881 | IssuesEvent | 2018-01-03 12:05:45 | chingu-voyage3/bears-31-api | https://api.github.com/repos/chingu-voyage3/bears-31-api | closed | Write login endpoint logic | api feature high priority | Implement the functionality to find a user with the username passed in the request body and comparing the hash of the password with the one stored.
If the credentials are correct, generate a new JWT token with a payload containing *only* the username, then return the token.
If the credentials were incorrect, return an error message specifying it. | 1.0 | Write login endpoint logic - Implement the functionality to find a user with the username passed in the request body and compare the hash of the password with the one stored.
If the credentials are correct, generate a new JWT token with a payload containing *only* the username, then return the token.
If the credentials were incorrect, return an error message specifying it. | priority | write login endpoint logic implement the functionality to find a user with the username passed in the request body and comparing the hash of the password with the one stored if the credentials are correct generate a new jwt token with a payload containing only the username then return the token if the credentials were incorrect return an error message specifying it | 1 |
813,892 | 30,478,284,729 | IssuesEvent | 2023-07-17 18:13:14 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | opened | [DDL][Post-Upgrade] Not found: Unable to find schema name for YSQL table system_postgres.sequences_data due to error | kind/bug area/ysql priority/low status/awaiting-triage jira-originated | Jira Link: [DB-7269](https://yugabyte.atlassian.net/browse/DB-7269)
| 1.0 | [DDL][Post-Upgrade] Not found: Unable to find schema name for YSQL table system_postgres.sequences_data due to error - Jira Link: [DB-7269](https://yugabyte.atlassian.net/browse/DB-7269)
| priority | not found unable to find schema name for ysql table system postgres sequences data due to error jira link | 1 |
722,914 | 24,878,368,325 | IssuesEvent | 2022-10-27 21:23:58 | WowRarity/Rarity | https://api.github.com/repos/WowRarity/Rarity | closed | Holiday reminders aren't displayed after logging in, until the main window is opened (?) | Priority: Low Status: Duplicate Type: Bug Complexity: TBD Category: GUI | Source: [Discord](https://discord.com/channels/788119147740790854/788119314909626388/1034115793572614154)
> Mine holiday reminder only shows me when i mouse over the mini rarity icon, then the text shows up in the middle of the screen, it dosent show when i log in on new character only when i mouse over it. How do i fix it?
| 1.0 | Holiday reminders aren't displayed after logging in, until the main window is opened (?) - Source: [Discord](https://discord.com/channels/788119147740790854/788119314909626388/1034115793572614154)
> Mine holiday reminder only shows me when i mouse over the mini rarity icon, then the text shows up in the middle of the screen, it dosent show when i log in on new character only when i mouse over it. How do i fix it?
| priority | holiday reminders aren t displayed after logging in until the main window is opened source mine holiday reminder only shows me when i mouse over the mini rarity icon then the text shows up in the middle of the screen it dosent show when i log in on new character only when i mouse over it how do i fix it | 1 |
275,277 | 8,575,543,323 | IssuesEvent | 2018-11-12 17:34:11 | aowen87/TicketTester | https://api.github.com/repos/aowen87/TicketTester | closed | Curve2D reader cannot read double-precision values | Bug Likelihood: 3 - Occasional Priority: Normal Severity: 2 - Minor Irritation | I created an ultra file with values > FLT_MAX.
The curve reader could not read the file.
I've attached the file.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1819
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: Curve2D reader cannot read double-precision values
Assigned to: Kathleen Biagas
Category:
Target version: 2.8
Author: Kathleen Biagas
Start: 04/22/2014
Due date:
% Done: 0
Estimated time:
Created: 04/22/2014 01:14 pm
Updated: 07/16/2014 02:34 pm
Likelihood: 3 - Occasional
Severity: 2 - Minor Irritation
Found in version: 2.7.1
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
I created an ultra file with values > FLT_MAX.
The curve reader could not read the file.
I've attached the file.
Comments:
Made an attempt to fix the reader, and it can be made to use doubles and pass them along. However, the curve will not be rendered. I double-checked avtOpenGLCurveRenderer, and the values being passed to glVertex3dv are correct. Perhaps we should scale the values down before rendering?
Modified the reader to serve up doubles. Very large values or very small values won't be rendered correctly without scaling. I added data and a test demonstrating this. Svn revision 23772-4.
| 1.0 | Curve2D reader cannot read double-precision values - I created an ultra file with values > FLT_MAX.
The curve reader could not read the file.
I've attached the file.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1819
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: Curve2D reader cannot read double-precision values
Assigned to: Kathleen Biagas
Category:
Target version: 2.8
Author: Kathleen Biagas
Start: 04/22/2014
Due date:
% Done: 0
Estimated time:
Created: 04/22/2014 01:14 pm
Updated: 07/16/2014 02:34 pm
Likelihood: 3 - Occasional
Severity: 2 - Minor Irritation
Found in version: 2.7.1
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
I created an ultra file with values > FLT_MAX.
The curve reader could not read the file.
I've attached the file.
Comments:
Made an attempt to fix the reader, and it can be made to use doubles and pass them along. However, the curve will not be rendered. I double-checked avtOpenGLCurveRenderer, and the values being passed to glVertex3dv are correct. Perhaps we should scale the values down before rendering?
Modified the reader to serve up doubles. Very large values or very small values won't be rendered correctly without scaling. I added data and a test demonstrating this. Svn revision 23772-4.
| priority | reader cannot read double precision values i created an ultra file with values flt max the curve reader could not read the file i ve attached the file redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority normal subject reader cannot read double precision values assigned to kathleen biagas category target version author kathleen biagas start due date done estimated time created pm updated pm likelihood occasional severity minor irritation found in version impact expected use os all support group any description i created an ultra file with values flt max the curve reader could not read the file i ve attached the file comments made an attempt to fix the reader and it can be made to use doubles and pass them along however the curve will not be rendered i double checked avtopenglcurverenderer and the values being passed to are correct perhaps we should scale the values down before rendering modified the reader to server up double very large values or very small values won t be rendered correctly without scaling i added data and a test demonstrating this svn revision | 1 |
263,874 | 8,302,828,444 | IssuesEvent | 2018-09-21 15:35:35 | Vhoyon/Discord-Bot | https://api.github.com/repos/Vhoyon/Discord-Bot | opened | Host a permanent version of our bot to have an online alias of our work | !Priority: Medium | Here are our options (more may be edited in) :
- [Heroku](https://www.heroku.com) - _possibility of using a [GitHub Student Pack account](https://education.github.com/pack) to get access to 2 years worth of non-sleeping dynos ([Hobby Dyno](https://devcenter.heroku.com/articles/dyno-types))_;
- Self-Hosted on a Raspberry-Pi;
- Buying some third-party server to run it so we don't have to deal with setting up the Raspberry-Pi with ssh and outside secure access (SSH, static IP address, etc). | 1.0 | Host a permanent version of our bot to have an online alias of our work - Here are our options (more may be edited in) :
- [Heroku](https://www.heroku.com) - _possibility of using a [GitHub Student Pack account](https://education.github.com/pack) to get access to 2 years worth of non-sleeping dynos ([Hobby Dyno](https://devcenter.heroku.com/articles/dyno-types))_;
- Self-Hosted on a Raspberry-Pi;
- Buying some third-party server to run it so we don't have to deal with setting up the Raspberry-Pi with ssh and outside secure access (SSH, static IP address, etc). | priority | host a permanent version of our bot to have an online alias of our work here are our options more may be edited in possibility of using a to get access to years worth of non sleeping dynos self hosted on a raspberry pi buying some third party server to run it so we don t have to deal with setting up the raspberry pi with ssh and outside secure access ssh static ip address etc | 1 |
103,859 | 22,490,830,917 | IssuesEvent | 2022-06-23 01:21:25 | gitpod-io/gitpod | https://api.github.com/repos/gitpod-io/gitpod | closed | [insiders] emit console.log on workspace boot if person is using insiders | meta: stale editor: code (browser) team: IDE aspect: error-handling | ## Is your feature request related to a problem? Please describe
If a person is using insiders, emit a 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨 informing them that they are running insiders:
- if someone is looking at the console log, it is likely because Gitpod isn't working for them
- if someone is using insiders, the console.log should nudge them to switch to stable
- inform the person that if the problem persists, they should raise a support case
Related to https://github.com/gitpod-io/gitpod/pull/6221
## Additional context
https://www.gitpodstatus.com/incidents/7b5z20kf0c3j | 1.0 | [insiders] emit console.log on workspace boot if person is using insiders - ## Is your feature request related to a problem? Please describe
If a person is using insiders, emit a 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨 informing them that they are running insiders:
- if someone is looking at the console log, it is likely because Gitpod isn't working for them
- if someone is using insiders, the console.log should nudge them to switch to stable
- inform the person that if the problem persists, they should raise a support case
Related to https://github.com/gitpod-io/gitpod/pull/6221
## Additional context
https://www.gitpodstatus.com/incidents/7b5z20kf0c3j | non_priority | emit console log on workspace boot if person is using insiders is your feature request related to a problem please describe if a person is using insiders emit a 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨 informing them that they are running insiders if someone is looking at console log it is likely because gitpod isn t working for them if someone is using insiders then the console log should nudge people to switch to stable inform person if problem persists please raise a support case related to additional context | 0 |
83,847 | 3,643,871,346 | IssuesEvent | 2016-02-15 06:13:11 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | In-container client config doesn't work for many common use cases because no namespace available | area/usability priority/P1 team/control-plane team/CSI | This is kind of last minute, but I'd like to consider this for 1.2 because we've had a recent rash of users who want to script this and are effectively blocked.
Today, by default service accounts inject one of their secrets into `/var/run/secrets/kubernetes.io/default`. That secret enables API access. We have support in the client config to read that value and provide a default client connection using that service account, which means `kubectl` inside of the container can effectively perform operations as the service account and is easily scriptable. The one gap today is that `kubectl` does not have access to a default namespace. Since the point of this injection is to enable easy use of the API by a pod, and most of our APIs that a pod would mutate are namespace scoped, and the service account is scoped to a namespace (although it may have other privileges), the fact that the secret comes with no identifying information as to the namespace means that in order to do "easy" integration with the API, the caller also has to inject the pod namespace via the downward API.
I would like to add a `namespace` key to each service account token secret, and change in-cluster-config to check for that file and use its value as a default namespace for kubectl and other kubectl like tools. The secret is a "default injected convention" and the convention is meant to enable easy API integration. Auto-injecting a downward API volume / env is more complex.
This enables scenarios like:
* Create a pod that has the command `/bin/sh -c "kubectl rolling-upgrade $SOURCE $DESTINATION && curl -X POST https://myserver/deployment/successful`
* Fetch endpoints for the current service from any pod that has kubectl `kubectl get endpoints db -o jsonpath={.endpoints.subset.addresses}`
| 1.0 | In-container client config doesn't work for many common use cases because no namespace available - This is kind of last minute, but I'd like to consider this for 1.2 because we've had a recent rash of users who want to script this and are effectively blocked.
Today, by default service accounts inject one of their secrets into `/var/run/secrets/kubernetes.io/default`. That secret enables API access. We have support in the client config to read that value and provide a default client connection using that service account, which means `kubectl` inside of the container can effectively perform operations as the service account and is easily scriptable. The one gap today is that `kubectl` does not have access to a default namespace. Since the point of this injection is to enable easy use of the API by a pod, and most of our APIs that a pod would mutate are namespace scoped, and the service account is scoped to a namespace (although it may have other privileges), the fact that the secret comes with no identifying information as to the namespace means that in order to do "easy" integration with the API, the caller also has to inject the pod namespace via the downward API.
I would like to add a `namespace` key to each service account token secret, and change in-cluster-config to check for that file and use its value as a default namespace for kubectl and other kubectl like tools. The secret is a "default injected convention" and the convention is meant to enable easy API integration. Auto-injecting a downward API volume / env is more complex.
This enables scenarios like:
* Create a pod that has the command `/bin/sh -c "kubectl rolling-upgrade $SOURCE $DESTINATION && curl -X POST https://myserver/deployment/successful`
* Fetch endpoints for the current service from any pod that has kubectl `kubectl get endpoints db -o jsonpath={.endpoints.subset.addresses}`
| priority | in container client config doesn t work for many common use cases because no namespace available this is kind of last minute but i d like to consider this for because we ve had a recent rash of users who want to script this and are effectively blocked today by default service accounts inject one of their secrets into var run secrets kubernetes io default that secret enables api access we have support in the client config to read that value and provide a default client connection using that service account which means kubectl inside of the container can effectively perform operations as the service account and is easily scriptable the one gap today is that kubectl does not have access to a default namespace since the point of this injection is to enable easy use of the api by a pod and most of our apis that a pod would mutate are namespace scoped and the service account is scoped to a namespace although it may have other privileges the fact that the secret comes with no identifying information as to the namespace means that in order to do easy integration with the api the caller also has to inject the pod namespace via the downward api i would like to add a namespace key to each service account token secret and change in cluster config to check for that file and use its value as a default namespace for kubectl and other kubectl like tools the secret is a default injected convention and the convention is meant to enable easy api integration auto injecting a downward api volume env is more complex this enables scenarios like create a pod that has the command bin sh c kubectl rolling upgrade source destination curl x post fetch endpoints for the current service from any pod that has kubectl kubectl get endpoints db o jsonpath endpoints subset addresses | 1 |
356,723 | 10,596,827,170 | IssuesEvent | 2019-10-09 22:19:14 | InfiniteFlightAirportEditing/Airports | https://api.github.com/repos/InfiniteFlightAirportEditing/Airports | closed | VAID-Devi Ahilya Bai Holkar International/Indore Airport-MADHYA PRADESH-INDIA | Being Redone Low Priority | # Airport Name
Devi Ahilya Bai Holkar International Airport
# Country?
India
# Improvements that need to be made?
from scratch
# Are you working on this airport?
yes
# Airport Priority? (IF Event, 10000ft+ Runway, World/US Capital, Low)
Low priority
| 1.0 | VAID-Devi Ahilya Bai Holkar International/Indore Airport-MADHYA PRADESH-INDIA - # Airport Name
Devi Ahilya Bai Holkar International Airport
# Country?
India
# Improvements that need to be made?
from scratch
# Are you working on this airport?
yes
# Airport Priority? (IF Event, 10000ft+ Runway, World/US Capital, Low)
Low priority
| priority | vaid devi ahilya bai holkar international indore airport madhya pradesh india airport name devi ahilya bai holkar international airport country india improvements that need to be made from scratch are you working on this airport yes airport priority if event runway world us capital low low priority | 1 |
462,902 | 13,256,006,041 | IssuesEvent | 2020-08-20 11:58:20 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.transfermarkt.de - The country flag overlaps the site's logo | browser-fenix engine-gecko priority-normal severity-minor type-css-moz-document | <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.transfermarkt.de/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: language selection at the top is broken (logos interfering with each other)
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.transfermarkt.de - The country flag overlaps the site's logo - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.transfermarkt.de/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: language selection at the top is broken (logos interfering with each other)
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | the country flag overlaps the site s logo url browser version firefox mobile operating system android tested another browser yes problem type design is broken description language selection at the top is broken logos interfering with each other steps to reproduce browser configuration none from with ❤️ | 1 |
10,800 | 9,104,562,527 | IssuesEvent | 2019-02-20 18:30:11 | Azure/azure-powershell | https://api.github.com/repos/Azure/azure-powershell | closed | Azure Automation cmdlets allows illegal characters (hypens) in DSC configuration name. GUI prevents deletion of resources with hypens in name. | Automation Service Attention automation-dsc | ### Description
<!-- Please provide a description of the issue you are facing -->
### Script/Steps for Reproduction
<!-- Please provide the necessary script(s) that reproduce the issue -->
```powershell
Import-AzAutomationDscConfiguration @AAParams -sourcepath configuration-with-hypens.ps1
```
Then try to delete it from the GUI.

### Module Version
0.4.0
### Workaround
Use underscores which are a supported character
### Recommended Fix
Add the input validation to the Azure Automation API backend, don't rely on clients to do the input validation (GUI does it, Az cmdlets don't)
| 1.0 | Azure Automation cmdlets allows illegal characters (hypens) in DSC configuration name. GUI prevents deletion of resources with hypens in name. - ### Description
<!-- Please provide a description of the issue you are facing -->
### Script/Steps for Reproduction
<!-- Please provide the necessary script(s) that reproduce the issue -->
```powershell
Import-AzAutomationDscConfiguration @AAParams -sourcepath configuration-with-hypens.ps1
```
Then try to delete it from the GUI.

### Module Version
0.4.0
### Workaround
Use underscores which are a supported character
### Recommended Fix
Add the input validation to the Azure Automation API backend, don't rely on clients to do the input validation (GUI does it, Az cmdlets don't)
| non_priority | azure automation cmdlets allows illegal characters hypens in dsc configuration name gui prevents deletion of resources with hypens in name description script steps for reproduction powershell import azautomationdscconfiguration aaparams sourcepath configuration with hypens then try to delete it from the gui module version workaround use underscores which are a supported character recommended fix add the input validation to the azure automation api backend don t rely on clients to do the input validation gui does it az cmdlets don t | 0 |
284,473 | 21,427,654,866 | IssuesEvent | 2022-04-23 00:01:48 | BCDevOps/developer-experience | https://api.github.com/repos/BCDevOps/developer-experience | opened | Update CrunchyDB example to fit in the default namespace quota | documentation team/DXC | **Describe the issue**
The default namespace has a very small quota and the defaults resources don't really fit in there. Add to the example resource configs for all the containers so that it fits better in an HA config.
**Additional context**
https://github.com/bcgov/how-to-workshops/blob/master/crunchydb/high-availablility/PostgresCluster.yaml
```
oc explain postgrescluster.spec.monitoring.pgmonitor.exporter.resources
oc explain postgrescluster.spec.proxy.pgBouncer.resources
oc explain postgrescluster.spec.backups.pgbackrest.repoHost.resources
oc explain postgrescluster.spec.instances.resources
```
**Definition of done**
Example HA PostgresCluster can be run in the default quota namespace
| 1.0 | Update CrunchyDB example to fit in the default namespace quota - **Describe the issue**
The default namespace has a very small quota and the defaults resources don't really fit in there. Add to the example resource configs for all the containers so that it fits better in an HA config.
**Additional context**
https://github.com/bcgov/how-to-workshops/blob/master/crunchydb/high-availablility/PostgresCluster.yaml
```
oc explain postgrescluster.spec.monitoring.pgmonitor.exporter.resources
oc explain postgrescluster.spec.proxy.pgBouncer.resources
oc explain postgrescluster.spec.backups.pgbackrest.repoHost.resources
oc explain postgrescluster.spec.instances.resources
```
**Definition of done**
Example HA PostgresCluster can be run in the default quota namespace
| non_priority | update crunchydb example to fit in the default namespace quota describe the issue the default namespace has a very small quota and the defaults resources don t really fit in there add to the example resource configs for all the containers so that it fits better in an ha config additional context oc explain postgrescluster spec monitoring pgmonitor exporter resources oc explain postgrescluster spec proxy pgbouncer resources oc explain postgrescluster spec backups pgbackrest repohost resources oc explain postgrescluster spec instances resources definition of done example ha postgrescluster can be run in the default quota namespace | 0 |
374,249 | 26,108,282,205 | IssuesEvent | 2022-12-27 15:57:55 | TrixiEther/SmileBot | https://api.github.com/repos/TrixiEther/SmileBot | closed | Command processing mechanism | documentation enhancement | **Develop a bot management mechanism on the server**
Comand template:
![bot control word] [main commands] [flags]
Implement(for now):
1. Bot initialization on the server
2. Removing server-related information
3. Bot reinitialization on the server
4. Emoji usage statistics(type, used in message, used as reaction, summary)
| 1.0 | Command processing mechanism - **Develop a bot management mechanism on the server**
Comand template:
![bot control word] [main commands] [flags]
Implement(for now):
1. Bot initialization on the server
2. Removing server-related information
3. Bot reinitialization on the server
4. Emoji usage statistics(type, used in message, used as reaction, summary)
| non_priority | command processing mechanism develop a bot management mechanism on the server comand template implement for now bot initialization on the server removing server related information bot reinitialization on the server emoji usage statistics type used in message used as reaction summary | 0 |
118,457 | 9,991,276,978 | IssuesEvent | 2019-07-11 10:42:25 | mozilla-mobile/firefox-ios | https://api.github.com/repos/mozilla-mobile/firefox-ios | opened | [Sync Integration] Tear down method is failing when removing the fxaccount | Test Automation :robot: | The tests are passing but the final status reported is red due to this FxA [issue](https://github.com/mozilla/fxa/issues/1740)
If it is not solved soon, we will disable that last part so that the status after tests is correct. | 1.0 | [Sync Integration] Tear down method is failing when removing the fxaccount - The tests are passing but the final status reported is red due to this FxA [issue](https://github.com/mozilla/fxa/issues/1740)
If it is not solved soon, we will disable that last part so that the status after tests is correct. | non_priority | tear down method is failing when removing the fxaccount the tests are passing but the final status reported is red due to this fxa if it is not solved soon we will disable that last part so that the status after tests is correct | 0 |
208,084 | 7,135,642,845 | IssuesEvent | 2018-01-23 02:11:53 | cssconf/2018.cssconf.eu | https://api.github.com/repos/cssconf/2018.cssconf.eu | closed | Frontpage: Add "Arena, Eichenstraße 4" | priority: high | Below the city, this should have and extra line with the venue, as people keep asking about it.
The line should also be a link to a marker on googlemaps.
I suggest to use the same font-size and green color.

| 1.0 | Frontpage: Add "Arena, Eichenstraße 4" - Below the city, this should have and extra line with the venue, as people keep asking about it.
The line should also be a link to a marker on googlemaps.
I suggest to use the same font-size and green color.

| priority | frontpage add arena eichenstraße below the city this should have and extra line with the venue as people keep asking about it the line should also be a link to a marker on googlemaps i suggest to use the same font size and green color | 1 |
649,963 | 21,330,775,027 | IssuesEvent | 2022-04-18 08:03:30 | ZapSquared/Quaver | https://api.github.com/repos/ZapSquared/Quaver | closed | Everytime Quaver gets disconnected or gets an error from Lavalink, treats it as a crash. | type:bug released on @next affects:functionality priority:p0 | **Describe the bug**
What isn't working as intended, and what does it affect?
https://github.com/ZapSquared/Quaver/blob/21b864d268e172f02bf7bdd4bb9522ced5807e7d/events/music/error.js#L1-L11
Lavalink disconnection and error listeners and how it shuts down.
Affects: User experience, functionality
**Affected versions**
What versions are affected by this bug? (e.g. >=3.0.1, 2.5.1-2.6.3, >=1.2.0)
3.4.0-next.10 and so on.
**Steps to reproduce**
Steps to reproduce the behavior. (e.g. click on a button, enter a value, etc. and see error)
1. Run Quaver
2. Run Lavalink
3. Play a track on Quaver on stage or voice channel
4. While a track is running, stop your Lavalink instance.
**Expected behavior**
What is expected to happen?
Not making Quaver treat Lavalink disconnection or errors as crashes since it's outside Quaver.
**Actual behavior**
What actually happens? Attach or add errors or screenshots here as well.
As Lavalink encounters an error, Quaver has a listener `bot.music.on('error, err`
It will shut down the process with an eventType of nothing.
shuttingDown does not filter that, so treats it as a crash.
https://github.com/ZapSquared/Quaver/blob/21b864d268e172f02bf7bdd4bb9522ced5807e7d/main.js#L120 | 1.0 | Everytime Quaver gets disconnected or gets an error from Lavalink, treats it as a crash. - **Describe the bug**
What isn't working as intended, and what does it affect?
https://github.com/ZapSquared/Quaver/blob/21b864d268e172f02bf7bdd4bb9522ced5807e7d/events/music/error.js#L1-L11
Lavalink disconnection and error listeners and how it shuts down.
Affects: User experience, functionality
**Affected versions**
What versions are affected by this bug? (e.g. >=3.0.1, 2.5.1-2.6.3, >=1.2.0)
3.4.0-next.10 and so on.
**Steps to reproduce**
Steps to reproduce the behavior. (e.g. click on a button, enter a value, etc. and see error)
1. Run Quaver
2. Run Lavalink
3. Play a track on Quaver on stage or voice channel
4. While a track is running, stop your Lavalink instance.
**Expected behavior**
What is expected to happen?
Not making Quaver treat Lavalink disconnection or errors as crashes since it's outside Quaver.
**Actual behavior**
What actually happens? Attach or add errors or screenshots here as well.
As Lavalink encounters an error, Quaver has a listener `bot.music.on('error, err`
It will shut down the process with an eventType of nothing.
shuttingDown does not filter that, so treats it as a crash.
https://github.com/ZapSquared/Quaver/blob/21b864d268e172f02bf7bdd4bb9522ced5807e7d/main.js#L120 | priority | everytime quaver gets disconnected or gets an error from lavalink treats it as a crash describe the bug what isn t working as intended and what does it affect lavalink disconnection and error listeners and how it shuts down affects user experience functionality affected versions what versions are affected by this bug e g next and so on steps to reproduce steps to reproduce the behavior e g click on a button enter a value etc and see error run quaver run lavalink play a track on quaver on stage or voice channel while a track is running stop your lavalink instance expected behavior what is expected to happen not making quaver treat lavalink disconnection or errors as crashes since it s outside quaver actual behavior what actually happens attach or add errors or screenshots here as well as lavalink encounters an error quaver has a listener bot music on error err it will shut down the process with an eventtype of nothing shuttingdown does not filter that so treats it as a crash | 1 |
18,072 | 10,879,151,278 | IssuesEvent | 2019-11-16 22:58:50 | Azure/azure-sdk-for-net | https://api.github.com/repos/Azure/azure-sdk-for-net | closed | Create Options type to hold showStats and modelVersion | Client Cognitive Services TextAnalytics | Instead of taking both of these as parameters to the "kitchen sink" methods, we should have an options type. | 1.0 | Create Options type to hold showStats and modelVersion - Instead of taking both of these as parameters to the "kitchen sink" methods, we should have an options type. | non_priority | create options type to hold showstats and modelversion instead of taking both of these as parameters to the kitchen sink methods we should have an options type | 0 |
396,235 | 27,108,398,399 | IssuesEvent | 2023-02-15 13:46:28 | cilium/cilium | https://api.github.com/repos/cilium/cilium | closed | Document that `install-egress-gateway-routes` is specific to EKS | area/documentation good-first-issue feature/egress-gateway | The `install-egress-gateway-routes` flag, aka the `egressGateway.installRoutes` Helm value, is needed on EKS's ENI mode to steer traffic through the proper interfaces.
It is [documented in the egress gateway guide](https://docs.cilium.io/en/stable/gettingstarted/egress-gateway/#compatibility-with-cloud-environments), but we never mention that it's only meant for ENI mode. We should fix that as it can break things on other setups (e.g., known to cause issues on GKE). | 1.0 | Document that `install-egress-gateway-routes` is specific to EKS - The `install-egress-gateway-routes` flag, aka the `egressGateway.installRoutes` Helm value, is needed on EKS's ENI mode to steer traffic through the proper interfaces.
It is [documented in the egress gateway guide](https://docs.cilium.io/en/stable/gettingstarted/egress-gateway/#compatibility-with-cloud-environments), but we never mention that it's only meant for ENI mode. We should fix that as it can break things on other setups (e.g., known to cause issues on GKE). | non_priority | document that install egress gateway routes is specific to eks the install egress gateway routes flag aka the egressgateway installroutes helm value is needed on eks s eni mode to steer traffic through the proper interfaces it is but we never mention that it s only meant for eni mode we should fix that as it can break things on other setups e g known to cause issues on gke | 0 |
162,075 | 6,147,325,582 | IssuesEvent | 2017-06-27 15:30:32 | angular/angular-cli | https://api.github.com/repos/angular/angular-cli | closed | ERROR in Maximum call stack size exceeded : Feature module circular dependency | freq2: medium priority: 2 (required) type: bug | <!--
IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION YOUR ISSUE MIGHT BE CLOSED WITHOUT INVESTIGATING
-->
### Bug Report or Feature Request (mark with an `x`)
```
- [X] bug report -> please search issues before submitting
- [ ] feature request
```
### Versions.
<!--
Output from: `ng --version`.
If nothing, output from: `node --version` and `npm --version`.
Windows (7/8/10). Linux (incl. distribution). macOS (El Capitan? Sierra?)
-->
> @angular/cli: 1.0.0-rc.2
node: 6.10.0
os: linux x64
@angular/common: 2.4.10
@angular/compiler: 2.4.10
@angular/core: 2.4.10
@angular/forms: 2.4.10
@angular/http: 2.4.10
@angular/platform-browser: 2.4.10
@angular/platform-browser-dynamic: 2.4.10
@angular/router: 3.4.10
@angular/cli: 1.0.0-rc.2
@angular/compiler-cli: 2.4.10
### Repro steps.
<!--
Simple steps to reproduce this bug.
Please include: commands run, packages added, related code changes.
A link to a sample repo would help too.
-->
**STEPS TO REPRODUCE**
See git repo
https://github.com/vaibhavbparikh/circular-routes
### The log given by the failure.
<!-- Normally this include a stack trace and some more information. -->
> ** NG Live Development Server is running on http://localhost:4200 **
Hash: f669143e24ec08ff9586
Time: 23223ms
chunk {0} polyfills.bundle.js, polyfills.bundle.js.map (polyfills) 232 kB {4} [initial] [rendered]
chunk {1} main.bundle.js, main.bundle.js.map (main) 332 kB {3} [initial] [rendered]
chunk {2} styles.bundle.js, styles.bundle.js.map (styles) 304 kB {4} [initial] [rendered]
chunk {3} vendor.bundle.js, vendor.bundle.js.map (vendor) 5.64 MB [initial] [rendered]
chunk {4} inline.bundle.js, inline.bundle.js.map (inline) 0 bytes [entry] [rendered]
ERROR in Maximum call stack size exceeded
webpack: Failed to compile.
However when I do any changes in file and save it again, angular-cli compiles it again and it runs fine then.
### Desired functionality.
<!--
What would like to see implemented?
What is the usecase?
-->
This behaviour shall not create cyclic dependency, there should be mechanism to detect this and break.
Either it shall not compile at all or it shall not throw error at first compile.
** USE CASE **
> I have 3 Feature modules in the application. Each Feature modules contains 3 or more sub feature modules with their own components and specially **routes** with router module.
> Now consider below scenario.
> I have modules as below.
>
> - Feature 1
> - Feature1Sub Feature 1
> - Feature1Sub Feature 2
> - Feature 2
> - Feature2Sub Feature 1
> - Feature2Sub Feature 2
> - Feature 3
> - Feature3Sub Feature 1
> - Feature3Sub Feature 2
>
> Now Consider below flow.
> I am loading Feature 1 -> Feature1Sub Feature 1. I want to load Feature2Sub Feature 1 directly from Feature1Sub Feature 1. Without loading whole Feature 2 module. Also, I need to maintain navigation flow with back button.
> So I will need to write routes and load that module with loadchildren feature in module of Feature1Sub Feature 1.
> Same way I want to load Feature1Sub Feature 1 directly from Feature2SubFeature 1 without loading whole Feature 2 module. So again I will need to write routes in Feature1Sub Feature 1 module.
>
As a complex and large application, there might be need to create flow where two or more modules needs each other with routing.
### Mention any other details that might be useful.
<!-- Please include a link to the repo if this is related to an OSS project. -->
| 1.0 | ERROR in Maximum call stack size exceeded : Feature module circular dependency - <!--
IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION YOUR ISSUE MIGHT BE CLOSED WITHOUT INVESTIGATING
-->
### Bug Report or Feature Request (mark with an `x`)
```
- [X] bug report -> please search issues before submitting
- [ ] feature request
```
### Versions.
<!--
Output from: `ng --version`.
If nothing, output from: `node --version` and `npm --version`.
Windows (7/8/10). Linux (incl. distribution). macOS (El Capitan? Sierra?)
-->
> @angular/cli: 1.0.0-rc.2
node: 6.10.0
os: linux x64
@angular/common: 2.4.10
@angular/compiler: 2.4.10
@angular/core: 2.4.10
@angular/forms: 2.4.10
@angular/http: 2.4.10
@angular/platform-browser: 2.4.10
@angular/platform-browser-dynamic: 2.4.10
@angular/router: 3.4.10
@angular/cli: 1.0.0-rc.2
@angular/compiler-cli: 2.4.10
### Repro steps.
<!--
Simple steps to reproduce this bug.
Please include: commands run, packages added, related code changes.
A link to a sample repo would help too.
-->
**STEPS TO REPRODUCE**
See git repo
https://github.com/vaibhavbparikh/circular-routes
### The log given by the failure.
<!-- Normally this include a stack trace and some more information. -->
> ** NG Live Development Server is running on http://localhost:4200 **
Hash: f669143e24ec08ff9586
Time: 23223ms
chunk {0} polyfills.bundle.js, polyfills.bundle.js.map (polyfills) 232 kB {4} [initial] [rendered]
chunk {1} main.bundle.js, main.bundle.js.map (main) 332 kB {3} [initial] [rendered]
chunk {2} styles.bundle.js, styles.bundle.js.map (styles) 304 kB {4} [initial] [rendered]
chunk {3} vendor.bundle.js, vendor.bundle.js.map (vendor) 5.64 MB [initial] [rendered]
chunk {4} inline.bundle.js, inline.bundle.js.map (inline) 0 bytes [entry] [rendered]
ERROR in Maximum call stack size exceeded
webpack: Failed to compile.
However when I do any changes in file and save it again, angular-cli compiles it again and it runs fine then.
### Desired functionality.
<!--
What would like to see implemented?
What is the usecase?
-->
This behaviour shall not create cyclic dependency, there should be mechanism to detect this and break.
Either it shall not compile at all or it shall not throw error at first compile.
** USE CASE **
> I have 3 Feature modules in the application. Each Feature modules contains 3 or more sub feature modules with their own components and specially **routes** with router module.
> Now consider below scenario.
> I have modules as below.
>
> - Feature 1
> - Feature1Sub Feature 1
> - Feature1Sub Feature 2
> - Feature 2
> - Feature2Sub Feature 1
> - Feature2Sub Feature 2
> - Feature 3
> - Feature3Sub Feature 1
> - Feature3Sub Feature 2
>
> Now Consider below flow.
> I am loading Feature 1 -> Feature1Sub Feature 1. I want to load Feature2Sub Feature 1 directly from Feature1Sub Feature 1. Without loading whole Feature 2 module. Also, I need to maintain navigation flow with back button.
> So I will need to write routes and load that module with loadchildren feature in module of Feature1Sub Feature 1.
> Same way I want to load Feature1Sub Feature 1 directly from Feature2SubFeature 1 without loading whole Feature 2 module. So again I will need to write routes in Feature1Sub Feature 1 module.
>
As a complex and large application, there might be need to create flow where two or more modules needs each other with routing.
### Mention any other details that might be useful.
<!-- Please include a link to the repo if this is related to an OSS project. -->
| priority | error in maximum call stack size exceeded feature module circular dependency if you don t fill out the following information your issue might be closed without investigating bug report or feature request mark with an x bug report please search issues before submitting feature request versions output from ng version if nothing output from node version and npm version windows linux incl distribution macos el capitan sierra angular cli rc node os linux angular common angular compiler angular core angular forms angular http angular platform browser angular platform browser dynamic angular router angular cli rc angular compiler cli repro steps simple steps to reproduce this bug please include commands run packages added related code changes a link to a sample repo would help too steps to reproduce see git repo the log given by the failure ng live development server is running on hash time chunk polyfills bundle js polyfills bundle js map polyfills kb chunk main bundle js main bundle js map main kb chunk styles bundle js styles bundle js map styles kb chunk vendor bundle js vendor bundle js map vendor mb chunk inline bundle js inline bundle js map inline bytes error in maximum call stack size exceeded webpack failed to compile however when i do any changes in file and save it again angular cli compiles it again and it runs fine then desired functionality what would like to see implemented what is the usecase this behaviour shall not create cyclic dependency there should be mechanism to detect this and break either it shall not compile at all or it shall not throw error at first compile use case i have feature modules in the application each feature modules contains or more sub feature modules with their own components and specially routes with router module now consider below scenario i have modules as below feature feature feature feature feature feature feature feature feature now consider below flow i am loading feature feature i want to load feature directly from feature without loading whole feature module also i need to maintain navigation flow with back button so i will need to write routes and load that module with loadchildren feature in module of feature same way i want to load feature directly from without loading whole feature module so again i will need to write routes in feature module as a complex and large application there might be need to create flow where two or more modules needs each other with routing mention any other details that might be useful | 1 |
8,944 | 4,364,784,070 | IssuesEvent | 2016-08-03 08:23:11 | CartoDB/cartodb | https://api.github.com/repos/CartoDB/cartodb | closed | Implement endpoint to store the builder onboarding seen status | backend Builder | For the new builder onboarding (see https://github.com/CartoDB/cartodb/issues/8887) we need a new endpoint to store the seen status of the screen. This way, if the user checks the `Never show me this message` option we'll set certain flag to true in the user model, and we won't disturb he or her again. | 1.0 | Implement endpoint to store the builder onboarding seen status - For the new builder onboarding (see https://github.com/CartoDB/cartodb/issues/8887) we need a new endpoint to store the seen status of the screen. This way, if the user checks the `Never show me this message` option we'll set certain flag to true in the user model, and we won't disturb he or her again. | non_priority | implement endpoint to store the builder onboarding seen status for the new builder onboarding see we need a new endpoint to store the seen status of the screen this way if the user checks the never show me this message option we ll set certain flag to true in the user model and we won t disturb he or her again | 0 |
15,600 | 3,476,591,721 | IssuesEvent | 2015-12-27 02:56:38 | PolarisSS13/Polaris | https://api.github.com/repos/PolarisSS13/Polaris | closed | Z8 Carbine Grenade Launcher | BS12 Bug Needs Retesting | Doesn't work, the grenade option won't shoot the grenade, and acts as the semi-automatic option. | 1.0 | Z8 Carbine Grenade Launcher - Doesn't work, the grenade option won't shoot the grenade, and acts as the semi-automatic option. | non_priority | carbine grenade launcher doesn t work the grenade option won t shoot the grenade and acts as the semi automatic option | 0 |
5,897 | 2,580,630,259 | IssuesEvent | 2015-02-13 18:57:51 | GoogleCloudPlatform/kubernetes | https://api.github.com/repos/GoogleCloudPlatform/kubernetes | closed | Return 201 when objects are created | api/cluster area/usability priority/P2 status/help-wanted team/UX | Currently we return 200 when a new object is created, spec:
> If a resource has been created on the origin server, the response SHOULD be 201 (Created) and contain an entity which describes the status of the request and refers to the new resource
Extracted from #1782 | 1.0 | Return 201 when objects are created - Currently we return 200 when a new object is created, spec:
> If a resource has been created on the origin server, the response SHOULD be 201 (Created) and contain an entity which describes the status of the request and refers to the new resource
Extracted from #1782 | priority | return when objects are created currently we return when a new object is created spec if a resource has been created on the origin server the response should be created and contain an entity which describes the status of the request and refers to the new resource extracted from | 1 |
257,241 | 27,561,820,778 | IssuesEvent | 2023-03-07 22:48:27 | samqws-marketing/coursera_naptime | https://api.github.com/repos/samqws-marketing/coursera_naptime | closed | CVE-2016-10540 (High) detected in npm-2.11.2.jar - autoclosed | Mend: dependency security vulnerability | ## CVE-2016-10540 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>npm-2.11.2.jar</b></p></summary>
<p>WebJar for npm</p>
<p>Library home page: <a href="http://webjars.org">http://webjars.org</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/org.webjars/npm/jars/npm-2.11.2.jar</p>
<p>
Dependency Hierarchy:
- sbt-plugin-2.4.4.jar (Root Library)
- sbt-js-engine-1.1.3.jar
- npm_2.10-1.1.1.jar
- :x: **npm-2.11.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/coursera_naptime/commit/95750513b615ecf0ea9b7e14fb5f71e577d01a1f">95750513b615ecf0ea9b7e14fb5f71e577d01a1f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.
<p>Publish Date: 2018-05-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-10540>CVE-2016-10540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-10540">https://nvd.nist.gov/vuln/detail/CVE-2016-10540</a></p>
<p>Release Date: 2018-05-31</p>
<p>Fix Resolution: Pvc.Browserify - 0.0.1.1;JetBrains.Rider.Frontend4 - 203.0.20201014.202610-eap04;JetBrains.Rider.Frontend5 - 203.0.20201006.200056-eap03,213.0.20211008.154703-eap03;Bridge.AWS - 0.3.30.36;tslint - 3.11.0;MIDIator.WebClient - 1.0.105;BumperLane.Public.Service.Contracts - 0.23.35.214-prerelease;ng-grid - 2.0.4;minimatch - 3.0.2;Virteom.Tenant.Mobile.Bluetooth - 0.21.29.159-prerelease;ShowingVault.DotNet.Sdk - 0.13.41.190-prerelease;Triarc.Web.Build - 1.3.0;Virteom.Tenant.Mobile.Framework.UWP - 0.20.41.103-prerelease;Virteom.Tenant.Mobile.Framework.iOS - 0.20.41.103-prerelease;BumperLane.Public.Api.V2.ClientModule - 0.23.35.214-prerelease;Virteom.Tenant.Mobile.Bluetooth.iOS - 0.20.41.103-prerelease;Virteom.Public.Utilities - 0.23.37.212-prerelease;Mustache.Reports.Data - 1.2.2;Virteom.Tenant.Mobile.Framework - 0.21.29.159-prerelease;Virteom.Tenant.Mobile.Bluetooth.Android - 0.20.41.103-prerelease;z4a-dotnet-scaffold - 1.0.0.2;Raml.Parser - 2.0.0,1.0.2;AntData.ORM - 1.2.9;ApiExplorer.HelpPage - 1.0.0-alpha3;SitecoreMaster.TrueDynamicPlaceholders - 1.0.3;Virteom.Tenant.Mobile.Framework.Android - 0.20.41.103-prerelease;BumperLane.Public.Api.Client - 0.23.35.214-prerelease</p>
</p>
</details>
<p></p>
| True | CVE-2016-10540 (High) detected in npm-2.11.2.jar - autoclosed - ## CVE-2016-10540 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>npm-2.11.2.jar</b></p></summary>
<p>WebJar for npm</p>
<p>Library home page: <a href="http://webjars.org">http://webjars.org</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/org.webjars/npm/jars/npm-2.11.2.jar</p>
<p>
Dependency Hierarchy:
- sbt-plugin-2.4.4.jar (Root Library)
- sbt-js-engine-1.1.3.jar
- npm_2.10-1.1.1.jar
- :x: **npm-2.11.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/coursera_naptime/commit/95750513b615ecf0ea9b7e14fb5f71e577d01a1f">95750513b615ecf0ea9b7e14fb5f71e577d01a1f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.
<p>Publish Date: 2018-05-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-10540>CVE-2016-10540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-10540">https://nvd.nist.gov/vuln/detail/CVE-2016-10540</a></p>
<p>Release Date: 2018-05-31</p>
<p>Fix Resolution: Pvc.Browserify - 0.0.1.1;JetBrains.Rider.Frontend4 - 203.0.20201014.202610-eap04;JetBrains.Rider.Frontend5 - 203.0.20201006.200056-eap03,213.0.20211008.154703-eap03;Bridge.AWS - 0.3.30.36;tslint - 3.11.0;MIDIator.WebClient - 1.0.105;BumperLane.Public.Service.Contracts - 0.23.35.214-prerelease;ng-grid - 2.0.4;minimatch - 3.0.2;Virteom.Tenant.Mobile.Bluetooth - 0.21.29.159-prerelease;ShowingVault.DotNet.Sdk - 0.13.41.190-prerelease;Triarc.Web.Build - 1.3.0;Virteom.Tenant.Mobile.Framework.UWP - 0.20.41.103-prerelease;Virteom.Tenant.Mobile.Framework.iOS - 0.20.41.103-prerelease;BumperLane.Public.Api.V2.ClientModule - 0.23.35.214-prerelease;Virteom.Tenant.Mobile.Bluetooth.iOS - 0.20.41.103-prerelease;Virteom.Public.Utilities - 0.23.37.212-prerelease;Mustache.Reports.Data - 1.2.2;Virteom.Tenant.Mobile.Framework - 0.21.29.159-prerelease;Virteom.Tenant.Mobile.Bluetooth.Android - 0.20.41.103-prerelease;z4a-dotnet-scaffold - 1.0.0.2;Raml.Parser - 2.0.0,1.0.2;AntData.ORM - 1.2.9;ApiExplorer.HelpPage - 1.0.0-alpha3;SitecoreMaster.TrueDynamicPlaceholders - 1.0.3;Virteom.Tenant.Mobile.Framework.Android - 0.20.41.103-prerelease;BumperLane.Public.Api.Client - 0.23.35.214-prerelease</p>
</p>
</details>
<p></p>
| non_priority | cve high detected in npm jar autoclosed cve high severity vulnerability vulnerable library npm jar webjar for npm library home page a href path to vulnerable library home wss scanner cache org webjars npm jars npm jar dependency hierarchy sbt plugin jar root library sbt js engine jar npm jar x npm jar vulnerable library found in head commit a href found in base branch master vulnerability details minimatch is a minimal matching utility that works by converting glob expressions into javascript regexp objects the primary function minimatch path pattern in minimatch and earlier is vulnerable to redos in the pattern parameter publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution pvc browserify jetbrains rider jetbrains rider bridge aws tslint midiator webclient bumperlane public service contracts prerelease ng grid minimatch virteom tenant mobile bluetooth prerelease showingvault dotnet sdk prerelease triarc web build virteom tenant mobile framework uwp prerelease virteom tenant mobile framework ios prerelease bumperlane public api clientmodule prerelease virteom tenant mobile bluetooth ios prerelease virteom public utilities prerelease mustache reports data virteom tenant mobile framework prerelease virteom tenant mobile bluetooth android prerelease dotnet scaffold raml parser antdata orm apiexplorer helppage sitecoremaster truedynamicplaceholders virteom tenant mobile framework android prerelease bumperlane public api client prerelease | 0 |
109,943 | 23,846,434,993 | IssuesEvent | 2022-09-06 14:21:22 | arturo-lang/arturo | https://api.github.com/repos/arturo-lang/arturo | closed | [VM/bytecode] replace zippy with already existing functions? | enhancement vm todo performance bytecode | [VM/bytecode] replace zippy with already existing functions?
we could somehow use some of the existing miniz functions, to avoid the extra dependency
https://github.com/arturo-lang/arturo/blob/f194d4353079ce45693f443e9c9ec43f62c2fc74/src/vm/bytecode.nim#L16
```text
import streams
# TODO(VM/bytecode) replace zippy with already existing functions?
# we could somehow use some of the existing miniz functions, to avoid the extra dependency
# labels: vm, bytecode, enhancement, performance
import extras/zippy
import os
# when not defined(NOUNZIP):
# import extras/miniz
import helpers/bytes as bytesHelper
```
165f61c38ddee10401677fe8ce026b9c7c93c435 | 1.0 | [VM/bytecode] replace zippy with already existing functions? - [VM/bytecode] replace zippy with already existing functions?
we could somehow use some of the existing miniz functions, to avoid the extra dependency
https://github.com/arturo-lang/arturo/blob/f194d4353079ce45693f443e9c9ec43f62c2fc74/src/vm/bytecode.nim#L16
```text
import streams
# TODO(VM/bytecode) replace zippy with already existing functions?
# we could somehow use some of the existing miniz functions, to avoid the extra dependency
# labels: vm, bytecode, enhancement, performance
import extras/zippy
import os
# when not defined(NOUNZIP):
# import extras/miniz
import helpers/bytes as bytesHelper
```
165f61c38ddee10401677fe8ce026b9c7c93c435 | non_priority | replace zippy with already existing functions replace zippy with already existing functions we could somehow use some of the existing miniz functions to avoid the extra dependency text import streams todo vm bytecode replace zippy with already existing functions we could somehow use some of the existing miniz functions to avoid the extra dependency labels vm bytecode enhancement performance import extras zippy import os when not defined nounzip import extras miniz import helpers bytes as byteshelper | 0 |
688,897 | 23,599,309,968 | IssuesEvent | 2022-08-23 23:06:07 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | opened | Create CalcSpatialInertiaAboutPoint() to mimic related functions in multibody_plant.h and multibody_tree.h. | type: feature request priority: medium component: multibody plant type: usability release notes: feature | The implementation of a helper function ComputeCompositeInertia() arose during work in an Anzu [PR](#https://reviewable.shared-services.aws.tri.global/reviews/robotics/anzu/9323#-).
The documentation and signature in that Anzu PR is:
/// Returns `M_CFo_F` spatial inertia of `C` (composition of `bodies`)
/// expressed in frame `F` about its origin `Fo`.
drake::multibody::SpatialInertia<double> ComputeCompositeInertia(
const drake::multibody::MultibodyPlant<double>& plant,
const drake::systems::Context<double>& context,
const drake::multibody::Frame<double>& F,
const std::vector<const drake::multibody::Body<double>*>& bodies);
Since this useful feature deserves a more general function (not limited to SpatialInertia<double>), it is worth creating one that more closely matches similar existing functions such as CalcTotalMass() and CalcSpatialMomentumInWorldAboutPoint() in MultibodyPlant and MultibodyTree, i.e., create a function similar to:
in multibody_plant.h
// Returns M_SFo_F, the spatial inertia of a system S of bodies about the
// origin of a frame F, expressed in frame F.
// @param[in] context Contains the state of the model.
// @param[in] frame_F specifies the about-point Fo (frame_F's origin) and
// the expressed-in frame for the returned spatial inertia.
// @param[in] body_indexes Array of selected bodies. This method does not
// distinguish between welded bodies, joint-connected bodies,
// floating bodies, the world body, or repeated bodies.
// @throws std::exception if body_indexes contains an invalid BodyIndex.
SpatialInertia<T> CalcBodiesSpatialInertiaAboutPoint(
const systems::Context<T>& context,
const Frame<T>& frame_F,
const std::vector<BodyIndex>& body_indexes) const;
Related methods are listed below.
From multibody_tree.h
CalcBodiesSpatialMomentumInWorldAboutWo
SpatialMomentum<T> CalcBodiesSpatialMomentumInWorldAboutWo(
const systems::Context<T>& context,
const std::vector<BodyIndex>& body_indexes) const;
From multibody_plant.h
SpatialMomentum<T> CalcSpatialMomentumInWorldAboutPoint(
const systems::Context<T>& context,
const Vector3<T>& p_WoP_W) const {
From multibody_plant.h
SpatialMomentum<T> CalcSpatialMomentumInWorldAboutPoint(
const systems::Context<T>& context,
const std::vector<ModelInstanceIndex>& model_instances,
const Vector3<T>& p_WoP_W) const { | 1.0 | Create CalcSpatialInertiaAboutPoint() to mimic related functions in multibody_plant.h and multibody_tree.h. - The implementation of a helper function ComputeCompositeInertia() arose during work in an Anzu [PR](#https://reviewable.shared-services.aws.tri.global/reviews/robotics/anzu/9323#-).
The documentation and signature in that Anzu PR is:
/// Returns `M_CFo_F` spatial inertia of `C` (composition of `bodies`)
/// expressed in frame `F` about its origin `Fo`.
drake::multibody::SpatialInertia<double> ComputeCompositeInertia(
const drake::multibody::MultibodyPlant<double>& plant,
const drake::systems::Context<double>& context,
const drake::multibody::Frame<double>& F,
const std::vector<const drake::multibody::Body<double>*>& bodies);
Since this useful feature deserves a more general function (not limited to SpatialInertia<double>), it is worth creating one that more closely matches similar existing functions such as CalcTotalMass() and CalcSpatialMomentumInWorldAboutPoint() in MultibodyPlant and MultibodyTree, i.e., create a function similar to:
in multibody_plant.h
// Returns M_SFo_F, the spatial inertia of a system S of bodies about the
// origin of a frame F, expressed in frame F.
// @param[in] context Contains the state of the model.
// @param[in] frame_F specifies the about-point Fo (frame_F's origin) and
// the expressed-in frame for the returned spatial inertia.
// @param[in] body_indexes Array of selected bodies. This method does not
// distinguish between welded bodies, joint-connected bodies,
// floating bodies, the world body, or repeated bodies.
// @throws std::exception if body_indexes contains an invalid BodyIndex.
SpatialInertia<T> CalcBodiesSpatialInertiaAboutPoint(
const systems::Context<T>& context,
const Frame<T>& frame_F,
const std::vector<BodyIndex>& body_indexes) const;
Related methods are listed below.
From multibody_tree.h
CalcBodiesSpatialMomentumInWorldAboutWo
SpatialMomentum<T> CalcBodiesSpatialMomentumInWorldAboutWo(
const systems::Context<T>& context,
const std::vector<BodyIndex>& body_indexes) const;
From multibody_plant.h
SpatialMomentum<T> CalcSpatialMomentumInWorldAboutPoint(
const systems::Context<T>& context,
const Vector3<T>& p_WoP_W) const {
From multibody_plant.h
SpatialMomentum<T> CalcSpatialMomentumInWorldAboutPoint(
const systems::Context<T>& context,
const std::vector<ModelInstanceIndex>& model_instances,
const Vector3<T>& p_WoP_W) const { | priority | create calcspatialinertiaaboutpoint to mimic related functions in multibody plant h and multibody tree h the implementation of a helper function computecompositeinertia arose during work in an anzu the documentation and signature in that anzu pr is returns m cfo f spatial inertia of c composition of bodies expressed in frame f about its origin fo drake multibody spatialinertia computecompositeinertia const drake multibody multibodyplant plant const drake systems context context const drake multibody frame f const std vector bodies since this useful feature is worthy of a more general function not limited to spatialinertia it is worth creating a function that more closely matches similar functions e g calctotalmass and calcspatialmomentuminworldaboutpoint which are in multibodyplant and multibodytree e g create a function similar to in multibody plant h returns m sfo f the spatial inertia of a system s of bodies about the origin of a frame f expressed in frame f param context contains the state of the model param frame f specifies the about point fo frame f s origin and the expressed in frame for the returned spatial inertia param body indexes array of selected bodies this method does not distinguish between welded bodies joint connected bodies floating bodies the world body or repeated bodies throws std exception if body indexes contains an invalid bodyindex spatialinertia calcbodiesspatialinertiaaboutpoint const systems context context const frame frame f const std vector body indexes const related methods are listed below from multibody tree h calcbodiesspatialmomentuminworldaboutwo spatialmomentum calcbodiesspatialmomentuminworldaboutwo const systems context context const std vector body indexes const from multibody plant h spatialmomentum calcspatialmomentuminworldaboutpoint const systems context context const p wop w const from multibody plant h spatialmomentum calcspatialmomentuminworldaboutpoint const systems context context const std vector model instances const p wop w const | 1 |
262,099 | 27,850,891,933 | IssuesEvent | 2023-03-20 18:36:21 | jgeraigery/Java-Demo | https://api.github.com/repos/jgeraigery/Java-Demo | opened | CVE-2016-1000031 (Critical) detected in commons-fileupload-1.3.1.jar | Mend: dependency security vulnerability | ## CVE-2016-1000031 - Critical Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-fileupload-1.3.1.jar</b></p></summary>
<p>The Apache Commons FileUpload component provides a simple yet flexible means of adding support for multipart
file upload functionality to servlets and web applications.</p>
<p>Library home page: <a href="http://commons.apache.org/proper/commons-fileupload/">http://commons.apache.org/proper/commons-fileupload/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.3.1/commons-fileupload-1.3.1.jar</p>
<p>
Dependency Hierarchy:
- esapi-2.1.0.1.jar (Root Library)
- :x: **commons-fileupload-1.3.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/Java-Demo/commit/c6fb981b15217a4d0cfc36ccf725182fdf783ef1">c6fb981b15217a4d0cfc36ccf725182fdf783ef1</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/critical_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Commons FileUpload before 1.3.3 DiskFileItem File Manipulation Remote Code Execution
<p>Publish Date: 2016-10-25
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-1000031>CVE-2016-1000031</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000031">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000031</a></p>
<p>Release Date: 2016-10-25</p>
<p>Fix Resolution (commons-fileupload:commons-fileupload): 1.3.3</p>
<p>Direct dependency fix Resolution (org.owasp.esapi:esapi): 2.2.0.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | True | CVE-2016-1000031 (Critical) detected in commons-fileupload-1.3.1.jar - ## CVE-2016-1000031 - Critical Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-fileupload-1.3.1.jar</b></p></summary>
<p>The Apache Commons FileUpload component provides a simple yet flexible means of adding support for multipart
file upload functionality to servlets and web applications.</p>
<p>Library home page: <a href="http://commons.apache.org/proper/commons-fileupload/">http://commons.apache.org/proper/commons-fileupload/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.3.1/commons-fileupload-1.3.1.jar</p>
<p>
Dependency Hierarchy:
- esapi-2.1.0.1.jar (Root Library)
- :x: **commons-fileupload-1.3.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/Java-Demo/commit/c6fb981b15217a4d0cfc36ccf725182fdf783ef1">c6fb981b15217a4d0cfc36ccf725182fdf783ef1</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/critical_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Commons FileUpload before 1.3.3 DiskFileItem File Manipulation Remote Code Execution
<p>Publish Date: 2016-10-25
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-1000031>CVE-2016-1000031</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000031">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000031</a></p>
<p>Release Date: 2016-10-25</p>
<p>Fix Resolution (commons-fileupload:commons-fileupload): 1.3.3</p>
<p>Direct dependency fix Resolution (org.owasp.esapi:esapi): 2.2.0.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | non_priority | cve critical detected in commons fileupload jar cve critical severity vulnerability vulnerable library commons fileupload jar the apache commons fileupload component provides a simple yet flexible means of adding support for multipart file upload functionality to servlets and web applications library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository commons fileupload commons fileupload commons fileupload jar dependency hierarchy esapi jar root library x commons fileupload jar vulnerable library found in head commit a href found in base branch main vulnerability details apache commons fileupload before diskfileitem file manipulation remote code execution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution commons fileupload commons fileupload direct dependency fix resolution org owasp esapi esapi rescue worker helmet automatic remediation is available for this issue | 0 |
18,501 | 10,131,208,736 | IssuesEvent | 2019-08-01 18:55:45 | microsoft/PTVS | https://api.github.com/repos/microsoft/PTVS | closed | Tooltips in package manager search results slows down VS, becomes unusable | area:Environments bug tenet:Performance | Open package manager UI
Type pytest in search bar
Hover the mouse over the results
Result: at some point, everything becomes so slow that it is no longer usable. | True | Tooltips in package manager search results slows down VS, becomes unusable - Open package manager UI
Type pytest in search bar
Hover the mouse over the results
Result: at some point, everything becomes so slow that it is no longer usable. | non_priority | tooltips in package manager search results slows down vs becomes unusable open package manager ui type pytest in search bar hover the mouse over the results result at some point everything becomes so slow it is no longer usable | 0 |
206,809 | 7,121,407,882 | IssuesEvent | 2018-01-19 07:35:03 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | API Development: wrong breakdown of bullet points in documentation | 2.2.0 Formatting Priority/Highest Severity/Critical Type/Docs | In [1] there is a wrong breakdown of bullet points in the documentation. Please see the image below. Please correct these.
[1] https://docs.wso2.com/display/AM2xx/Ownership%2C+permission+and+collaborative+API+development+-+Documentation

| 1.0 | API Development: wrong breakdown of bullet points in documentation - In [1] there is a wrong breakdown of bullet points in the documentation. Please see the image below. Please correct these.
[1] https://docs.wso2.com/display/AM2xx/Ownership%2C+permission+and+collaborative+API+development+-+Documentation

| priority | api development wrong breakdown of bullet points in documentation in there is wrong breakdown of bullet points in documentation please see the below image please correct these | 1 |
182,888 | 14,169,802,603 | IssuesEvent | 2020-11-12 13:44:13 | kubernetes-sigs/azurefile-csi-driver | https://api.github.com/repos/kubernetes-sigs/azurefile-csi-driver | closed | print output integration test result | kind/test lifecycle/rotten size/S | **Is your feature request related to a problem?/Why is this needed**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like in detail**
<!-- A clear and concise description of what you want to happen. -->
this PR(https://github.com/kubernetes-sigs/azurefile-csi-driver/pull/208) does not work
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
| 1.0 | print output integration test result - **Is your feature request related to a problem?/Why is this needed**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like in detail**
<!-- A clear and concise description of what you want to happen. -->
this PR(https://github.com/kubernetes-sigs/azurefile-csi-driver/pull/208) does not work
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
| non_priority | print output integration test result is your feature request related to a problem why is this needed describe the solution you d like in detail this pr does not work describe alternatives you ve considered additional context | 0 |
25,600 | 2,683,859,020 | IssuesEvent | 2015-03-28 11:52:43 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | ConEmu 20091224 AutoTabs again | 2–5 stars bug imported Priority-Medium | _From [cca...@gmail.com](https://code.google.com/u/115607388065392232035/) on January 20, 2010 10:06:22_
OS version: WinXP SP3 rus x86
FAR version: Far2 b1345 x86
AutoTabs are enabled. When the editor or viewer is opened, the tab bar
appears with roughly a one-second delay. On exit it disappears with the
same delay.
If this is by design, it is very annoying. If it is an oversight, please
fix it.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=161_ | 1.0 | ConEmu 20091224 AutoTabs again - _From [cca...@gmail.com](https://code.google.com/u/115607388065392232035/) on January 20, 2010 10:06:22_
OS version: WinXP SP3 rus x86
FAR version: Far2 b1345 x86
AutoTabs are enabled. When the editor or viewer is opened, the tab bar
appears with roughly a one-second delay. On exit it disappears with the
same delay.
If this is by design, it is very annoying. If it is an oversight, please
fix it.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=161_ | priority | conemu autotabs again from on january os version winxp rus far version autotabs are enabled when the editor or viewer is opened the tab bar appears with roughly a one second delay on exit it disappears with the same delay if this is by design it is very annoying if it is an oversight please fix it original issue | 1 |
100,387 | 12,519,533,937 | IssuesEvent | 2020-06-03 14:34:46 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | opened | Create Initial Prototype for "Search for my VA representative" | design vsa vsa-ebenefits | ## Goal
Based on existing product documentation, high-level thumbnail workflows, and our experience with other VA Search work, we need to start thinking about how Search workflows will look on regular screens for "Search for a representative."
## Tasks
- [ ] Build out the necessary screens for Representative search that show its first happy path based on an authenticated Veteran
- [ ] Build out the remaining screens for other log in states, users.
## Acceptance Criteria
- [ ] An initial prototype has been created and shared with the team.
| 1.0 | Create Initial Prototype for "Search for my VA representative" - ## Goal
Based on existing product documentation, high-level thumbnail workflows, and our experience with other VA Search work, we need to start thinking about how Search workflows will look on regular screens for "Search for a representative."
## Tasks
- [ ] Build out the necessary screens for Representative search that show its first happy path based on an authenticated Veteran
- [ ] Build out the remaining screens for other log in states, users.
## Acceptance Criteria
- [ ] An initial prototype has been created and shared with the team.
| non_priority | create initial prototype for search for my va representative goal based on existing product documentation high level thumbnail workflows and our experience with other va search work we need to start thinking about how search workflows will look on regular screens for search for a representative tasks build out the necessary screens for representative search that show its first happy path based on an authenticated veteran build out the remaining screens for other log in states users acceptance criteria an initial prototype has been created and shared with the team | 0 |
700,723 | 24,071,682,660 | IssuesEvent | 2022-09-18 08:52:41 | bats-core/bats-core | https://api.github.com/repos/bats-core/bats-core | opened | Test 73 "CTRL-C aborts and fails the current test" hangs when run on a recent Debian sid system | Type: Bug Priority: NeedsTriage | **Describe the bug**
In versions 1.8.0, test "73 CTRL-C aborts and fails the current test" hangs when run on a recent Debian sid system.
Test 73 gets stuck without producing any output. Killing the test with CTRL-C makes the test suit starting the abort procedure, but then the test process gets stuck in `R`unning state. Sending SIGKILL to the test process unblocks the test suite abort procedure.
> ```
> ok 71 Parallel mode works on MacOS with over subscription (issue #433)
> ok 72 Failure in free code (see #399)
> [nothing shown on the terminal; the test process spins using 100% CPU until CTRL-C is pressed]
> ^Cnot ok 73 CTRL-C aborts and fails the current test
> # (in test file test/bats.bats, line 767)
> # `wait $SUBPROCESS_PID && return 1' failed with status 130
>
> # Received SIGINT, aborting ...
>
> ^C^C^C^C^C^C^C^C^C^C^C^C# bats warning: Executed 73 instead of expected 274 tests
> ```
**To Reproduce**
Steps to reproduce the behavior:
1. `docker run --rm -it -v /tmp/d debian:sid`
2. `apt update && apt install git`
3. `cd /tmp && git clone https://github.com/bats-core/bats-core && cd bats-core`
4. `mkdir -p ~/.parallel && touch ~/.parallel/will-cite`
5. `bin/bats --tap test`
**Expected behavior**
Both tests should run successfully.
**Environment (please complete the following information):**
- Bats Version 1.8.0
- OS: Debian sid
- Bash version: 5.2.0(1)-rc2 | 1.0 | Test 73 "CTRL-C aborts and fails the current test" hangs when run on a recent Debian sid system - **Describe the bug**
In version 1.8.0, test "73 CTRL-C aborts and fails the current test" hangs when run on a recent Debian sid system.
Test 73 gets stuck without producing any output. Killing the test with CTRL-C makes the test suite start the abort procedure, but then the test process gets stuck in the `R`unning state. Sending SIGKILL to the test process unblocks the test suite abort procedure.
> ```
> ok 71 Parallel mode works on MacOS with over subscription (issue #433)
> ok 72 Failure in free code (see #399)
> [nothing shown on the terminal; the test process spins using 100% CPU until CTRL-C is pressed]
> ^Cnot ok 73 CTRL-C aborts and fails the current test
> # (in test file test/bats.bats, line 767)
> # `wait $SUBPROCESS_PID && return 1' failed with status 130
>
> # Received SIGINT, aborting ...
>
> ^C^C^C^C^C^C^C^C^C^C^C^C# bats warning: Executed 73 instead of expected 274 tests
> ```
**To Reproduce**
Steps to reproduce the behavior:
1. `docker run --rm -it -v /tmp/d debian:sid`
2. `apt update && apt install git`
3. `cd /tmp && git clone https://github.com/bats-core/bats-core && cd bats-core`
4. `mkdir -p ~/.parallel && touch ~/.parallel/will-cite`
5. `bin/bats --tap test`
**Expected behavior**
Both tests should run successfully.
**Environment (please complete the following information):**
- Bats Version 1.8.0
- OS: Debian sid
- Bash version: 5.2.0(1)-rc2 | priority | test ctrl c aborts and fails the current test hangs when run on a recent debian sid system describe the bug in versions test ctrl c aborts and fails the current test hangs when run on a recent debian sid system test gets stuck without producing any output killing the test with ctrl c makes the test suit starting the abort procedure but then the test process gets stuck in r unning state sending sigkill to the test process unblocks the test suite abort procedure ok parallel mode works on macos with over subscription issue ok failure in free code see cnot ok ctrl c aborts and fails the current test in test file test bats bats line wait subprocess pid return failed with status received sigint aborting c c c c c c c c c c c c bats warning executed instead of expected tests to reproduce steps to reproduce the behavior docker run rm it v tmp d debian sid apt update apt install git cd tmp git clone cd bats core mkdir p parallel touch parallel will cite bin bats tap test expected behavior both tests should run successfully environment please complete the following information bats version os debian sid bash version | 1 |
432,075 | 12,488,856,444 | IssuesEvent | 2020-05-31 16:07:27 | kammanz/raptors-app | https://api.github.com/repos/kammanz/raptors-app | opened | File spacing should be two spaces | High Priority refactor | Right now all the file spacing is 4. It's too much space, the standard is 2. We'll have to change it for all the files. Also you should change ur vsCode settings, such that when u press `tab` it only moves the cursor two spaces.
This task has a `High Priority` and must be fixed asap. | 1.0 | File spacing should be two spaces - Right now all the file spacing is 4. It's too much space, the standard is 2. We'll have to change it for all the files. Also you should change ur vsCode settings, such that when u press `tab` it only moves the cursor two spaces.
This task has a `High Priority` and must be fixed asap. | priority | file spacing should be two spaces right now all the file spacing is it s too much space the standard is we ll have to change it for all the files also you should change ur vscode settings such that when u press tab it only moves the cursor two spaces this task has a high priority and must be fixed asap | 1 |
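For reference, the editor change the report asks for corresponds to a few standard VS Code settings; a sketch of the relevant `settings.json` fragment (a config illustration; apply it to user or workspace settings as appropriate):

```json
{
  "editor.tabSize": 2,
  "editor.insertSpaces": true,
  "editor.detectIndentation": false
}
```

With `editor.detectIndentation` disabled, VS Code stops inferring indentation from the opened file, and pressing `Tab` indents by two spaces.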
164,554 | 25,985,997,804 | IssuesEvent | 2022-12-20 00:13:42 | BreadGood-22/frontend | https://api.github.com/repos/BreadGood-22/frontend | closed | [design] Implement the post creation page UI | design | ## ⭐ Key feature <!-- A brief description of the feature (goal) to implement -->
Implement the post creation (post upload) page UI
## 📋 Progress
- [x] Post creation page section
- [x] Image file upload section
## 📄 Notes
| 1.0 | [design] Implement the post creation page UI - ## ⭐ Key feature <!-- A brief description of the feature (goal) to implement -->
Implement the post creation (post upload) page UI
## 📋 Progress
- [x] Post creation page section
- [x] Image file upload section
## 📄 Notes
| non_priority | implement the post creation page ui ⭐ key feature a brief description of the feature goal to implement implement the post creation post upload page ui 📋 progress post creation page section image file upload section 📄 notes | 0 |
20,225 | 10,478,000,740 | IssuesEvent | 2019-09-23 22:25:23 | bcgov/EDUC-UMU | https://api.github.com/repos/bcgov/EDUC-UMU | opened | Form Validation | enhancement security fix | **Is your feature request related to a problem? Please describe.**
It is currently possible to input bogus info directly to the database. Some level of validation must take place to ensure the database isn't flooded with bad data.
**Describe the solution you'd like**
GUIDs, IDIRs, and other known quantities must be validated before being added to the database.
| True | Form Validation - **Is your feature request related to a problem? Please describe.**
It is currently possible to input bogus info directly to the database. Some level of validation must take place to ensure the database isn't flooded with bad data.
**Describe the solution you'd like**
GUIDs, IDIRs, and other known quantities must be validated before being added to the database.
| non_priority | form validation is your feature request related to a problem please describe it is currently possible to input bogus info directly to the database some level of validation must take place to ensure the database in flooded with bad data describe the solution you d like validation for guids idirs and other known quantities must be validated before being added to the database | 0 |
47,047 | 2,971,684,916 | IssuesEvent | 2015-07-14 08:48:21 | clementine-player/Clementine | https://api.github.com/repos/clementine-player/Clementine | closed | Next|Prev playlist keyboard shortcuts | enhancement imported Priority-Medium | _From [tomas.pr...@gmail.com](https://code.google.com/u/103404869143991179936/) on May 10, 2012 13:51:42_
Hi,
I'd like to know whether it's possible to add keyboard shortcut options for looping over playlists. I would love to have such a multimedia key on the keyboard.
Thanks for awesome player btw ;)
_Original issue: http://code.google.com/p/clementine-player/issues/detail?id=2932_ | 1.0 | Next|Prev playlist keyboard shortcuts - _From [tomas.pr...@gmail.com](https://code.google.com/u/103404869143991179936/) on May 10, 2012 13:51:42_
Hi,
I'd like to know whether it's possible to add keyboard shortcut options for looping over playlists. I would love to have such a multimedia key on the keyboard.
Thanks for awesome player btw ;)
_Original issue: http://code.google.com/p/clementine-player/issues/detail?id=2932_ | priority | next prev playlist keyboard shortcuts from on may hi i d like to know whether it s possible to add keyboard shortcuts options for looping over playlists i would love to have such multimedia key on keyboard thanks for awesome player btw original issue | 1 |
25,624 | 12,264,032,340 | IssuesEvent | 2020-05-07 02:55:40 | Azure/azure-rest-api-specs | https://api.github.com/repos/Azure/azure-rest-api-specs | closed | Some issues in resource graph swaggers | Resource Graph Service Attention | Found some issues in the swagger
https://github.com/Azure/azure-rest-api-specs/tree/master/specification/resourcegraph/resource-manager/Microsoft.ResourceGraph/stable/2019-04-01
1. In lines #292 and #411, no ‘type’ is defined for the data property.
2. ‘tags’ fields are missing for the API, and the format of ‘operationId’ is incorrect, which should be <resource>-<verb>. See https://github.com/Azure/azure-rest-api-specs/blob/master/specification/databricks/resource-manager/Microsoft.Databricks/stable/2018-04-01/databricks.json#L39 for an example
[CG] Yes these are issues, we are fixing it in the new preview API being published @ https://github.com/Azure/azure-rest-api-specs/pull/9025/files#diff-200c67f8c6c835534cba4cc76b97cffc
https://github.com/Azure/azure-rest-api-specs/tree/master/specification/resourcegraph/resource-manager/Microsoft.ResourceGraph/preview/2018-09-01-preview
1. From my testing, the property location is required for the PUT API, while it is not marked as required in the swagger. This is a blocking issue for me.
2. ‘tags’ fields are missing for the API
[CG] Yes we will be fixing it while publishing the next version for the swagger.
Just found some other issues. Following is the raw HTTP response when getting a specific query. Please see my comments start with //.
```
{
"properties": {
"resultKind": "UnKnown",
"timeModified": "2020-04-17T10:47:44.3813391Z",
"query": "where isnotnull(tags['Prod']) and properties.extensions[0].Name == 'dockers'"
},
"type": "microsoft.resourcegraph/queries",
"location": "global",
"subscriptionId": "9e223dbe-3399-4e19-88eb-0975f02ac87f", // subscriptionId is missing in the swagger
"resourceGroup": "testgroup", // resourceGroup is missing in the swagger
"name": "sq01",
"id": "/subscriptions/9e223dbe-3399-4e19-88eb-0975f02ac87f/resourceGroups/testgroup/providers/microsoft.resourcegraph/queries/sq01",
"etag": "\"00003200-0000-1800-0000-5e9989500000\"" // it should be etag, but in swagger it is eTag. BTW, I think the value of "etag" should be "00003200-0000-1800-0000-5e9989500000"
}
```
| 1.0 | Some issues in resource graph swaggers - Found some issues in the swagger
https://github.com/Azure/azure-rest-api-specs/tree/master/specification/resourcegraph/resource-manager/Microsoft.ResourceGraph/stable/2019-04-01
1. In lines #292 and #411, no ‘type’ is defined for the data property.
2. ‘tags’ fields are missing for the API, and the format of ‘operationId’ is incorrect, which should be <resource>-<verb>. See https://github.com/Azure/azure-rest-api-specs/blob/master/specification/databricks/resource-manager/Microsoft.Databricks/stable/2018-04-01/databricks.json#L39 for an example
[CG] Yes these are issues, we are fixing it in the new preview API being published @ https://github.com/Azure/azure-rest-api-specs/pull/9025/files#diff-200c67f8c6c835534cba4cc76b97cffc
https://github.com/Azure/azure-rest-api-specs/tree/master/specification/resourcegraph/resource-manager/Microsoft.ResourceGraph/preview/2018-09-01-preview
1. From my testing, the property location is required for the PUT API, while it is not marked as required in the swagger. This is a blocking issue for me.
2. ‘tags’ fields are missing for the API
[CG] Yes we will be fixing it while publishing the next version for the swagger.
Just found some other issues. Following is the raw HTTP response when getting a specific query. Please see my comments start with //.
```
{
"properties": {
"resultKind": "UnKnown",
"timeModified": "2020-04-17T10:47:44.3813391Z",
"query": "where isnotnull(tags['Prod']) and properties.extensions[0].Name == 'dockers'"
},
"type": "microsoft.resourcegraph/queries",
"location": "global",
"subscriptionId": "9e223dbe-3399-4e19-88eb-0975f02ac87f", // subscriptionId is missing in the swagger
"resourceGroup": "testgroup", // resourceGroup is missing in the swagger
"name": "sq01",
"id": "/subscriptions/9e223dbe-3399-4e19-88eb-0975f02ac87f/resourceGroups/testgroup/providers/microsoft.resourcegraph/queries/sq01",
"etag": "\"00003200-0000-1800-0000-5e9989500000\"" // it should be etag, but in swagger it is eTag. BTW, I think the value of "etag" should be "00003200-0000-1800-0000-5e9989500000"
}
```
| non_priority | some issues in resource graph swaggers found some issues in the swagger in line and no ‘type’ defined for the data property ‘tags’ fields are missing for the api and the format of ‘operationid’ is incorrect which should be see for an example yes these are issues we are fixing it in the new preview api being published from my testing the property location is required for the put api while it is not in the swagger this is a block issue for me ‘tags’ fields are missing for the api yes we will be fixing it while publishing the next version for the swagger just found some other issues following is the raw http response when getting a specific query please see my comments start with properties resultkind unknown timemodified query where isnotnull tags and properties extensions name dockers type microsoft resourcegraph queries location global subscriptionid subscriptionid is missing in the swagger resourcegroup testgroup resourcegroup is missing in the swagger name id subscriptions resourcegroups testgroup providers microsoft resourcegraph queries etag it should be etag but in swagger it is etag btw i think the value of etag should be | 0 |
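Putting the reported fixes together, the corrected resource definition would contain roughly the following (an illustrative sketch expressed as a Python dict, not the published swagger; property names are taken from the report):

```python
# Illustrative fragment of the corrected swagger schema: 'data' gets an
# explicit type, 'location' is marked required for PUT, the response-only
# fields are present, and the etag property uses the lowercase name seen
# on the wire.
graph_query_resource = {
    "type": "object",
    "required": ["location"],  # the service rejects PUT without it
    "properties": {
        "location": {"type": "string"},
        "subscriptionId": {"type": "string", "readOnly": True},
        "resourceGroup": {"type": "string", "readOnly": True},
        "etag": {"type": "string"},  # lowercase 'etag', not 'eTag'
        "properties": {
            "type": "object",
            "properties": {
                "data": {"type": "object"},  # 'type' was missing before
            },
        },
    },
}
print("location" in graph_query_resource["required"])  # True
```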
13,921 | 23,981,685,003 | IssuesEvent | 2022-09-13 15:30:37 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | opened | Terraform not updating certain folders | type:bug status:requirements priority-5-triage | ### How are you running Renovate?
Mend Renovate hosted app on github.com
### If you're self-hosting Renovate, tell us what version of Renovate you run.
_No response_
### If you're self-hosting Renovate, select which platform you are using.
_No response_
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
Terraform updates work fine for some paths but not others, e.g. in the following minimal reproduction:
https://github.com/jpealing-fiscaltec/renovate-terraform
Terraform in the `Infrastructure/Foo/environments/prod` folder is updated, but the otherwise identical `Infrastructure/Foo/environments/test` folder is not touched.
Logs indicate that the `Infrastructure/Foo/environments/test` folder is not even considered for updates.
### Relevant debug logs
<details><summary>Logs</summary>
```
DEBUG: Matched 1 file(s) for manager terraform: Infrastructure/Foo/environments/prod/main.tf
```
</details>
### Have you created a minimal reproduction repository?
I have linked to a minimal reproduction repository in the bug description | 1.0 | Terraform not updating certain folders - ### How are you running Renovate?
Mend Renovate hosted app on github.com
### If you're self-hosting Renovate, tell us what version of Renovate you run.
_No response_
### If you're self-hosting Renovate, select which platform you are using.
_No response_
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
Terraform updates work fine for some paths but not others, e.g. in the following minimal reproduction:
https://github.com/jpealing-fiscaltec/renovate-terraform
Terraform in the `Infrastructure/Foo/environments/prod` folder is updated, but the otherwise identical `Infrastructure/Foo/environments/test` folder is not touched.
Logs indicate that the `Infrastructure/Foo/environments/test` folder is not even considered for updates.
### Relevant debug logs
<details><summary>Logs</summary>
```
DEBUG: Matched 1 file(s) for manager terraform: Infrastructure/Foo/environments/prod/main.tf
```
</details>
### Have you created a minimal reproduction repository?
I have linked to a minimal reproduction repository in the bug description | non_priority | terraform not updating certain folders how are you running renovate mend renovate hosted app on github com if you re self hosting renovate tell us what version of renovate you run no response if you re self hosting renovate select which platform you are using no response if you re self hosting renovate tell us what version of the platform you run no response was this something which used to work for you and then stopped i never saw this working describe the bug terraform updates work fine for some paths but not others e g in the following minimal reproduction terraform in the infrastructure foo environments prod folder is updated but the otherwise identical infrastructure foo environments test folder is not touched logs indicate that the infrastructure foo environments test folder is not even considered for updates relevant debug logs logs debug matched file s for manager terraform infrastructure foo environments prod main tf have you created a minimal reproduction repository i have linked to a minimal reproduction repository in the bug description | 0 |
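The expectation in the report is that both environment folders should be matched by the same pattern; a small sketch of that expectation (the glob here is an illustration, not Renovate's actual `fileMatch` configuration):

```python
from fnmatch import fnmatch

# Both otherwise-identical environment folders match the same glob, so a
# manager that matched one should have matched the other as well.
pattern = "Infrastructure/Foo/environments/*/main.tf"
paths = [
    "Infrastructure/Foo/environments/prod/main.tf",
    "Infrastructure/Foo/environments/test/main.tf",
]
matches = [fnmatch(p, pattern) for p in paths]
print(matches)  # [True, True]
```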
394,428 | 27,029,047,038 | IssuesEvent | 2023-02-12 00:26:06 | owncloud/ocis | https://api.github.com/repos/owncloud/ocis | closed | Debugging documentation outdated | OCIS-Fastlane Topic:Documentation Status:Stale Priority:p3-medium | ## Describe the bug
https://github.com/owncloud/ocis/blob/master/docs/ocis/development/debugging.md seems to be somewhat outdated. ocis nowadays doesn't fork separate processes per service any more. It should be possible to just attach the debugger to the main process. | 1.0 | Debugging documentation outdated - ## Describe the bug
https://github.com/owncloud/ocis/blob/master/docs/ocis/development/debugging.md seems to be somewhat outdated. ocis nowadays doesn't fork separate processes per service any more. It should be possible to just attach the debugger to the main process. | non_priority | debugging documentation outdated describe the bug seems to be somewhat outdated ocis nowadays doesn t fork separate processes per service any more it should be possible to just attach the debugger to the main process | 0 |
85,090 | 24,507,328,314 | IssuesEvent | 2022-10-10 17:38:18 | facebookincubator/velox | https://api.github.com/repos/facebookincubator/velox | reopened | Enable ASAN/TSAN/UBSAN/Unit tests for Linux & MacOS | enhancement build stale | Currently, unit tests aren't run for macOS builds; this needs to be fixed. We should also enable ASAN/TSAN/UBSAN for Linux and macOS. | 1.0 | Enable ASAN/TSAN/UBSAN/Unit tests for Linux & MacOS - Currently, unit tests aren't run for macOS builds; this needs to be fixed. We should also enable ASAN/TSAN/UBSAN for Linux and macOS. | non_priority | enable asan tsan ubsan unit tests for linux macos currently unit tests arent run for macos builds this needs to be fixed we should also enable asan tsan ubsan for linux and macos | 0 |
65,262 | 8,795,837,795 | IssuesEvent | 2018-12-22 20:52:05 | biotope/biotope-element | https://api.github.com/repos/biotope/biotope-element | closed | Add usage with the resource loader to the documentation | documentation | Currently, we have no working example with the resource loader.
The user needs to figure out how to register the component by themselves | 1.0 | Add usage with the resource loader to the documentation - Currently, we have no working example with the resource loader.
The user needs to figure out how to register the component by themselves | non_priority | add usage with the resource loader to the documentation currently we have no working example with the resource loader the user needs to figure out how to register the component by themselves | 0 |
806,133 | 29,802,674,087 | IssuesEvent | 2023-06-16 09:16:51 | neobotix/neo_docking2 | https://api.github.com/repos/neobotix/neo_docking2 | opened | [Manual docking] - remove the recovery behaviors | high priority | enforce it if possible for a safe navigation into the docking station. | 1.0 | [Manual docking] - remove the recovery behaviors - enforce it if possible for a safe navigation into the docking station. | priority | remove the recovery behaviors enforce it if possible for a safe navigation into the docking station | 1 |
214,749 | 24,105,767,179 | IssuesEvent | 2022-09-20 07:19:04 | ws-sultan/Invalid-project-token | https://api.github.com/repos/ws-sultan/Invalid-project-token | closed | CVE-2021-33203 (Medium) detected in Django-3.2-py3-none-any.whl - autoclosed | security vulnerability | ## CVE-2021-33203 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Django-3.2-py3-none-any.whl</b></p></summary>
<p>A high-level Python Web framework that encourages rapid development and clean, pragmatic design.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/a8/9b/fe94c509e514f6c227308e81076506eb9d67f2bfb8061ce5cdfbde0432e3/Django-3.2-py3-none-any.whl">https://files.pythonhosted.org/packages/a8/9b/fe94c509e514f6c227308e81076506eb9d67f2bfb8061ce5cdfbde0432e3/Django-3.2-py3-none-any.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Django-3.2-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ws-sultan/Invalid-project-token/commit/402778bc36ca66ce8243abca710000a22e03fb9f">402778bc36ca66ce8243abca710000a22e03fb9f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Django before 2.2.24, 3.x before 3.1.12, and 3.2.x before 3.2.4 has a potential directory traversal via django.contrib.admindocs. Staff members could use the TemplateDetailView view to check the existence of arbitrary files. Additionally, if (and only if) the default admindocs templates have been customized by application developers to also show file contents, then not only the existence but also the file contents would have been exposed. In other words, there is directory traversal outside of the template root directories.
<p>Publish Date: 2021-06-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33203>CVE-2021-33203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://docs.djangoproject.com/en/3.2/releases/security/">https://docs.djangoproject.com/en/3.2/releases/security/</a></p>
<p>Release Date: 2021-06-08</p>
<p>Fix Resolution: Django - 2.2.24, 3.1.12, 3.2.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-33203 (Medium) detected in Django-3.2-py3-none-any.whl - autoclosed - ## CVE-2021-33203 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Django-3.2-py3-none-any.whl</b></p></summary>
<p>A high-level Python Web framework that encourages rapid development and clean, pragmatic design.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/a8/9b/fe94c509e514f6c227308e81076506eb9d67f2bfb8061ce5cdfbde0432e3/Django-3.2-py3-none-any.whl">https://files.pythonhosted.org/packages/a8/9b/fe94c509e514f6c227308e81076506eb9d67f2bfb8061ce5cdfbde0432e3/Django-3.2-py3-none-any.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Django-3.2-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ws-sultan/Invalid-project-token/commit/402778bc36ca66ce8243abca710000a22e03fb9f">402778bc36ca66ce8243abca710000a22e03fb9f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Django before 2.2.24, 3.x before 3.1.12, and 3.2.x before 3.2.4 has a potential directory traversal via django.contrib.admindocs. Staff members could use the TemplateDetailView view to check the existence of arbitrary files. Additionally, if (and only if) the default admindocs templates have been customized by application developers to also show file contents, then not only the existence but also the file contents would have been exposed. In other words, there is directory traversal outside of the template root directories.
<p>Publish Date: 2021-06-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33203>CVE-2021-33203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://docs.djangoproject.com/en/3.2/releases/security/">https://docs.djangoproject.com/en/3.2/releases/security/</a></p>
<p>Release Date: 2021-06-08</p>
<p>Fix Resolution: Django - 2.2.24, 3.1.12, 3.2.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in django none any whl autoclosed cve medium severity vulnerability vulnerable library django none any whl a high level python web framework that encourages rapid development and clean pragmatic design library home page a href path to dependency file requirements txt path to vulnerable library requirements txt dependency hierarchy x django none any whl vulnerable library found in head commit a href found in base branch main vulnerability details django before x before and x before has a potential directory traversal via django contrib admindocs staff members could use the templatedetailview view to check the existence of arbitrary files additionally if and only if the default admindocs templates have been customized by application developers to also show file contents then not only the existence but also the file contents would have been exposed in other words there is directory traversal outside of the template root directories publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution django step up your open source security game with mend | 0 |
25,992 | 4,540,475,918 | IssuesEvent | 2016-09-09 14:45:54 | ariya/phantomjs | https://api.github.com/repos/ariya/phantomjs | closed | Phantomjs 1.8.0 Crash | old.Priority-Medium old.Status-New old.Type-Defect | _**[richard....@gmail.com](http://code.google.com/u/105799783715052437755/) commented:**_
> Which version of PhantomJS are you using? 1.8.0
>
> <b>What steps will reproduce the problem?</b>
1. Ran some casper js tests
>
> Sometimes, phantomjs will crash but most of the time it won't. It seems to happen randomly, so it's not easy to understand on my side.
>
> Which operating system are you using? Linux 3.2.0-27-generic #43-Ubuntu
>
>
> Did you use binary PhantomJS or did you compile it from source? binary PhantomJs from your googlecode repository
>
> <b>Please provide any additional information below.</b>
Here is some info from the dump file (even if it seems useless to you):
> Operating system: Linux
> 0.0.0 Linux 3.2.0-27-generic #43-Ubuntu SMP Fri Jul 6 14:25:57 UTC 2012 x86_64
> CPU: amd64
> family 16 model 8 stepping 1
> 4 CPUs
>
> Crash reason: SIGSEGV
> Crash address: 0x38
>
> Thread 0 (crashed)
> 0 0x469968
> rbx = 0x00007fff35a46380 r12 = 0x00007fff35a46380
> r13 = 0x0000000002fe6450 r14 = 0x0000000002d04d4c
> r15 = 0x0000000002fe6450 rip = 0x0000000000469968
> rsp = 0x00007fff35a462f0 rbp = 0x0000000003d1a420
> Found by: given as instruction pointer in context
> 1 libc-2.15.so + 0x82fb4
> rip = 0x00007fc00605ffb5 rsp = 0x00007fff35a46330
> Found by: stack scanning
>
> Thread 1
> 0 libc-2.15.so + 0xbf83d
> rbx = 0x00007fc005254cf0 r12 = 0x0000000000000001
> r13 = 0x00000000000000ff r14 = 0x0000000000000001
> r15 = 0x00000000026bf3c0 rip = 0x00007fc00609c83d
> rsp = 0x00007fc005254c40 rbp = 0x00000000ffffffff
> Found by: given as instruction pointer in context
> 1 libc-2.15.so + 0xbf6db
> rip = 0x00007fc00609c6dc rsp = 0x00007fc005254c50
> Found by: stack scanning
> 2 libc-2.15.so + 0x109f7
> rip = 0x00007fc005fed9f8 rsp = 0x00007fc005254c70
> Found by: stack scanning
> 3 ld-2.15.so + 0xa522
> rip = 0x00007fc0076c2523 rsp = 0x00007fc005254cc8
> Found by: stack scanning
> 4 libc-2.15.so + 0xbf848
> rip = 0x00007fc00609c849 rsp = 0x00007fc005254cd8
> Found by: stack scanning
> 5 libpthread-2.15.so + 0xfcaf
> rip = 0x00007fc006bbfcb0 rsp = 0x00007fc005254ce8
> Found by: stack scanning
> 6 libpthread-2.15.so + 0xb103
> rip = 0x00007fc006bbb104 rsp = 0x00007fc005254d90
> Found by: stack scanning
> 7 libc-2.15.so + 0xe54f
> rip = 0x00007fc005feb550 rsp = 0x00007fc005254d98
> Found by: stack scanning
> 8 libpthread-2.15.so + 0x1187f
> rip = 0x00007fc006bc1880 rsp = 0x00007fc005254dc0
> Found by: stack scanning
> 9 ld-2.15.so + 0x15234
> rip = 0x00007fc0076cd235 rsp = 0x00007fc005254dd0
> Found by: stack scanning
> 10 libpthread-2.15.so + 0xe9f9
> rip = 0x00007fc006bbe9fa rsp = 0x00007fc005254de0
> Found by: stack scanning
> 11 libpthread-2.15.so + 0xbe17
> rip = 0x00007fc006bbbe18 rsp = 0x00007fc005254df0
> Found by: stack scanning
> 12 libc-2.15.so + 0x977f
> rip = 0x00007fc005fe6780 rsp = 0x00007fc005254e28
> Found by: stack scanning
> 13 ld-2.15.so + 0x15234
> rip = 0x00007fc0076cd235 rsp = 0x00007fc005254e60
> Found by: stack scanning
> 14 libpthread-2.15.so + 0x1187f
> rip = 0x00007fc006bc1880 rsp = 0x00007fc005254e80
> Found by: stack scanning
> 15 libpthread-2.15.so + 0x7e99
> rip = 0x00007fc006bb7e9a rsp = 0x00007fc005254eb0
> Found by: stack scanning
> 16 libpthread-2.15.so + 0x1187f
> rip = 0x00007fc006bc1880 rsp = 0x00007fc005254f58
> Found by: stack scanning
> 17 libc-2.15.so + 0xf3cbc
> rip = 0x00007fc0060d0cbd rsp = 0x00007fc005254fc0
> Found by: stack scanning
>
> Thread 2
> 0 libc-2.15.so + 0xed023
> rbx = 0x00007fc000000d30 r12 = 0x0000000000000000
> r13 = 0x00007fc000000a98 r14 = 0x0000000000000008
> r15 = 0x00007fc000000a98 rip = 0x00007fc0060ca023
> rsp = 0x00007fc00493bb90 rbp = 0x00007fc000000fc8
> Found by: given as instruction pointer in context
> 1 librt-2.15.so + 0x415c
> rip = 0x00007fc006dd115d rsp = 0x00007fc00493bbb0
> Found by: stack scanning
> 2 libpthread-2.15.so + 0x2d2f
> rip = 0x00007fc006bb2d30 rsp = 0x00007fc00493bbf8
> Found by: stack scanning
> 3 librt-2.15.so + 0x415c
> rip = 0x00007fc006dd115d rsp = 0x00007fc00493bc70
> Found by: stack scanning
>
> Thread 3
> 0 libpthread-2.15.so + 0xbd84
> rbx = 0x00000000031c7780 r12 = 0xffffffffffffffff
> r13 = 0x004189374bc6a7ef r14 = 0x0000000000000000
> r15 = 0x00000000031c77a8 rip = 0x00007fc006bbbd84
> rsp = 0x00007fbfbea12d50 rbp = 0x00000000031c7710
> Found by: given as instruction pointer in context
> 1 linux-gate.so + 0xa1a
> rip = 0x00007fff35b38a1b rsp = 0x00007fbfbea12dc0
> Found by: stack scanning
> 2 libpthread-2.15.so + 0x7e99
> rip = 0x00007fc006bb7e9a rsp = 0x00007fbfbea12eb0
> Found by: stack scanning
>
> Thread 4
> 0 libc-2.15.so + 0xed023
> rbx = 0x00007fbfb0000d30 r12 = 0x00007fbfbe171d50
> r13 = 0x00007fbfb0000a98 r14 = 0x000000000000005e
> r15 = 0x00000000004b3fa2 rip = 0x00007fc0060ca023
> rsp = 0x00007fbfbe171b90 rbp = 0x00007fbfb0000fc8
> Found by: given as instruction pointer in context
> 1 librt-2.15.so + 0x415c
> rip = 0x00007fc006dd115d rsp = 0x00007fbfbe171bb0
> Found by: stack scanning
> 2 librt-2.15.so + 0x415c
> rip = 0x00007fc006dd115d rsp = 0x00007fbfbe171c70
> Found by: stack scanning
>
> Thread 5
> 0 libpthread-2.15.so + 0xc0fe
> rbx = 0x00007fbfb00053a0 r12 = 0x000000000000005d
> r13 = 0x00007fbfb7ffeda0 r14 = 0xffffffffffffff92
> r15 = 0x0000000000000000 rip = 0x00007fc006bbc0fe
> rsp = 0x00007fbfb7ffed20 rbp = 0x00007fbfb0005348
> Found by: given as instruction pointer in context
>
> Thread 6
> 0 libpthread-2.15.so + 0xc0fe
> rbx = 0x00007fbfb00053a0 r12 = 0x000000000000005f
> r13 = 0x00007fbfbd32eda0 r14 = 0xffffffffffffff92
> r15 = 0x0000000000000000 rip = 0x00007fc006bbc0fe
> rsp = 0x00007fbfbd32ed20 rbp = 0x00007fbfb0005348
> Found by: given as instruction pointer in context
>
> Thread 7
> 0 libpthread-2.15.so + 0xc0fe
> rbx = 0x00007fbfb00053a0 r12 = 0x0000000000000061
> r13 = 0x00007fbfbcb2dda0 r14 = 0xffffffffffffff92
> r15 = 0x0000000000000000 rip = 0x00007fc006bbc0fe
> rsp = 0x00007fbfbcb2dd20 rbp = 0x00007fbfb0005348
> Found by: given as instruction pointer in context
>
> Thread 8
> 0 libpthread-2.15.so + 0xc0fe
> rbx = 0x00007fbfb00053a0 r12 = 0x000000000000005b
> r13 = 0x00007fbfb77fdda0 r14 = 0xffffffffffffff92
> r15 = 0x0000000000000000 rip = 0x00007fc006bbc0fe
> rsp = 0x00007fbfb77fdd20 rbp = 0x00007fbfb0005348
> Found by: given as instruction pointer in context
>
> Thread 9
> 0 libpthread-2.15.so + 0xc0fe
> rbx = 0x00007fbfb00053a0 r12 = 0x0000000000000063
> r13 = 0x00007fbfb6925da0 r14 = 0xffffffffffffff92
> r15 = 0x0000000000000000 rip = 0x00007fc006bbc0fe
> rsp = 0x00007fbfb6925d20 rbp = 0x00007fbfb0005348
> Found by: given as instruction pointer in context
>
> Loaded modules:
> 0x7fbfb4355000 - 0x7fbfb437cfff LiberationSans-Italic.ttf ??? (main)
> 0x7fbfb5999000 - 0x7fbfb59b3fff LiberationMono-Regular.ttf ???
> 0x7fbfb60f8000 - 0x7fbfb6109fff n019004l.pfb ???
> 0x7fbfbc05e000 - 0x7fbfbc07ffff LiberationSans-Regular.ttf ???
> 0x7fbfbc080000 - 0x7fbfbc0a1fff LiberationSans-Bold.ttf ???
> 0x7fbfbc0a2000 - 0x7fbfbc146fff DejaVuSans-Bold.ttf ???
> 0x7fbfbc147000 - 0x7fbfbc1f6fff DejaVuSans.ttf ???
> 0x7fbfbc2b5000 - 0x7fbfbc2c8fff n019003l.pfb ???
> 0x7fbfbd330000 - 0x7fbfbd537fff libnss_dns-2.15.so ???
> 0x7fbfbd549000 - 0x7fbfbd755fff libnss_files-2.15.so ???
> 0x7fbfbd756000 - 0x7fbfbd96ffff libresolv-2.15.so ???
> 0x7fc005266000 - 0x7fc0054c0fff libssl.so.1.0.0 ???
> 0x7fc0054c2000 - 0x7fc005885fff libcrypto.so.1.0.0 ???
> 0x7fc005894000 - 0x7fc00589afff gconv-modules.cache ???
> 0x7fc00589b000 - 0x7fc0058bffff libc.mo ???
> 0x7fc0058c0000 - 0x7fc0058c8fff 945677eb7aeaf62f1d50efc3fb3ec7d8-le64.cache-3 ???
> 0x7fc0058c9000 - 0x7fc005b96fff locale-archive ???
> 0x7fc005b9c000 - 0x7fc005dc5fff libexpat.so.1.5.2 ???
> 0x7fc005dc6000 - 0x7fc005fdcfff libz.so.1.2.3.4 ???
> 0x7fc005fdd000 - 0x7fc006396fff libc-2.15.so ???
> 0x7fc00639d000 - 0x7fc0065b2fff libgcc_s.so.1 ???
> 0x7fc0065b3000 - 0x7fc0068aefff libm-2.15.so ???
> 0x7fc0068af000 - 0x7fc006b99fff libstdc++.so.6.0.16 ???
> 0x7fc006bb0000 - 0x7fc006dc8fff libpthread-2.15.so ???
> 0x7fc006dcd000 - 0x7fc006fd4fff librt-2.15.so ???
> 0x7fc006fd5000 - 0x7fc0071d8fff libdl-2.15.so ???
> 0x7fc0071da000 - 0x7fc00740ffff libfontconfig.so.1.4.4 ???
> 0x7fc007410000 - 0x7fc0076abfff libfreetype.so.6.8.0 ???
> 0x7fc0076ad000 - 0x7fc0076b0fff 6d41288fd70b0be22e8c3a91e032eec0-le64.cache-3 ???
> 0x7fc0076b1000 - 0x7fc0076b5fff 3047814df9a2f067bd2d96a2b9c36e5a-le64.cache-3 ???
> 0x7fc0076b8000 - 0x7fc0076d9fff ld-2.15.so ???
> 0x7fff35b38000 - 0x7fff35b38fff linux-gate.so ???
**Disclaimer:**
This issue was migrated on 2013-03-15 from the project's former issue tracker on Google Code, [Issue #1005](http://code.google.com/p/phantomjs/issues/detail?id=1005).
:star2: **2** people had starred this issue at the time of migration. | 1.0 | non_priority | 0 |
430,428 | 30,182,901,968 | IssuesEvent | 2023-07-04 10:02:48 | strapi/strapi | https://api.github.com/repos/strapi/strapi | closed | Documentation plugin does not work properly | issue: bug severity: medium status: confirmed source: plugin:documentation | ## Bug report
### Required System information
- Node.js version: v18.7.0
- NPM version: 8.15.0
- Strapi version: 4.3.2
- Database: Postgres
- Operating system: MacOS
### Describe the bug
I wanted to create API documentation for my content library so that I can generate the client automatically.
According to the documentation (https://docs.strapi.io/developer-docs/latest/plugins/documentation.html#settings) you can configure some things:
- Version (`$.info.version`) -> works
- Path for the API (`$.x-strapi-config.path`) -> does not work
- Definition of the APIs for which to generate documentation (`$.x-strapi-config.pluginsForWhichToGenerateDoc`) -> does not work

(I have not tested anything else.)
To test this I created the file `settings.json` under `/src/extensions/documentation/config/`. Since the version is overwritten, I strongly assume that the plugin has taken note of this file, but ignores the `x-strapi-config` section.
The content of the `settings.json`:
```json
{
"openapi": "3.0.0",
"info": {
"version": "1.0.0"
},
"x-strapi-config": {
"showGeneratedFiles": true,
"path": "/api-docs",
"pluginsForWhichToGenerateDoc": [
"mycontenttest"
]
}
}
```
Moreover, the documentation of the plugin at https://docs.strapi.io/developer-docs/latest/plugins/documentation.html#swagger-json-and-openapi-json-files says that it is possible to download the `openapi.json` or `swagger.json` files.
Concretely, it says:
```
You need to grant the permission to call the controller action to the roles that can have access.
```
However, I have found absolutely no way to assign these rights.
I think the documentation either needs some love here or some stuff is just broken....
### Steps to reproduce the behavior
1. Install the `documentation` plugin
2. Add the `settings.json` (replace, for example, `mycontenttest` with `upload` to have a simple test)
3. Open the documentation (at `/documentation`, to see that `pluginsForWhichToGenerateDoc` is ignored, and at `/api-docs`, to see a 404)
4. See the error
5. Try to download `/documentation/openapi.json` or `/documentation/swagger.json`
### Expected behavior
The documentation is available under `/api-docs` and shows for example only `upload` API.
I can download (and maybe have some permissions to set under settings) the API definitions.
| 1.0 | non_priority | 0 |
389,080 | 11,497,139,760 | IssuesEvent | 2020-02-12 09:29:14 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | Get rid of the 'Logout confirmation' page | 3.1.0 Login/Logout Priority/Normal Type/Improvement | It's nice that API-M 3.0.0 is using OIDC to login and also uses SSO to login to the developer portal if you are already logged into the API publisher. But when someone logs out, I think it's unusual to ask for a 'logout confirmation'. It's better to get rid of it. | 1.0 | priority | 1 |
6,246 | 8,641,041,238 | IssuesEvent | 2018-11-24 13:36:21 | pingcap/tidb | https://api.github.com/repos/pingcap/tidb | closed | sql mode "NO_UNSIGNED_SUBTRACTION" support | type/compatibility | MySQL:
```
mysql> SET sql_mode = 'NO_UNSIGNED_SUBTRACTION';
mysql> SELECT CAST(0 AS UNSIGNED) - 1;
+-------------------------+
| CAST(0 AS UNSIGNED) - 1 |
+-------------------------+
|                      -1 |
+-------------------------+
```
TiDB:
```
mysql> SET sql_mode = '';
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT CAST(0 AS UNSIGNED) - 1;
ERROR 1690 (22003): BIGINT UNSIGNED value is out of range in '(cast(0 as unsigned) - 1)'
```
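For reference, the semantics the mode controls can be sketched outside SQL: the subtraction wraps in 64-bit two's complement, and `NO_UNSIGNED_SUBTRACTION` simply reads that bit pattern as signed instead of unsigned. A small Java illustration (not TiDB code):

```java
public class NoUnsignedSubtraction {
    public static void main(String[] args) {
        long diff = 0L - 1L; // the raw 64-bit result of CAST(0 AS UNSIGNED) - 1

        // Without NO_UNSIGNED_SUBTRACTION the result keeps the unsigned type:
        // the same bits read as 2^64 - 1, which is why the server reports an
        // out-of-range error rather than returning it.
        System.out.println(Long.toUnsignedString(diff)); // 18446744073709551615

        // With NO_UNSIGNED_SUBTRACTION the result is treated as signed:
        System.out.println(diff); // -1
    }
}
```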
see [sql mode](https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sqlmode_no_unsigned_subtraction) | True | non_priority | 0 |
322,802 | 23,923,578,829 | IssuesEvent | 2022-09-09 19:35:06 | Dyldog/luhman-obsidian-plugin | https://api.github.com/repos/Dyldog/luhman-obsidian-plugin | opened | [OTHER] Add CONTRIBUTING.md. | documentation question | With multiple devs beginning to work on this project it would be good to set expectations and provide any necessary guidance via a [CONTRIBUTING.md file](https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/setting-guidelines-for-repository-contributors).
We can discuss those expectations and guidance in this issue and track below what we want to include in the file.
## Expectations
-
## Guidance
- | 1.0 | non_priority | 0 |
66,461 | 8,926,439,725 | IssuesEvent | 2019-01-22 04:18:28 | typelead/eta | https://api.github.com/repos/typelead/eta | closed | Trouble getting FFI working with a static Java method... | documentation | ## Description
I wanted to time some functions as a bench mark, and ran into some snags getting it going. The problem arose because of errors in a couple of places:
1. [the Importing example under Modules in the Eta Cheatsheet](https://eta-lang.org/docs/cheatsheets/eta-cheatsheet) refers to the use of the Haskell package "Data.Time" which doesn't (yet) exist for Eta.
2. Trying to import the Java static method "currentTimeMillis" from java.lang.System didn't seem to work as described below...
## Expected Behavior
I can't see any problems and expected to see a Long printed showing the current time value.
## Actual Behavior
The program compiled, but the following error halted it at run time:
Exception in thread "main" java.lang.NoSuchMethodError: java.lang.System.currentTimeMillis(Leta/runtime/stg/Closure;)J
at main.Main.$wa(etabench.hs:29)
at main.Main.main1(etabench.hs)
at main.Main$main1.applyV(etabench.hs)
at eta.runtime.exception.Exception.catch_(Exception.java:132)
at main.Main.main5(etabench.hs)
at main.Main.DZCmain(etabench.hs:21)
at main.Main$DZCmain.applyV(etabench.hs:21)
at eta.runtime.stg.Closures$EvalLazyIO.enter(Closures.java:145)
at eta.runtime.stg.Capability.schedule(Capability.java:254)
at eta.runtime.stg.Capability.scheduleClosure(Capability.java:210)
at eta.runtime.Runtime.evalLazyIO(Runtime.java:372)
at eta.runtime.Runtime.main(Runtime.java:365)
at eta.main.main(Unknown Source)
## Possible Fix
Eventually, I discovered "getCPUTime" from the "System.CPUTime" module, which did what I needed in this case.
However, I would like to know why the static method failed to be imported correctly, as there may be a need for other static methods. If I did something wrong, then this should be better documented with better examples on how to import static methods...
## Steps to Reproduce
The following code exhibits the behaviour:
```haskell
import Java

foreign import java unsafe "@static java.lang.System.currentTimeMillis"
  currentTimeMillis :: () -> IO Int64

main :: IO ()
main = do
  now <- currentTimeMillis()
  print now
```
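A likely cause (an assumption on my part, not something confirmed by the Eta docs quoted here) is that the `()` argument in the foreign import is marshalled as an extra runtime `Closure` parameter, so the generated call looks for a `currentTimeMillis(Closure)` overload — exactly the `(Leta/runtime/stg/Closure;)J` descriptor in the stack trace — which `java.lang.System` does not have; a zero-argument import (`currentTimeMillis :: IO Int64`, called without `()`) should avoid the mismatch. Plain Java reflection shows the same lookup failure:

```java
public class StaticLookup {
    public static void main(String[] args) throws Exception {
        // The real zero-argument static method resolves fine:
        System.out.println(System.class.getMethod("currentTimeMillis"));

        // A variant taking one extra (hypothetical) parameter -- which is
        // what the `() -> IO Int64` signature effectively asks for -- does not:
        try {
            System.class.getMethod("currentTimeMillis", Object.class);
        } catch (NoSuchMethodException e) {
            System.out.println("no such method: " + e.getMessage());
        }
    }
}
```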
## Context
I was trying to time a benchmark function...
## Your Environment
Windows 10 64-bit updated to everything except the October 2018 update, with Eta installed and run through "etlas". | 1.0 | non_priority | 0 |
11,705 | 17,841,144,977 | IssuesEvent | 2021-09-03 10:13:49 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | opened | OCI support for helm dependencies | type:feature status:requirements priority-5-triage | ### What would you like Renovate to be able to do?
To update the dependencies that are being referenced from OCI registries(ex. Gitlab package registry).
Right now renovate fails with -
```
"err": {
"name": "UnsupportedProtocolError",
"message": "Unsupported protocol \"oci:\"",
```
### If you have any ideas on how this should be implemented, please tell us here.
Unfortunately, no. Although, I am not sure if this will be in scope of [this](https://github.com/renovatebot/renovate/issues/10659) issue.
### Is this a feature you are interested in implementing yourself?
No | 1.0 | OCI support for helm dependencies - ### What would you like Renovate to be able to do?
To update the dependencies that are being referenced from OCI registries(ex. Gitlab package registry).
Right now renovate fails with -
```
"err": {
"name": "UnsupportedProtocolError",
"message": "Unsupported protocol \"oci:\"",
```
### If you have any ideas on how this should be implemented, please tell us here.
Unfortunately, no. Although, I am not sure if this will be in scope of [this](https://github.com/renovatebot/renovate/issues/10659) issue.
### Is this a feature you are interested in implementing yourself?
No | non_priority | oci support for helm dependencies what would you like renovate to be able to do to update the dependencies that are being referenced from oci registries ex gitlab package registry right now renovate fails with err name unsupportedprotocolerror message unsupported protocol oci if you have any ideas on how this should be implemented please tell us here unfortunately no although i am not sure if this will be in scope of issue is this a feature you are interested in implementing yourself no | 0 |
41,134 | 16,624,761,774 | IssuesEvent | 2021-06-03 08:10:41 | EBISPOT/goci | https://api.github.com/repos/EBISPOT/goci | closed | Scripts to convert VCF to TSV format (GWAS and OTAR) | SumStats Service Type: Task | the openGWAS/MRBase teams have shared >20 cancer GWAS summary statistics in VCF format. Scripts to convert VCFs to TSVs format are needed to process these datasets for integration into the GWAS Catalog and Open Targets Genetics databases.
Example VCF file preanalysis/buniello/ieu-b-94.vcf | 1.0 | Scripts to convert VCF to TSV format (GWAS and OTAR) - the openGWAS/MRBase teams have shared >20 cancer GWAS summary statistics in VCF format. Scripts to convert VCFs to TSVs format are needed to process these datasets for integration into the GWAS Catalog and Open Targets Genetics databases.
Example VCF file preanalysis/buniello/ieu-b-94.vcf | non_priority | scripts to convert vcf to tsv format gwas and otar the opengwas mrbase teams have shared cancer gwas summary statistics in vcf format scripts to convert vcfs to tsvs format are needed to process these datasets for integration into the gwas catalog and open targets genetics databases example vcf file preanalysis buniello ieu b vcf | 0 |
678,001 | 23,182,945,300 | IssuesEvent | 2022-08-01 05:15:10 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Server 500 error with organisation management API. | Priority/Highest Severity/Blocker bug Affected-6.0.0 QA-Reported Organization Management | **Describe the issue:**
Server 500 error with organisation management API. DB : MSSQL
<img width="1040" alt="Screenshot 2022-07-09 at 11 52 50" src="https://user-images.githubusercontent.com/39077751/178094732-174600c2-7ba0-4099-b713-f213f237ae5b.png">
```
2022-07-09 11:47:28,828] [e00a742d-3baf-4010-8924-7227bcf01fad] ERROR {org.wso2.carbon.identity.organization.management.authz.service.handler.OrganizationManagementAuthzHandler} - Error occurred while evaluating authorization of user for organization management.org.wso2.carbon.database.utils.jdbc.exceptions.DataAccessException: Error in performing Database query: '%s' SELECT COUNT(UM_RESOURCE_ID) FROM UM_ORG_PERMISSION WHERE UM_ORG_PERMISSION.UM_ID IN ( SELECT UM_PERMISSION_ID FROM UM_ORG_ROLE_PERMISSION WHERE UM_ORG_ROLE_PERMISSION.UM_ROLE_ID IN ( SELECT UM_ORG_ROLE_USER.UM_ROLE_ID FROM UM_ORG_ROLE_USER LEFT JOIN UM_ORG_ROLE ON UM_ORG_ROLE_USER.UM_ROLE_ID = UM_ORG_ROLE.UM_ROLE_ID WHERE UM_USER_ID = :ID; AND UM_ORG_ID = :NAME;) ) AND UM_RESOURCE_ID IN (:PERMISSION_1;, :PERMISSION_2;, :PERMISSION_3;, :PERMISSION_4;, :PERMISSION_5;, :PERMISSION_6;)
[2022-07-09 11:47:28,972] [e00a742d-3baf-4010-8924-7227bcf01fad] ERROR {org.wso2.carbon.tomcat.ext.valves.CompositeValve} - Could not handle request: /o/10084a8d-113f-4211-a0d5-efe36b082211/api/server/v1/identity-providers java.lang.IllegalStateException: Cannot forward after response has been committed
at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:324)
at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:313)
at org.wso2.carbon.identity.context.rewrite.valve.OrganizationContextRewriteValve.invoke(OrganizationContextRewriteValve.java:129)
at org.wso2.carbon.tomcat.ext.valves.SameSiteCookieValve.invoke(SameSiteCookieValve.java:38)
at org.wso2.carbon.identity.cors.valve.CORSValve.invoke(CORSValve.java:89)
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:132)
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:134)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:106)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:67)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:152)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:63)
at org.wso2.carbon.tomcat.ext.valves.RequestEncodingValve.invoke(RequestEncodingValve.java:49)
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:137)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:359)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:889)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1735)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:834)
```
**How to reproduce:**
1. Get IS pack Build version: https://wso2.org/jenkins/view/products/job/products/job/product-is/org.wso2.is$wso2is/4139/
2. Add the following configuration to <IS-HOME>/repository/conf/deployment.toml
```
[multi_tenancy.stratos]
public_cloud_setup=false
[[organization_context.rewrite]]
base_path = "/api/"
sub_paths = ["/api/server/v1/identity-providers", "/api/server/v1/organizations", "/api/server/v1/applications"]
```
3. Execute “sh bin/enableadaptive.sh” before starting the pack
4. Execute the API for Create organization
```
Request:
curl --location --request POST 'https://localhost:9443/o/10084a8d-113f-4211-a0d5-efe36b082211/api/server/v1/organizations' \
--header 'accept: application/json' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{
"name": "test1",
"description": "this is a construction company",
"type": "TENANT",
"parentId": "ROOT",
"attributes": [
{
"key": "Country",
"value": "Sri Lanka"
},
{
"key": "Language",
"value": "Sinhala"
}
]
}'
```
**Expected behavior:**
Create organization.
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
- Product Version: https://wso2.org/jenkins/view/products/job/products/job/product-is/org.wso2.is$wso2is/4139/
- OS: MAC
- Database: MSSQL
- Userstore: JDBC
---
### Optional Fields
**Related issues:**
<!-- Any related issues from this/other repositories-->
**Suggested labels:**
<!-- Only to be used by non-members -->
| 1.0 | Server 500 error with organisation management API. - **Describe the issue:**
Server 500 error with organisation management API. DB : MSSQL
<img width="1040" alt="Screenshot 2022-07-09 at 11 52 50" src="https://user-images.githubusercontent.com/39077751/178094732-174600c2-7ba0-4099-b713-f213f237ae5b.png">
```
2022-07-09 11:47:28,828] [e00a742d-3baf-4010-8924-7227bcf01fad] ERROR {org.wso2.carbon.identity.organization.management.authz.service.handler.OrganizationManagementAuthzHandler} - Error occurred while evaluating authorization of user for organization management.org.wso2.carbon.database.utils.jdbc.exceptions.DataAccessException: Error in performing Database query: '%s' SELECT COUNT(UM_RESOURCE_ID) FROM UM_ORG_PERMISSION WHERE UM_ORG_PERMISSION.UM_ID IN ( SELECT UM_PERMISSION_ID FROM UM_ORG_ROLE_PERMISSION WHERE UM_ORG_ROLE_PERMISSION.UM_ROLE_ID IN ( SELECT UM_ORG_ROLE_USER.UM_ROLE_ID FROM UM_ORG_ROLE_USER LEFT JOIN UM_ORG_ROLE ON UM_ORG_ROLE_USER.UM_ROLE_ID = UM_ORG_ROLE.UM_ROLE_ID WHERE UM_USER_ID = :ID; AND UM_ORG_ID = :NAME;) ) AND UM_RESOURCE_ID IN (:PERMISSION_1;, :PERMISSION_2;, :PERMISSION_3;, :PERMISSION_4;, :PERMISSION_5;, :PERMISSION_6;)
[2022-07-09 11:47:28,972] [e00a742d-3baf-4010-8924-7227bcf01fad] ERROR {org.wso2.carbon.tomcat.ext.valves.CompositeValve} - Could not handle request: /o/10084a8d-113f-4211-a0d5-efe36b082211/api/server/v1/identity-providers java.lang.IllegalStateException: Cannot forward after response has been committed
at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:324)
at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:313)
at org.wso2.carbon.identity.context.rewrite.valve.OrganizationContextRewriteValve.invoke(OrganizationContextRewriteValve.java:129)
at org.wso2.carbon.tomcat.ext.valves.SameSiteCookieValve.invoke(SameSiteCookieValve.java:38)
at org.wso2.carbon.identity.cors.valve.CORSValve.invoke(CORSValve.java:89)
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:132)
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:134)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:106)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:67)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:152)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:63)
at org.wso2.carbon.tomcat.ext.valves.RequestEncodingValve.invoke(RequestEncodingValve.java:49)
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:137)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:359)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:889)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1735)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:834)
```
**How to reproduce:**
1. Get IS pack Build version: https://wso2.org/jenkins/view/products/job/products/job/product-is/org.wso2.is$wso2is/4139/
2. Add the following configuration to <IS-HOME>/repository/conf/deployment.toml
```
[multi_tenancy.stratos]
public_cloud_setup=false
[[organization_context.rewrite]]
base_path = "/api/"
sub_paths = ["/api/server/v1/identity-providers", "/api/server/v1/organizations", "/api/server/v1/applications"]
```
3. Execute “sh bin/enableadaptive.sh” before starting the pack
4. Execute the API for Create organization
```
Request:
curl --location --request POST 'https://localhost:9443/o/10084a8d-113f-4211-a0d5-efe36b082211/api/server/v1/organizations' \
--header 'accept: application/json' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{
"name": "test1",
"description": "this is a construction company",
"type": "TENANT",
"parentId": "ROOT",
"attributes": [
{
"key": "Country",
"value": "Sri Lanka"
},
{
"key": "Language",
"value": "Sinhala"
}
]
}'
```
**Expected behavior:**
Create organization.
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
- Product Version: https://wso2.org/jenkins/view/products/job/products/job/product-is/org.wso2.is$wso2is/4139/
- OS: MAC
- Database: MSSQL
- Userstore: JDBC
---
### Optional Fields
**Related issues:**
<!-- Any related issues from this/other repositories-->
**Suggested labels:**
<!-- Only to be used by non-members -->
| priority | server error with organisation management api describe the issue server error with organisation management api db mssql img width alt screenshot at src error org carbon identity organization management authz service handler organizationmanagementauthzhandler error occurred while evaluating authorization of user for organization management org carbon database utils jdbc exceptions dataaccessexception error in performing database query s select count um resource id from um org permission where um org permission um id in select um permission id from um org role permission where um org role permission um role id in select um org role user um role id from um org role user left join um org role on um org role user um role id um org role um role id where um user id id and um org id name and um resource id in permission permission permission permission permission permission error org carbon tomcat ext valves compositevalve could not handle request o api server identity providers java lang illegalstateexception cannot forward after response has been committed at org apache catalina core applicationdispatcher doforward applicationdispatcher java at org apache catalina core applicationdispatcher forward applicationdispatcher java at org carbon identity context rewrite valve organizationcontextrewritevalve invoke organizationcontextrewritevalve java at org carbon tomcat ext valves samesitecookievalve invoke samesitecookievalve java at org carbon identity cors valve corsvalve invoke corsvalve java at org carbon identity authz valve authorizationvalve invoke authorizationvalve java at org carbon identity auth valve authenticationvalve invoke authenticationvalve java at org carbon tomcat ext valves compositevalve continueinvocation compositevalve java at org carbon tomcat ext valves tomcatvalvecontainer invokevalves tomcatvalvecontainer java at org carbon tomcat ext valves compositevalve invoke compositevalve java at org carbon tomcat ext valves 
carbonstuckthreaddetectionvalve invoke carbonstuckthreaddetectionvalve java at org apache catalina valves abstractaccesslogvalve invoke abstractaccesslogvalve java at org carbon tomcat ext valves carboncontextcreatorvalve invoke carboncontextcreatorvalve java at org carbon tomcat ext valves requestencodingvalve invoke requestencodingvalve java at org carbon tomcat ext valves requestcorrelationidvalve invoke requestcorrelationidvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote service java at org apache coyote abstractprocessorlight process abstractprocessorlight java at org apache coyote abstractprotocol connectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net socketprocessorbase run socketprocessorbase java at org apache tomcat util threads threadpoolexecutor runworker threadpoolexecutor java at org apache tomcat util threads threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads taskthread wrappingrunnable run taskthread java at java base java lang thread run thread java how to reproduce get is pack build version add the following configuration to repository conf deployment toml public cloud setup false base path api sub paths execute “sh bin enableadaptive sh” before starting the pack execute the api for create organization request curl location request post header accept application json header content type application json header authorization basic data raw name description this is a construction company type tenant parentid root attributes key country value sri lanka key language value sinhala expected behavior create organization environment information please complete the following information remove any unnecessary fields product version os mac database mssql userstore jdbc optional fields 
related issues suggested labels | 1 |
28,615 | 13,762,096,880 | IssuesEvent | 2020-10-07 08:40:20 | opendatacube/datacube-core | https://api.github.com/repos/opendatacube/datacube-core | closed | Inefficient compositing when loading many tiles | loading data performance wontfix | When combining multiple datasets into one raster time slice non-overlapping datasets still go through "fusing" process. The cost of the "fusing" process is `O(WxHxN)` where `W,H` are the dimensions of the output image and `N` is number of tiles being "fused".
This is particularly noticeable for dense gridded products (various annual summaries) when loading in the native grid spec of a product. Rather than just allocating an output raster and pasting tiles into it, `reproject_and_fuse`:
1. Allocates 2 full-sized rasters of the output size (final result and `temp` for load)
2. For each tile:
a. Load new data into output-raster-sized `temp` (includes filling with `nodata` values)
b. Fuse the entire `temp` raster with result so far (typically allocates output raster sized mask array on every fuse)
Given that `load new data` step knows the extent of valid data that is to be loaded this information can be used to:
1. Reduce memory required for `temp` buffer (size of the tile not size of the output)
2. Reduce cost of filling `temp` with `nodata` (size of the tile not size of the output)
3. Reduce or eliminate cost of fusing
- size of the tile times number of tiles, not size of output times number of tiles
- or if you know that tiles are non-overlapping fusing can be replaced with just pasting
4. Allow construction of mosaics that are larger than 50% of total available memory
| True | Inefficient compositing when loading many tiles - When combining multiple datasets into one raster time slice non-overlapping datasets still go through "fusing" process. The cost of the "fusing" process is `O(WxHxN)` where `W,H` are the dimensions of the output image and `N` is number of tiles being "fused".
This is particularly noticeable for dense gridded products (various annual summaries) when loading in the native grid spec of a product. Rather than just allocating an output raster and pasting tiles into it, `reproject_and_fuse`:
1. Allocates 2 full-sized rasters of the output size (final result and `temp` for load)
2. For each tile:
a. Load new data into output-raster-sized `temp` (includes filling with `nodata` values)
b. Fuse the entire `temp` raster with result so far (typically allocates output raster sized mask array on every fuse)
Given that `load new data` step knows the extent of valid data that is to be loaded this information can be used to:
1. Reduce memory required for `temp` buffer (size of the tile not size of the output)
2. Reduce cost of filling `temp` with `nodata` (size of the tile not size of the output)
3. Reduce or eliminate cost of fusing
- size of the tile times number of tiles, not size of output times number of tiles
- or if you know that tiles are non-overlapping fusing can be replaced with just pasting
4. Allow construction of mosaics that are larger than 50% of total available memory
| non_priority | inefficient compositing when loading many tiles when combining multiple datasets into one raster time slice non overlapping datasets still go through fusing process the cost of the fusing process is o wxhxn where w h are the dimensions of the output image and n is number of tiles being fused this is particularly noticeable for dense gridded products various annual summaries when loading in the native grid spec of a product rather than just allocating an output raster and pasting tiles into it reproject and fuse allocates full sized rasters of the output size final result and temp for load for each tile a load new data into output raster sized temp includes filling with nodata values b fuse the entire temp raster with result so far typically allocates output raster sized mask array on every fuse given that load new data step knows the extent of valid data that is to be loaded this information can be used to reduce memory required for temp buffer size of the tile not size of the output reduce cost of filling temp with nodata size of the tile not size of the output reduce or eliminate cost of fusing size of the tile times number of tiles not size of output times number of tiles or if you know that tiles are non overlapping fusing can be replaced with just pasting allow construction of mosaics that are larger than of total available memory | 0 |
95,738 | 19,758,738,926 | IssuesEvent | 2022-01-16 03:05:09 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Generic math does not work in WASM AOT | arch-wasm area-Codegen-AOT-mono in pr | ### Description
Any code using Generic Math will produce runtime errors in an AOT'd wasm application. It works in the interpreter.
### Reproduction Steps
```csharp
private static T LerpGeneric<T> (T a, T b, T t)
where T : IAdditionOperators<T, T, T>,
ISubtractionOperators<T, T, T>,
IMultiplyOperators<T, T, T>
{
var size = b - a;
var offset = size * t;
return a + offset;
}
[BenchmarkCategory(Categories.Runtime)]
[Benchmark]
public double Lerp_Generic ()
{
double result = 0;
for (int i = 0; i < IterationCount; i++) {
double accumulator = 0;
for (int j = 0; j < ArrayLength; j++) {
double t = j / dArrayLength;
accumulator += LerpGeneric(A[j], B[LengthMinusOne - j], t);
}
result += accumulator;
}
return result;
}
```
### Expected behavior
Working generic math (even if it falls back to the interpreter)
### Actual behavior
```
console.error: RuntimeError: null function or function signature mismatch
console.error: RuntimeError: null function or function signature mismatch
at MicroBenchmarks_GenericMathTest_Lerp_Generic (wasm://wasm/2627c6ae:wasm-function[290160]:0x4b57467)
at b8f19783_ecdd_483a_a5be_f2cfb1051f26_wrapper_delegate_invoke__Module_invoke_double (wasm://wasm/2627c6ae:wasm-function[138969]:0x20df192)
at b8f19783_ecdd_483a_a5be_f2cfb1051f26_BenchmarkDotNet_Autogenerated_Runnable_3_WorkloadActionNoUnroll_long (wasm://wasm/2627c6ae:wasm-function[139003]:0x20e20d5)
at b8f19783_ecdd_483a_a5be_f2cfb1051f26_wrapper_delegate_invoke_System_Action_1_long_invoke_void_T_long (wasm://wasm/2627c6ae:wasm-function[138974]:0x20dfe10)
at BenchmarkDotNet_BenchmarkDotNet_Engines_Engine_RunIteration_BenchmarkDotNet_Engines_IterationData (wasm://wasm/2627c6ae:wasm-function[314273]:0x5258a20)
at BenchmarkDotNet_BenchmarkDotNet_Engines_EngineFactory_Jit_BenchmarkDotNet_Engines_Engine_int_int_int (wasm://wasm/2627c6ae:wasm-function[14040]:0x4edbcb)
at BenchmarkDotNet_BenchmarkDotNet_Engines_EngineFactory_CreateReadyToRun_BenchmarkDotNet_Engines_EngineParameters (wasm://wasm/2627c6ae:wasm-function[314292]:0x5259ef8)
at BenchmarkDotNet_Diagnostics_Windows_aot_wrapper_gsharedvt_out_sig_obj_this_obj (wasm://wasm/2627c6ae:wasm-function[922]:0x18c201)
at jit_call_cb (wasm://wasm/2627c6ae:wasm-function[36553]:0xbe0d89)
at invoke_vi (dotnet.js:1:234708)
```
### Regression?
_No response_
### Known Workarounds
_No response_
### Configuration
latest ```main```
### Other information
_No response_ | 1.0 | Generic math does not work in WASM AOT - ### Description
Any code using Generic Math will produce runtime errors in an AOT'd wasm application. It works in the interpreter.
### Reproduction Steps
```csharp
private static T LerpGeneric<T> (T a, T b, T t)
where T : IAdditionOperators<T, T, T>,
ISubtractionOperators<T, T, T>,
IMultiplyOperators<T, T, T>
{
var size = b - a;
var offset = size * t;
return a + offset;
}
[BenchmarkCategory(Categories.Runtime)]
[Benchmark]
public double Lerp_Generic ()
{
double result = 0;
for (int i = 0; i < IterationCount; i++) {
double accumulator = 0;
for (int j = 0; j < ArrayLength; j++) {
double t = j / dArrayLength;
accumulator += LerpGeneric(A[j], B[LengthMinusOne - j], t);
}
result += accumulator;
}
return result;
}
```
### Expected behavior
Working generic math (even if it falls back to the interpreter)
### Actual behavior
```
console.error: RuntimeError: null function or function signature mismatch
console.error: RuntimeError: null function or function signature mismatch
at MicroBenchmarks_GenericMathTest_Lerp_Generic (wasm://wasm/2627c6ae:wasm-function[290160]:0x4b57467)
at b8f19783_ecdd_483a_a5be_f2cfb1051f26_wrapper_delegate_invoke__Module_invoke_double (wasm://wasm/2627c6ae:wasm-function[138969]:0x20df192)
at b8f19783_ecdd_483a_a5be_f2cfb1051f26_BenchmarkDotNet_Autogenerated_Runnable_3_WorkloadActionNoUnroll_long (wasm://wasm/2627c6ae:wasm-function[139003]:0x20e20d5)
at b8f19783_ecdd_483a_a5be_f2cfb1051f26_wrapper_delegate_invoke_System_Action_1_long_invoke_void_T_long (wasm://wasm/2627c6ae:wasm-function[138974]:0x20dfe10)
at BenchmarkDotNet_BenchmarkDotNet_Engines_Engine_RunIteration_BenchmarkDotNet_Engines_IterationData (wasm://wasm/2627c6ae:wasm-function[314273]:0x5258a20)
at BenchmarkDotNet_BenchmarkDotNet_Engines_EngineFactory_Jit_BenchmarkDotNet_Engines_Engine_int_int_int (wasm://wasm/2627c6ae:wasm-function[14040]:0x4edbcb)
at BenchmarkDotNet_BenchmarkDotNet_Engines_EngineFactory_CreateReadyToRun_BenchmarkDotNet_Engines_EngineParameters (wasm://wasm/2627c6ae:wasm-function[314292]:0x5259ef8)
at BenchmarkDotNet_Diagnostics_Windows_aot_wrapper_gsharedvt_out_sig_obj_this_obj (wasm://wasm/2627c6ae:wasm-function[922]:0x18c201)
at jit_call_cb (wasm://wasm/2627c6ae:wasm-function[36553]:0xbe0d89)
at invoke_vi (dotnet.js:1:234708)
```
### Regression?
_No response_
### Known Workarounds
_No response_
### Configuration
latest ```main```
### Other information
_No response_ | non_priority | generic math does not work in wasm aot description any code using generic math will produce runtime errors in an aot d wasm application it works in the interpreter reproduction steps csharp private static t lerpgeneric t a t b t t where t iadditionoperators isubtractionoperators imultiplyoperators var size b a var offset size t return a offset public double lerp generic double result for int i i iterationcount i double accumulator for int j j arraylength j double t j darraylength accumulator lerpgeneric a b t result accumulator return result expected behavior working generic math even if it falls back to the interpreter actual behavior console error runtimeerror null function or function signature mismatch console error runtimeerror null function or function signature mismatch at microbenchmarks genericmathtest lerp generic wasm wasm wasm function at ecdd wrapper delegate invoke module invoke double wasm wasm wasm function at ecdd benchmarkdotnet autogenerated runnable workloadactionnounroll long wasm wasm wasm function at ecdd wrapper delegate invoke system action long invoke void t long wasm wasm wasm function at benchmarkdotnet benchmarkdotnet engines engine runiteration benchmarkdotnet engines iterationdata wasm wasm wasm function at benchmarkdotnet benchmarkdotnet engines enginefactory jit benchmarkdotnet engines engine int int int wasm wasm wasm function at benchmarkdotnet benchmarkdotnet engines enginefactory createreadytorun benchmarkdotnet engines engineparameters wasm wasm wasm function at benchmarkdotnet diagnostics windows aot wrapper gsharedvt out sig obj this obj wasm wasm wasm function at jit call cb wasm wasm wasm function at invoke vi dotnet js regression no response known workarounds no response configuration latest main other information no response | 0 |
829,886 | 31,927,934,364 | IssuesEvent | 2023-09-19 04:25:42 | steedos/steedos-platform | https://api.github.com/repos/steedos/steedos-platform | closed | [Bug]: 通过amis的单页跳转进入iframe选项卡时,参数中的https://会被转化为https:/ | bug priority: High won't fix | ### Description


### Steps To Reproduce 重现步骤
1.新建iframe选项卡
2.复制iframe选项卡的正确路径,并放到amis中的单页跳转内,比如button
3.点击跳转button
### Version 版本
2.5 | 1.0 | [Bug]: 通过amis的单页跳转进入iframe选项卡时,参数中的https://会被转化为https:/ - ### Description


### Steps To Reproduce 重现步骤
1.新建iframe选项卡
2.复制iframe选项卡的正确路径,并放到amis中的单页跳转内,比如button
3.点击跳转button
### Version 版本
2.5 | priority | 通过amis的单页跳转进入iframe选项卡时,参数中的 description steps to reproduce 重现步骤 新建iframe选项卡 复制iframe选项卡的正确路径,并放到amis中的单页跳转内,比如button 点击跳转button version 版本 | 1 |
359,216 | 10,666,951,877 | IssuesEvent | 2019-10-19 08:27:09 | YunoHost-Apps/shaarli_ynh | https://api.github.com/repos/YunoHost-Apps/shaarli_ynh | closed | "/var/www/shaarli/data/log.txt" is actualy a directoy for unknow reason :| | bug help needed high priority | Hello,
I'm helping someone debug a broken yunohost apps upgrade and for a totally weird and unknown reason the "file" "/var/www/shaarli/data/log.txt" turned out to actually be a ... directory :|
The problem is that this breaks `fail2ban` which wants to parse this file and complains because it is a directory :x
Also "/var/www/shaarli/data/log.txt" is owned by root for some reason? The other files are own by "shaarli".
After a quick look in the code I really have no idea how it happened, but here are the full upgrade logs: https://paste.yunohost.org/raw/dapixodeni | 1.0 | "/var/www/shaarli/data/log.txt" is actualy a directoy for unknow reason :| - Hello,
I'm helping someone debug a broken yunohost apps upgrade and for a totally weird and unknown reason the "file" "/var/www/shaarli/data/log.txt" turned out the actually be a ... directory :|
The problem is that this breaks `fail2ban` which wants to parse this file and complains because it is a directory :x
Also "/var/www/shaarli/data/log.txt" is owned by root for some reason? The other files are own by "shaarli".
After a quick look in the code I really have no idea on how it has append but here is the full upgrade logs https://paste.yunohost.org/raw/dapixodeni | priority | var www shaarli data log txt is actualy a directoy for unknow reason hello i m helping someone debug a broken yunohost apps upgrade and for a totally weird and unknown reason the file var www shaarli data log txt turned out the actually be a directory the problem is that this breaks which wants to parse this file and complains because it is a directory x also var www shaarli data log txt is owned by root for some reason the other files are own by shaarli after a quick look in the code i really have no idea on how it has append but here is the full upgrade logs | 1 |
818,667 | 30,698,918,904 | IssuesEvent | 2023-07-26 21:03:23 | fossasia/open-event-frontend | https://api.github.com/repos/fossasia/open-event-frontend | closed | When user loads order confirmation page after initial registration ticket info and price often disappear | bug Priority: High | When user loads order confirmation page after initial registration ticket info and price often disappear or the line below for Ticket Type, Price, and Subtotal disappears or shows the wrong amount.

| 1.0 | When user loads order confirmation page after initial registration ticket info and price often disappear - When user loads order confirmation page after initial registration ticket info and price often disappear or the line below for Ticket Type, Price, and Subtotal disappears or shows the wrong amount.

| priority | when user loads order confirmation page after initial registration ticket info and price often disappear when user loads order confirmation page after initial registration ticket info and price often disappear or the line below for ticket type price and subtotal disappears or shows the wrong amount | 1 |
323,478 | 9,855,649,257 | IssuesEvent | 2019-06-19 19:57:44 | stencila/encoda | https://api.github.com/repos/stencila/encoda | closed | Puppeteer: Fix browser setup and teardown | priority: high | We have a `puppeteer.ts` module which provides an interface to Puppeteer. I recently added this so that only one browser instance is lazily instantiated by consuming codecs (e.g `pdf` and `rpng`).
I ran into issues with the browser not closing and so, rather hurridly, added:
https://github.com/stencila/encoda/blob/master/tests/teardown.ts#L1
and
https://github.com/stencila/encoda/blob/master/src/cli.ts#L51
Neither are desirable, and the former makes the `jest --watch` exit prematurely.
I wonder if the best approach is to have one *one* browser instance (i.e. not have one per consuming codec module).
| 1.0 | Puppeteer: Fix browser setup and teardown - We have a `puppeteer.ts` module which provides an interface to Puppeteer. I recently added this so that only one browser instance is lazily instantiated by consuming codecs (e.g `pdf` and `rpng`).
I ran into issues with the browser not closing and so, rather hurridly, added:
https://github.com/stencila/encoda/blob/master/tests/teardown.ts#L1
and
https://github.com/stencila/encoda/blob/master/src/cli.ts#L51
Neither are desirable, and the former makes the `jest --watch` exit prematurely.
I wonder if the best approach is to have one *one* browser instance (i.e. not have one per consuming codec module).
| priority | puppeteer fix browser setup and teardown we have a puppeteer ts module which provides an interface to puppeteer i recently added this so that only one browser instance is lazily instantiated by consuming codecs e g pdf and rpng i ran into issues with the browser not closing and so rather hurridly added and neither are desirable and the former makes the jest watch exit prematurely i wonder if the best approach is to have one one browser instance i e not have one per consuming codec module | 1 |
671,795 | 22,776,363,244 | IssuesEvent | 2022-07-08 14:49:12 | hovgaardgames/bigambitions | https://api.github.com/repos/hovgaardgames/bigambitions | closed | Restaurant message "is out of burgers" disappears the moment I enter the store with boxes of burgers on hand truck | confirmed low-priority | ### Build number
731
### Bug description
The moment I entered the fast food restaurant with a hand truck full of burgers, the message "is out of burgers" disappeared, although the grill was still empty and not yet refilled.
### Steps to reproduce the bug
_No response_
### Savegame file
[Burgerless.json.txt](https://github.com/hovgaardgames/bigambitions/files/8614529/Burgerless.json.txt)
### Screenshots or videos

| 1.0 | Restaurant message "is out of burgers" disappears the moment I enter the store with boxes of burgers on hand truck - ### Build number
731
### Bug description
The moment I entered the fast food restaurant with a hand truck full of burgers, the message "is out of burgers" disappeared, although the grill was still empty and not yet refilled.
### Steps to reproduce the bug
_No response_
### Savegame file
[Burgerless.json.txt](https://github.com/hovgaardgames/bigambitions/files/8614529/Burgerless.json.txt)
### Screenshots or videos

| priority | restaurant message is out of burgers disappears the moment i enter the store with boxes of burgers on hand truck build number bug description the moment i entered the fast food restaurant with a hand truck full of burgers the message is out of burgers disappeared although the grill was still empty and not yet refilled steps to reproduce the bug no response savegame file screenshots or videos | 1 |
635,550 | 20,405,722,960 | IssuesEvent | 2022-02-23 05:01:30 | hengband/hengband | https://api.github.com/repos/hengband/hengband | closed | WeaponEnchanterの改善 | refactor Priority:MIDDLE | 以下はapply-magic.cpp の抜粋
WeaponEnchanterのコンストラクタを呼ぶかどうかを判定している
しかしこれは設計が外に出てしまっていて弱い:
```cpp
case ItemKindType::BOLT:
if (power != 0) {
WeaponEnchanter(player_ptr, o_ptr, lev, power).apply_magic();
}
break;
case ItemKindType::POLEARM:
if ((power != 0) && (o_ptr->sval != SV_DEATH_SCYTHE)) {
WeaponEnchanter(player_ptr, o_ptr, lev, power).apply_magic();
}
break;
case ItemKindType::SWORD:
if ((power != 0) && (o_ptr->sval != SV_POISON_NEEDLE)) {
WeaponEnchanter(player_ptr, o_ptr, lev, power).apply_magic();
}
break;
```
これを、基底クラスであるAbstractWeaponEnchanter のコンストラクタで強化/弱化対象かを判定させることとする
今までと違って必ずコンストラクタは呼ばれるが、大した負荷増にはならないと想定される | 1.0 | WeaponEnchanterの改善 - 以下はapply-magic.cpp の抜粋
WeaponEnchanterのコンストラクタを呼ぶかどうかを判定している
しかしこれは設計が外に出てしまっていて弱い:
```cpp
case ItemKindType::BOLT:
if (power != 0) {
WeaponEnchanter(player_ptr, o_ptr, lev, power).apply_magic();
}
break;
case ItemKindType::POLEARM:
if ((power != 0) && (o_ptr->sval != SV_DEATH_SCYTHE)) {
WeaponEnchanter(player_ptr, o_ptr, lev, power).apply_magic();
}
break;
case ItemKindType::SWORD:
if ((power != 0) && (o_ptr->sval != SV_POISON_NEEDLE)) {
WeaponEnchanter(player_ptr, o_ptr, lev, power).apply_magic();
}
break;
```
これを、基底クラスであるAbstractWeaponEnchanter のコンストラクタで強化/弱化対象かを判定させることとする
今までと違って必ずコンストラクタは呼ばれるが、大した負荷増にはならないと想定される | priority | weaponenchanterの改善 以下はapply magic cpp の抜粋 weaponenchanterのコンストラクタを呼ぶかどうかを判定している しかしこれは設計が外に出てしまっていて弱い: cpp case itemkindtype bolt if power weaponenchanter player ptr o ptr lev power apply magic break case itemkindtype polearm if power o ptr sval sv death scythe weaponenchanter player ptr o ptr lev power apply magic break case itemkindtype sword if power o ptr sval sv poison needle weaponenchanter player ptr o ptr lev power apply magic break これを、基底クラスであるabstractweaponenchanter のコンストラクタで強化 弱化対象かを判定させることとする 今までと違って必ずコンストラクタは呼ばれるが、大した負荷増にはならないと想定される | 1 |
690,871 | 23,675,558,743 | IssuesEvent | 2022-08-28 02:40:39 | apache/hudi | https://api.github.com/repos/apache/hudi | closed | [SUPPORT] Duplicates appears on the some attempts of rewrite | aws-support priority:major writer-core | Dataset has two stages: the initial upload from a snapshot (insert operations) and after that updates happens on demand from Kafka (upserts).
I checked the snapshot, it does not contain any duplicates. But on the second stage some duplicates appear. In case of duplicates dataset has 2 records with the same key in the same partition (files are different). The first record is from snapshot load, se second one is upserted from Kafka. It looks like upserts do not overwrite data from the snapshot in some cases. There is no such problem for small datasets, it appears on a big one.
The number of duplicates is not big: ~1000 for ~100000 upserted records.
Options roughly are
```
DataSourceWriteOptions.TABLE_TYPE -> DataSourceWriteOptions.MOR_TABLE_TYPE_OPT_VAL,
DataSourceWriteOptions.PRECOMBINE_FIELD -> "internal_ts",
FileSystemViewStorageConfig.INCREMENTAL_TIMELINE_SYNC_ENABLE -> false,
DataSourceWriteOptions.HIVE_STYLE_PARTITIONING -> true,
HoodieCompactionConfig.CLEANER_INCREMENTAL_MODE_ENABLE -> true,
HoodieCompactionConfig.CLEANER_POLICY -> HoodieCleaningPolicy.KEEP_LATEST_FILE_VERSIONS,
HoodieCompactionConfig.CLEANER_FILE_VERSIONS_RETAINED -> 3,
DataSourceWriteOptions.ASYNC_COMPACT_ENABLE -> false,
HoodieCompactionConfig.INLINE_COMPACT -> true,
HoodiePayloadConfig.EVENT_TIME_FIELD -> "internal_ts",
HoodiePayloadConfig.ORDERING_FIELD -> "internal_ts",
DataSourceWriteOptions.PAYLOAD_CLASS_NAME -> "org.apache.hudi.common.model.EventTimeAvroPayload"
```
Data sample as CSV:
```
_hoodie_commit_time,_hoodie_commit_seqno,_hoodie_record_key,_hoodie_partition_path,_hoodie_file_name,internal_ts,event_type
20220524104142181,20220524104142181_2_5302363,202713158,date=2021-05-22,eafb523b-6e06-467f-aa73-59f2a6f1b0a7-0_2-4468-297519_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_5_7,202713158,date=2021-05-22,4c6c650e-ed42-4ce2-a663-5f84ed919bd4-0_5-46-1312_20220524110134548.parquet,1653177858000,1
20220524104142181,20220524104142181_2_5301697,202720884,date=2021-05-22,eafb523b-6e06-467f-aa73-59f2a6f1b0a7-0_2-4468-297519_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_5_5,202720884,date=2021-05-22,4c6c650e-ed42-4ce2-a663-5f84ed919bd4-0_5-46-1312_20220524110134548.parquet,1653182904000,1
20220524104142181,20220524104142181_2_5301713,202725262,date=2021-05-22,eafb523b-6e06-467f-aa73-59f2a6f1b0a7-0_2-4468-297519_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_5_4,202725262,date=2021-05-22,4c6c650e-ed42-4ce2-a663-5f84ed919bd4-0_5-46-1312_20220524110134548.parquet,1653185666000,1
20220524104142181,20220524104142181_2_5301843,202732411,date=2021-05-22,eafb523b-6e06-467f-aa73-59f2a6f1b0a7-0_2-4468-297519_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_5_6,202732411,date=2021-05-22,4c6c650e-ed42-4ce2-a663-5f84ed919bd4-0_5-46-1312_20220524110134548.parquet,1653190648000,1
20220524104142181,20220524104142181_2_5301968,202743505,date=2021-05-22,eafb523b-6e06-467f-aa73-59f2a6f1b0a7-0_2-4468-297519_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_5_3,202743505,date=2021-05-22,4c6c650e-ed42-4ce2-a663-5f84ed919bd4-0_5-46-1312_20220524110134548.parquet,1653198094000,1
20220524104142181,20220524104142181_2_5302039,202761336,date=2021-05-22,eafb523b-6e06-467f-aa73-59f2a6f1b0a7-0_2-4468-297519_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_5_2,202761336,date=2021-05-22,4c6c650e-ed42-4ce2-a663-5f84ed919bd4-0_5-46-1312_20220524110134548.parquet,1653210043000,1
20220524104142181,20220524104142181_7_5217271,202986883,date=2021-05-24,ecca9f33-8691-4d85-b56b-5e5bfcf7d6a9-0_7-4470-297537_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_29_13514,202986883,date=2021-05-24,4a5d1ec9-f67f-4db2-aa2d-3c169f35450c-0_29-52-1335_20220524110134548.parquet,1653350461000,1
20220524104142181,20220524104142181_7_5217354,202987578,date=2021-05-24,ecca9f33-8691-4d85-b56b-5e5bfcf7d6a9-0_7-4470-297537_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_29_13380,202987578,date=2021-05-24,4a5d1ec9-f67f-4db2-aa2d-3c169f35450c-0_29-52-1335_20220524110134548.parquet,1653350881000,1
20220524104142181,20220524104142181_7_5217375,202987648,date=2021-05-24,ecca9f33-8691-4d85-b56b-5e5bfcf7d6a9-0_7-4470-297537_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_29_13589,202987648,date=2021-05-24,4a5d1ec9-f67f-4db2-aa2d-3c169f35450c-0_29-52-1335_20220524110134548.parquet,1653350882000,1
20220524104142181,20220524104142181_7_5217449,202988003,date=2021-05-24,ecca9f33-8691-4d85-b56b-5e5bfcf7d6a9-0_7-4470-297537_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_29_13221,202988003,date=2021-05-24,4a5d1ec9-f67f-4db2-aa2d-3c169f35450c-0_29-52-1335_20220524110134548.parquet,1653351062000,1
20220524104142181,20220524104142181_7_5217496,202988323,date=2021-05-24,ecca9f33-8691-4d85-b56b-5e5bfcf7d6a9-0_7-4470-297537_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_29_13425,202988323,date=2021-05-24,4a5d1ec9-f67f-4db2-aa2d-3c169f35450c-0_29-52-1335_20220524110134548.parquet,1653351242000,1
```
In this sample of data event_type with value 0 corresponds to inserted values and 1 for upserted ones. The field "internal_ts" is used as an ordering field.
How could I solve this issue? Hudi version is 0.10.1, environment is AWS Glue | 1.0 | [SUPPORT] Duplicates appears on the some attempts of rewrite - Dataset has two stages: the initial upload from a snapshot (insert operations) and after that updates happens on demand from Kafka (upserts).
I checked the snapshot, it does not contain any duplicates. But on the second stage some duplicates appear. In case of duplicates dataset has 2 records with the same key in the same partition (files are different). The first record is from snapshot load, se second one is upserted from Kafka. It looks like upserts do not overwrite data from the snapshot in some cases. There is no such problem for small datasets, it appears on a big one.
The number of duplicates is not big: ~1000 for ~100000 upserted records.
Options roughly are
```
DataSourceWriteOptions.TABLE_TYPE -> DataSourceWriteOptions.MOR_TABLE_TYPE_OPT_VAL,
DataSourceWriteOptions.PRECOMBINE_FIELD -> "internal_ts",
FileSystemViewStorageConfig.INCREMENTAL_TIMELINE_SYNC_ENABLE -> false,
DataSourceWriteOptions.HIVE_STYLE_PARTITIONING -> true,
HoodieCompactionConfig.CLEANER_INCREMENTAL_MODE_ENABLE -> true,
HoodieCompactionConfig.CLEANER_POLICY -> HoodieCleaningPolicy.KEEP_LATEST_FILE_VERSIONS,
HoodieCompactionConfig.CLEANER_FILE_VERSIONS_RETAINED -> 3,
DataSourceWriteOptions.ASYNC_COMPACT_ENABLE -> false,
HoodieCompactionConfig.INLINE_COMPACT -> true,
HoodiePayloadConfig.EVENT_TIME_FIELD -> "internal_ts",
HoodiePayloadConfig.ORDERING_FIELD -> "internal_ts",
DataSourceWriteOptions.PAYLOAD_CLASS_NAME -> "org.apache.hudi.common.model.EventTimeAvroPayload"
```
Data sample as CSV:
```
_hoodie_commit_time,_hoodie_commit_seqno,_hoodie_record_key,_hoodie_partition_path,_hoodie_file_name,internal_ts,event_type
20220524104142181,20220524104142181_2_5302363,202713158,date=2021-05-22,eafb523b-6e06-467f-aa73-59f2a6f1b0a7-0_2-4468-297519_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_5_7,202713158,date=2021-05-22,4c6c650e-ed42-4ce2-a663-5f84ed919bd4-0_5-46-1312_20220524110134548.parquet,1653177858000,1
20220524104142181,20220524104142181_2_5301697,202720884,date=2021-05-22,eafb523b-6e06-467f-aa73-59f2a6f1b0a7-0_2-4468-297519_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_5_5,202720884,date=2021-05-22,4c6c650e-ed42-4ce2-a663-5f84ed919bd4-0_5-46-1312_20220524110134548.parquet,1653182904000,1
20220524104142181,20220524104142181_2_5301713,202725262,date=2021-05-22,eafb523b-6e06-467f-aa73-59f2a6f1b0a7-0_2-4468-297519_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_5_4,202725262,date=2021-05-22,4c6c650e-ed42-4ce2-a663-5f84ed919bd4-0_5-46-1312_20220524110134548.parquet,1653185666000,1
20220524104142181,20220524104142181_2_5301843,202732411,date=2021-05-22,eafb523b-6e06-467f-aa73-59f2a6f1b0a7-0_2-4468-297519_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_5_6,202732411,date=2021-05-22,4c6c650e-ed42-4ce2-a663-5f84ed919bd4-0_5-46-1312_20220524110134548.parquet,1653190648000,1
20220524104142181,20220524104142181_2_5301968,202743505,date=2021-05-22,eafb523b-6e06-467f-aa73-59f2a6f1b0a7-0_2-4468-297519_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_5_3,202743505,date=2021-05-22,4c6c650e-ed42-4ce2-a663-5f84ed919bd4-0_5-46-1312_20220524110134548.parquet,1653198094000,1
20220524104142181,20220524104142181_2_5302039,202761336,date=2021-05-22,eafb523b-6e06-467f-aa73-59f2a6f1b0a7-0_2-4468-297519_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_5_2,202761336,date=2021-05-22,4c6c650e-ed42-4ce2-a663-5f84ed919bd4-0_5-46-1312_20220524110134548.parquet,1653210043000,1
20220524104142181,20220524104142181_7_5217271,202986883,date=2021-05-24,ecca9f33-8691-4d85-b56b-5e5bfcf7d6a9-0_7-4470-297537_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_29_13514,202986883,date=2021-05-24,4a5d1ec9-f67f-4db2-aa2d-3c169f35450c-0_29-52-1335_20220524110134548.parquet,1653350461000,1
20220524104142181,20220524104142181_7_5217354,202987578,date=2021-05-24,ecca9f33-8691-4d85-b56b-5e5bfcf7d6a9-0_7-4470-297537_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_29_13380,202987578,date=2021-05-24,4a5d1ec9-f67f-4db2-aa2d-3c169f35450c-0_29-52-1335_20220524110134548.parquet,1653350881000,1
20220524104142181,20220524104142181_7_5217375,202987648,date=2021-05-24,ecca9f33-8691-4d85-b56b-5e5bfcf7d6a9-0_7-4470-297537_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_29_13589,202987648,date=2021-05-24,4a5d1ec9-f67f-4db2-aa2d-3c169f35450c-0_29-52-1335_20220524110134548.parquet,1653350882000,1
20220524104142181,20220524104142181_7_5217449,202988003,date=2021-05-24,ecca9f33-8691-4d85-b56b-5e5bfcf7d6a9-0_7-4470-297537_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_29_13221,202988003,date=2021-05-24,4a5d1ec9-f67f-4db2-aa2d-3c169f35450c-0_29-52-1335_20220524110134548.parquet,1653351062000,1
20220524104142181,20220524104142181_7_5217496,202988323,date=2021-05-24,ecca9f33-8691-4d85-b56b-5e5bfcf7d6a9-0_7-4470-297537_20220524104142181.parquet,0,0
20220524110134548,20220524110134548_29_13425,202988323,date=2021-05-24,4a5d1ec9-f67f-4db2-aa2d-3c169f35450c-0_29-52-1335_20220524110134548.parquet,1653351242000,1
```
In this sample of data event_type with value 0 corresponds to inserted values and 1 for upserted ones. The field "internal_ts" is used as an ordering field.
How could I solve this issue? Hudi version is 0.10.1, environment is AWS Glue | priority | duplicates appears on the some attempts of rewrite dataset has two stages the initial upload from a snapshot insert operations and after that updates happens on demand from kafka upserts i checked the snapshot it does not contain any duplicates but on the second stage some duplicates appear in case of duplicates dataset has records with the same key in the same partition files are different the first record is from snapshot load se second one is upserted from kafka it looks like upserts do not overwrite data from the snapshot in some cases there is no such problem for small datasets it appears on a big one the number of duplicates is not big for upserted records options roughly are datasourcewriteoptions table type datasourcewriteoptions mor table type opt val datasourcewriteoptions precombine field internal ts filesystemviewstorageconfig incremental timeline sync enable false datasourcewriteoptions hive style partitioning true hoodiecompactionconfig cleaner incremental mode enable true hoodiecompactionconfig cleaner policy hoodiecleaningpolicy keep latest file versions hoodiecompactionconfig cleaner file versions retained datasourcewriteoptions async compact enable false hoodiecompactionconfig inline compact true hoodiepayloadconfig event time field internal ts hoodiepayloadconfig ordering field internal ts datasourcewriteoptions payload class name org apache hudi common model eventtimeavropayload data sample as csv hoodie commit time hoodie commit seqno hoodie record key hoodie partition path hoodie file name internal ts event type date parquet date parquet date parquet date parquet date parquet date parquet date parquet date parquet date parquet date parquet date parquet date parquet date parquet date parquet date parquet date parquet date parquet date parquet date parquet date parquet date parquet date parquet in this sample of data event type with value corresponds to 
inserted values and for upserted ones the field internal ts is used as an ordering field how could i solve this issue hudi version is environment is aws glue | 1 |
125,522 | 12,262,282,783 | IssuesEvent | 2020-05-06 21:45:36 | csinn/Painted-Prosthetics | https://api.github.com/repos/csinn/Painted-Prosthetics | opened | Create website wireframe | documentation wiki | - [ ] Drawing window
- [ ] Info & Contacts section
- [ ] Drawing submission window | 1.0 | Create website wireframe - - [ ] Drawing window
- [ ] Info & Contacts section
- [ ] Drawing submission window | non_priority | create website wireframe drawing window info contacts section drawing submission window | 0 |
441,124 | 12,708,407,297 | IssuesEvent | 2020-06-23 10:29:30 | wso2/micro-integrator | https://api.github.com/repos/wso2/micro-integrator | closed | Add possible configurations in the deployment toml | Priority/Highest Type/Improvement | **Description:**
Add possible configurations in the deployment toml file as commented lines, so that the user can uncomment and use them easily.
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
MI 1.2.0
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> | 1.0 | Add possible configurations in the deployment toml - **Description:**
Add possible configurations in the deployment toml file as commented lines, so that the user can uncomment and use them easily.
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
MI 1.2.0
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> | priority | add possible configurations in the deployment toml description add possible configurations in the deployment toml file as commented lines so that the user can uncomment and use them easily suggested labels suggested assignees affected product version mi os db other environment details and versions steps to reproduce related issues | 1 |
16,518 | 2,615,117,960 | IssuesEvent | 2015-03-01 05:43:28 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | closed | health-v2-atom-oauth-sample | auto-migrated Priority-Medium Type-Sample | ```
Which API and version (e.g. Google Calendar Data API version 2)?
Google Health Data API version 2
What format (e.g. JSON, Atom)?
JSON
What Authentation (e.g. OAuth, OAuth 2, Android, ClientLogin)?
ClientLogin
Java environment (e.g. Java 6, Android 2.2, App Engine 1.3.7)?
Java 6
External references, such as API reference guide?
Please provide any additional information below.
```
Original issue reported on code.google.com by `Mehmet.A...@gmail.com` on 21 Dec 2010 at 7:34 | 1.0 | health-v2-atom-oauth-sample - ```
Which API and version (e.g. Google Calendar Data API version 2)?
Google Health Data API version 2
What format (e.g. JSON, Atom)?
JSON
What Authentation (e.g. OAuth, OAuth 2, Android, ClientLogin)?
ClientLogin
Java environment (e.g. Java 6, Android 2.2, App Engine 1.3.7)?
Java 6
External references, such as API reference guide?
Please provide any additional information below.
```
Original issue reported on code.google.com by `Mehmet.A...@gmail.com` on 21 Dec 2010 at 7:34 | priority | health atom oauth sample which api and version e g google calendar data api version google health data api version what format e g json atom json what authentation e g oauth oauth android clientlogin clientlogin java environment e g java android app engine java external references such as api reference guide please provide any additional information below original issue reported on code google com by mehmet a gmail com on dec at | 1 |
719,279 | 24,754,354,643 | IssuesEvent | 2022-10-21 16:14:30 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | Kubectl - Introduce "custom-columns" variant to add additional columns to output | priority/backlog kind/feature sig/cli lifecycle/rotten triage/accepted | I have re-opened https://github.com/kubernetes/kubernetes/issues/71612, as the issue had 40 +1s, I think its really useful, and I wasn't able to get it re-opened.
Currently, kubectl get supports --custom-columns as one of its output format flags. The behavior of this flag allows a user to specify a comma-delimited list of COLUMN_NAME mapped to a jsonpath value: COLUMN:.metadata.name.
The problem with the current behavior is that only the columns that are specified are shown in the final output printed to the screen. In the example above, only the following information would be presented to the user:
```
$ kubectl get pod my-pod --custom-columns=COLUMN_NAME=.metadata.name
COLUMN_NAME
my-pod
```
This behavior can be enhanced by introducing a new variant of the --custom-columns flag that would add the user-provided column-values to the normal human-readable output of the command:
```
$ kubectl get pod my-pod --extra-columns=CUSTOM_COLUMN=.metadata.name
NAME READY STATUS RESTARTS AGE CUSTOM_COLUMN
my-pod 1/1 Running 0 28s my-pod
```
/kind feature
/sig cli | 1.0 | Kubectl - Introduce "custom-columns" variant to add additional columns to output - I have re-opened https://github.com/kubernetes/kubernetes/issues/71612, as the issue had 40 +1s, I think its really useful, and I wasn't able to get it re-opened.
Currently, kubectl get supports --custom-columns as one of its output format flags. The behavior of this flag allows a user to specify a comma-delimited list of COLUMN_NAME mapped to a jsonpath value: COLUMN:.metadata.name.
The problem with the current behavior is that only the columns that are specified are shown in the final output printed to the screen. In the example above, only the following information would be presented to the user:
```
$ kubectl get pod my-pod --custom-columns=COLUMN_NAME=.metadata.name
COLUMN_NAME
my-pod
```
This behavior can be enhanced by introducing a new variant of the --custom-columns flag that would add the user-provided column-values to the normal human-readable output of the command:
```
$ kubectl get pod my-pod --extra-columns=CUSTOM_COLUMN=.metadata.name
NAME READY STATUS RESTARTS AGE CUSTOM_COLUMN
my-pod 1/1 Running 0 28s my-pod
```
/kind feature
/sig cli | priority | kubectl introduce custom columns variant to add additional columns to output i have re opened as the issue had i think its really useful and i wasn t able to get it re opened currently kubectl get supports custom columns as one of its output format flags the behavior of this flag allows a user to specify a comma delimited list of column name mapped to a jsonpath value column metadata name the problem with the current behavior is that only the columns that are specified are shown in the final output printed to the screen in the example above only the following information would be presented to the user kubectl get pod my pod custom columns column name metadata name column name my pod this behavior can be enhanced by introducing a new variant of the custom columns flag that would add the user provided column values to the normal human readable output of the command kubectl get pod my pod extra columns custom column metadata name name ready status restarts age custom column my pod running my pod kind feature sig cli | 1 |
299,839 | 25,930,362,923 | IssuesEvent | 2022-12-16 09:30:04 | trinodb/trino | https://api.github.com/repos/trinodb/trino | opened | Flaky Pinot test setup: ContainerLaunchException: Timed out waiting for container port to open (localhost ports: [49193, 49194, 49195] should be listening) | bug test | ```
Error: Tests run: 316, Failures: 1, Errors: 0, Skipped: 114, Time elapsed: 885.381 s <<< FAILURE! - in TestSuite
Error: io.trino.plugin.pinot.TestPinotWithoutAuthenticationIntegrationLatestVersionConnectorSmokeTest.init Time elapsed: 9.508 s <<< FAILURE!
org.testcontainers.containers.ContainerLaunchException: Container startup failed
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:349)
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:322)
at io.trino.plugin.pinot.TestingPinotCluster.start(TestingPinotCluster.java:142)
at io.trino.plugin.pinot.BasePinotIntegrationConnectorSmokeTest.createQueryRunner(BasePinotIntegrationConnectorSmokeTest.java:155)
at io.trino.testing.AbstractTestQueryFramework.init(AbstractTestQueryFramework.java:102)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104)
at org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java:515)
at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:217)
at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:144)
at org.testng.internal.TestMethodWorker.invokeBeforeClassMethods(TestMethodWorker.java:169)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:108)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.rnorth.ducttape.RetryCountExceededException: Retry limit hit with exception
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:88)
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:334)
... 17 more
Caused by: org.testcontainers.containers.ContainerLaunchException: Could not create/start container
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:542)
at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:344)
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
... 18 more
Caused by: org.testcontainers.containers.ContainerLaunchException: Timed out waiting for container port to open (localhost ports: [49193, 49194, 49195] should be listening)
at org.testcontainers.containers.wait.strategy.HostPortWaitStrategy.waitUntilReady(HostPortWaitStrategy.java:102)
at org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:52)
at org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:953)
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:485)
``` | 1.0 | Flaky Pinot test setup: ContainerLaunchException: Timed out waiting for container port to open (localhost ports: [49193, 49194, 49195] should be listening) - ```
Error: Tests run: 316, Failures: 1, Errors: 0, Skipped: 114, Time elapsed: 885.381 s <<< FAILURE! - in TestSuite
Error: io.trino.plugin.pinot.TestPinotWithoutAuthenticationIntegrationLatestVersionConnectorSmokeTest.init Time elapsed: 9.508 s <<< FAILURE!
org.testcontainers.containers.ContainerLaunchException: Container startup failed
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:349)
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:322)
at io.trino.plugin.pinot.TestingPinotCluster.start(TestingPinotCluster.java:142)
at io.trino.plugin.pinot.BasePinotIntegrationConnectorSmokeTest.createQueryRunner(BasePinotIntegrationConnectorSmokeTest.java:155)
at io.trino.testing.AbstractTestQueryFramework.init(AbstractTestQueryFramework.java:102)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104)
at org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java:515)
at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:217)
at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:144)
at org.testng.internal.TestMethodWorker.invokeBeforeClassMethods(TestMethodWorker.java:169)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:108)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.rnorth.ducttape.RetryCountExceededException: Retry limit hit with exception
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:88)
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:334)
... 17 more
Caused by: org.testcontainers.containers.ContainerLaunchException: Could not create/start container
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:542)
at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:344)
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
... 18 more
Caused by: org.testcontainers.containers.ContainerLaunchException: Timed out waiting for container port to open (localhost ports: [49193, 49194, 49195] should be listening)
at org.testcontainers.containers.wait.strategy.HostPortWaitStrategy.waitUntilReady(HostPortWaitStrategy.java:102)
at org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:52)
at org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:953)
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:485)
``` | non_priority | flaky pinot test setup containerlaunchexception timed out waiting for container port to open localhost ports should be listening error tests run failures errors skipped time elapsed s failure in testsuite error io trino plugin pinot testpinotwithoutauthenticationintegrationlatestversionconnectorsmoketest init time elapsed s failure org testcontainers containers containerlaunchexception container startup failed at org testcontainers containers genericcontainer dostart genericcontainer java at org testcontainers containers genericcontainer start genericcontainer java at io trino plugin pinot testingpinotcluster start testingpinotcluster java at io trino plugin pinot basepinotintegrationconnectorsmoketest createqueryrunner basepinotintegrationconnectorsmoketest java at io trino testing abstracttestqueryframework init abstracttestqueryframework java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org testng internal methodinvocationhelper invokemethod methodinvocationhelper java at org testng internal invoker invokeconfigurationmethod invoker java at org testng internal invoker invokeconfigurations invoker java at org testng internal invoker invokeconfigurations invoker java at org testng internal testmethodworker invokebeforeclassmethods testmethodworker java at org testng internal testmethodworker run testmethodworker java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java caused by org rnorth ducttape retrycountexceededexception retry limit hit with exception at org rnorth ducttape unreliables 
unreliables retryuntilsuccess unreliables java at org testcontainers containers genericcontainer dostart genericcontainer java more caused by org testcontainers containers containerlaunchexception could not create start container at org testcontainers containers genericcontainer trystart genericcontainer java at org testcontainers containers genericcontainer lambda dostart genericcontainer java at org rnorth ducttape unreliables unreliables retryuntilsuccess unreliables java more caused by org testcontainers containers containerlaunchexception timed out waiting for container port to open localhost ports should be listening at org testcontainers containers wait strategy hostportwaitstrategy waituntilready hostportwaitstrategy java at org testcontainers containers wait strategy abstractwaitstrategy waituntilready abstractwaitstrategy java at org testcontainers containers genericcontainer waituntilcontainerstarted genericcontainer java at org testcontainers containers genericcontainer trystart genericcontainer java | 0 |
44,675 | 5,639,226,268 | IssuesEvent | 2017-04-06 13:54:41 | pods-framework/pods | https://api.github.com/repos/pods-framework/pods | closed | Ehancement: Activate Pods Templates on Pods Activation for User Experience | Enhancement Fixed / Needs Testing in progress Needs Developer Feedback | PR: #4060
Ehancement: Activate Pods Templates on Pods Activation for User Experience submitted via Slack by jimtrue
It's fairly easy to miss this for new & beginning users of Pods. This would be a quality of life improvement. If a developer or anyone else didn't need Auto Templates or Pods Templates, they could very easily disable this component.
| 1.0 | Ehancement: Activate Pods Templates on Pods Activation for User Experience - PR: #4060
Ehancement: Activate Pods Templates on Pods Activation for User Experience submitted via Slack by jimtrue
It's fairly easy to miss this for new & beginning users of Pods. This would be a quality of life improvement. If a developer or anyone else didn't need Auto Templates or Pods Templates, they could very easily disable this component.
| non_priority | ehancement activate pods templates on pods activation for user experience pr ehancement activate pods templates on pods activation for user experience submitted via slack by jimtrue it s fairly easy to miss this for new beginning users of pods this would be a quality of life improvement if a developer or anyone else didn t need auto templates or pods templates they could very easily disable this component | 0 |
50,343 | 10,478,141,946 | IssuesEvent | 2019-09-23 22:54:40 | aspnet/AspNetCore | https://api.github.com/repos/aspnet/AspNetCore | closed | Update to latest CSharp Language Server package | 3 - Done area-mvc cost: S enhancement feature-razor.vscode | There's a few [breaking changes](https://github.com/OmniSharp/csharp-language-server-protocol/pull/128/files/6db48fc44fa0ae1fa45b4c8c86239c81c42ad82a..9e468e067cc3f71677dd920d3b862675c1f1a1ea) that were recently merged. We'll need to react. | 1.0 | Update to latest CSharp Language Server package - There's a few [breaking changes](https://github.com/OmniSharp/csharp-language-server-protocol/pull/128/files/6db48fc44fa0ae1fa45b4c8c86239c81c42ad82a..9e468e067cc3f71677dd920d3b862675c1f1a1ea) that were recently merged. We'll need to react. | non_priority | update to latest csharp language server package there s a few that were recently merged we ll need to react | 0 |
40,478 | 2,868,922,046 | IssuesEvent | 2015-06-05 21:58:47 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Make sure docs for publishing a package mention that deleting is discouraged | enhancement Fixed Priority-Medium | <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#5093_
----
When pub supports user upload of packages, we want to make sure they understand uploading is (generally) forever. The docs for the command should clarify this. | 1.0 | Make sure docs for publishing a package mention that deleting is discouraged - <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#5093_
----
When pub supports user upload of packages, we want to make sure they understand uploading is (generally) forever. The docs for the command should clarify this. | priority | make sure docs for publishing a package mention that deleting is discouraged issue by originally opened as dart lang sdk when pub supports user upload of packages we want to make sure they understand uploading is generally forever the docs for the command should clarify this | 1 |
115,357 | 17,313,717,568 | IssuesEvent | 2021-07-27 01:01:38 | shaimael/keycloak | https://api.github.com/repos/shaimael/keycloak | opened | CVE-2021-35515 (Medium) detected in commons-compress-1.18.jar | security vulnerability | ## CVE-2021-35515 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.18.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats. These include: bzip2, gzip, pack200,
lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4,
Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Path to dependency file: keycloak/examples/providers/authenticator/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar</p>
<p>
Dependency Hierarchy:
- openshift-restclient-java-8.0.0.Final.jar (Root Library)
- :x: **commons-compress-1.18.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted 7Z archive, the construction of the list of codecs that decompress an entry can result in an infinite loop. This could be used to mount a denial of service attack against services that use Compress' sevenz package.
<p>Publish Date: 2021-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35515>CVE-2021-35515</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://commons.apache.org/proper/commons-compress/security-reports.html">https://commons.apache.org/proper/commons-compress/security-reports.html</a></p>
<p>Release Date: 2021-07-13</p>
<p>Fix Resolution: org.apache.commons:commons-compress:1.21</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.commons","packageName":"commons-compress","packageVersion":"1.18","packageFilePaths":["/examples/providers/authenticator/pom.xml","/model/jpa/pom.xml","/testsuite/model/pom.xml","/testsuite/integration-arquillian/servers/app-server/undertow/pom.xml","/examples/providers/domain-extension/pom.xml","/testsuite/integration-arquillian/util/pom.xml","/wildfly/server-subsystem/pom.xml","/testsuite/integration-arquillian/servers/auth-server/undertow/pom.xml","/services/pom.xml","/testsuite/utils/pom.xml","/testsuite/integration-arquillian/servers/auth-server/services/testsuite-providers/pom.xml","/wildfly/extensions/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.openshift:openshift-restclient-java:8.0.0.Final;org.apache.commons:commons-compress:1.18","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.commons:commons-compress:1.21"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-35515","vulnerabilityDetails":"When reading a specially crafted 7Z archive, the construction of the list of codecs that decompress an entry can result in an infinite loop. This could be used to mount a denial of service attack against services that use Compress\u0027 sevenz package.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35515","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-35515 (Medium) detected in commons-compress-1.18.jar - ## CVE-2021-35515 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.18.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats. These include: bzip2, gzip, pack200,
lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4,
Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Path to dependency file: keycloak/examples/providers/authenticator/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,/home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar</p>
<p>
Dependency Hierarchy:
- openshift-restclient-java-8.0.0.Final.jar (Root Library)
- :x: **commons-compress-1.18.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted 7Z archive, the construction of the list of codecs that decompress an entry can result in an infinite loop. This could be used to mount a denial of service attack against services that use Compress' sevenz package.
<p>Publish Date: 2021-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35515>CVE-2021-35515</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://commons.apache.org/proper/commons-compress/security-reports.html">https://commons.apache.org/proper/commons-compress/security-reports.html</a></p>
<p>Release Date: 2021-07-13</p>
<p>Fix Resolution: org.apache.commons:commons-compress:1.21</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.commons","packageName":"commons-compress","packageVersion":"1.18","packageFilePaths":["/examples/providers/authenticator/pom.xml","/model/jpa/pom.xml","/testsuite/model/pom.xml","/testsuite/integration-arquillian/servers/app-server/undertow/pom.xml","/examples/providers/domain-extension/pom.xml","/testsuite/integration-arquillian/util/pom.xml","/wildfly/server-subsystem/pom.xml","/testsuite/integration-arquillian/servers/auth-server/undertow/pom.xml","/services/pom.xml","/testsuite/utils/pom.xml","/testsuite/integration-arquillian/servers/auth-server/services/testsuite-providers/pom.xml","/wildfly/extensions/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.openshift:openshift-restclient-java:8.0.0.Final;org.apache.commons:commons-compress:1.18","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.commons:commons-compress:1.21"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-35515","vulnerabilityDetails":"When reading a specially crafted 7Z archive, the construction of the list of codecs that decompress an entry can result in an infinite loop. 
This could be used to mount a denial of service attack against services that use Compress\u0027 sevenz package.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35515","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> --> | non_priority | cve medium detected in commons compress jar cve medium severity vulnerability vulnerable library commons compress jar apache commons compress software defines an api for working with compression and archive formats these include gzip lzma xz snappy traditional unix compress deflate brotli zstandard and ar cpio jar tar zip dump arj path to dependency file keycloak examples providers authenticator pom xml path to vulnerable library home wss scanner repository org apache commons commons compress commons compress jar home wss scanner repository org apache commons commons compress commons compress jar home wss scanner repository org apache commons commons compress commons compress jar home wss scanner repository org apache commons commons compress commons compress jar home wss scanner repository org apache commons commons compress commons compress jar home wss scanner repository org apache commons commons compress commons compress jar home wss scanner repository org apache commons commons compress commons compress jar home wss scanner repository org apache commons commons compress commons compress jar home wss scanner repository org apache commons commons compress commons compress jar home wss scanner repository org apache commons commons compress commons compress jar home wss scanner repository org apache commons commons compress commons compress jar home wss scanner repository org apache commons commons compress commons compress jar home wss scanner repository org apache commons commons compress commons compress jar home wss scanner repository org apache commons commons compress 
commons compress jar dependency hierarchy openshift restclient java final jar root library x commons compress jar vulnerable library found in base branch master vulnerability details when reading a specially crafted archive the construction of the list of codecs that decompress an entry can result in an infinite loop this could be used to mount a denial of service attack against services that use compress sevenz package publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache commons commons compress isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com openshift openshift restclient java final org apache commons commons compress isminimumfixversionavailable true minimumfixversion org apache commons commons compress basebranches vulnerabilityidentifier cve vulnerabilitydetails when reading a specially crafted archive the construction of the list of codecs that decompress an entry can result in an infinite loop this could be used to mount a denial of service attack against services that use compress sevenz package vulnerabilityurl | 0 |
139,803 | 11,278,672,644 | IssuesEvent | 2020-01-15 07:29:44 | a2000-erp-team/WEBERP | https://api.github.com/repos/a2000-erp-team/WEBERP | opened | Warehouse->Stk Transaction->Stock Transfer (direct). The next line automatically comes out when accepting the line details. | WEB ERP Testing By Katrina | -Create IM transaction and accepts line item.

When accepts the line item, the next line automatically comes out.

| 1.0 | Warehouse->Stk Transaction->Stock Transfer (direct). The next line automatically comes out when accepting the line details. - -Create IM transaction and accepts line item.

When accepts the line item, the next line automatically comes out.

| non_priority | warehouse stk transaction stock transfer direct the next line automatically comes out when accepting the line details create im transaction and accepts line item when accepts the line item the next line automatically comes out | 0 |
521,774 | 15,115,554,607 | IssuesEvent | 2021-02-09 04:47:20 | PazerOP/tf2_bot_detector | https://api.github.com/repos/PazerOP/tf2_bot_detector | closed | Respect Votekick Cooldown | Priority: Low Type: Enhancement | After the bot detector calls a votekick, there may still be bots on the player's team - either because there aren't other players on the team with the bot detector calling a votekick or because the bot survived the votekick. The bot detector still tries to call votekicks, however, even though a vote cooldown may be preventing a votekick from being succesfully.
The result is a continuing popup blocking part of the screen, along with a failure sound effect, both of which make the player's life harder, for no discernible benefit.
After an auto-votekick is called, the bot detector should refrain from calling votekicks again until either the server has been changed (clearing the cooldown) or a certain amount of time has passed (perhaps configurable, but defaulting to the value on TF2 casual servers, which I believe is 2 minutes).
In addition, it may be helpful to automatically broadcast a message in teamchat to prompt other teammates to kick the bot (much like the program broadcasts a message in normal chat when the enemy team has a bot), including an explanation that the current player's votekick is on cooldown, so they can't call the vote themselves. Perhaps some logic can be used to ensure this doesn't happen in the first few seconds of votekick cooldown, so that it doesn't prompt unnecessary spam in the event that another player's Tf2 Bot Detector is just about to call a votekick. | 1.0 | Respect Votekick Cooldown - After the bot detector calls a votekick, there may still be bots on the player's team - either because there aren't other players on the team with the bot detector calling a votekick or because the bot survived the votekick. The bot detector still tries to call votekicks, however, even though a vote cooldown may be preventing a votekick from being succesfully.
The result is a continuing popup blocking part of the screen, along with a failure sound effect, both of which make the player's life harder, for no discernible benefit.
After an auto-votekick is called, the bot detector should refrain from calling votekicks again until either the server has been changed (clearing the cooldown) or a certain amount of time has passed (perhaps configurable, but defaulting to the value on TF2 casual servers, which I believe is 2 minutes).
In addition, it may be helpful to automatically broadcast a message in teamchat to prompt other teammates to kick the bot (much like the program broadcasts a message in normal chat when the enemy team has a bot), including an explanation that the current player's votekick is on cooldown, so they can't call the vote themselves. Perhaps some logic can be used to ensure this doesn't happen in the first few seconds of votekick cooldown, so that it doesn't prompt unnecessary spam in the event that another player's Tf2 Bot Detector is just about to call a votekick. | priority | respect votekick cooldown after the bot detector calls a votekick there may still be bots on the player s team either because there aren t other players on the team with the bot detector calling a votekick or because the bot survived the votekick the bot detector still tries to call votekicks however even though a vote cooldown may be preventing a votekick from being succesfully the result is a continuing popup blocking part of the screen along with a failure sound effect both of which make the player s life harder for no discernible benefit after an auto votekick is called the bot detector should refrain from calling votekicks again until either the server has been changed clearing the cooldown or a certain amount of time has passed perhaps configurable but defaulting to the value on casual servers which i believe is minutes in addition it may be helpful to automatically broadcast a message in teamchat to prompt other teammates to kick the bot much like the program broadcasts a message in normal chat when the enemy team has a bot including an explanation that the current player s votekick is on cooldown so they can t call the vote themselves perhaps some logic can be used to ensure this doesn t happen in the first few seconds of votekick cooldown so that it doesn t prompt unnecessary spam in the event that another player s bot detector is just about to call a votekick | 1 |
178,172 | 14,662,332,194 | IssuesEvent | 2020-12-29 06:55:41 | preeti13456/CityonBikes | https://api.github.com/repos/preeti13456/CityonBikes | closed | README update | SWoC 2021 documentation enhancement good first issue help wanted | very little information has been provided in README.
Ex: how to run the setup is missing and a little description about the project. @preeti13456 it will be nice of you if can do that.
And can you provide any group link to have a discussion over the issue. | 1.0 | README update - very little information has been provided in README.
Ex: how to run the setup is missing and a little description about the project. @preeti13456 it will be nice of you if can do that.
And can you provide any group link to have a discussion over the issue. | non_priority | readme update very little information has been provided in readme ex how to run the setup is missing and a little description about the project it will be nice of you if can do that and can you provide any group link to have a discussion over the issue | 0 |
576,678 | 17,091,939,665 | IssuesEvent | 2021-07-08 18:43:46 | internetarchive/openlibrary | https://api.github.com/repos/internetarchive/openlibrary | opened | Data Dumps not auto-generating | Affects: Data Lead: @cdrini Priority: 2 Type: Bug | Despite #5263 being resolved, it looks like the data dumps weren't uploaded on July 1st :/
### Relevant url?
https://archive.org/details/ol_exports?sort=-publicdate
### Proposal & Constraints
- Run manually for now
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
@mekarpeles @jimman2003
| 1.0 | Data Dumps not auto-generating - Despite #5263 being resolved, it looks like the data dumps weren't uploaded on July 1st :/
### Relevant url?
https://archive.org/details/ol_exports?sort=-publicdate
### Proposal & Constraints
- Run manually for now
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
@mekarpeles @jimman2003
| priority | data dumps not auto generating despite being resolved it looks like the data dumps weren t uploaded on july relevant url proposal constraints run manually for now related files stakeholders mekarpeles | 1 |
11,424 | 9,188,220,579 | IssuesEvent | 2019-03-06 06:35:37 | askmench/mench-web-app | https://api.github.com/repos/askmench/mench-web-app | opened | Cache /ledger Page | DB/Server/Infrastructure | As it also has stats and it takes a few seconds to load. best to be cached every 5 min or so. | 1.0 | Cache /ledger Page - As it also has stats and it takes a few seconds to load. best to be cached every 5 min or so. | non_priority | cache ledger page as it also has stats and it takes a few seconds to load best to be cached every min or so | 0 |
209,636 | 16,046,774,766 | IssuesEvent | 2021-04-22 14:27:12 | xamarin/xamarin-macios | https://api.github.com/repos/xamarin/xamarin-macios | opened | Updating to NUnitLite 3.13 breaks the TouchUnit runner | enhancement test-only-issue | Follow up of https://github.com/xamarin/xamarin-macios/pull/11287
The _hierarchy_ of `TestSuite` changed, so no tests are seen without updating the runner
up to 3.12
> `Suite.TestCaseCount` > 0
3.13+
> `Suite.TestCaseCount` == 0
> `Suite.HasChildren` == true
> `Suite.Tests` has a collection of `TestSuite`
That means updating all tests (and possibly other code that interacts, e.g. custom loggers).
This needs to be planned to happen at the _right_ time. | 1.0 | Updating to NUnitLite 3.13 breaks the TouchUnit runner - Follow up of https://github.com/xamarin/xamarin-macios/pull/11287
The _hierarchy_ of `TestSuite` changed, so no tests are seen without updating the runner
up to 3.12
> `Suite.TestCaseCount` > 0
3.13+
> `Suite.TestCaseCount` == 0
> `Suite.HasChildren` == true
> `Suite.Tests` has a collection of `TestSuite`
That means updating all tests (and possibly other code that interacts, e.g. custom loggers).
This needs to be planned to happen at the _right_ time. | non_priority | updating to nunitlite breaks the touchunit runner follow up of the hierarchy of testsuite changed so no tests are seen without updating the runner up to suite testcasecount suite testcasecount suite haschildren true suite tests has a collection of testsuite that means updating all tests and possibly other code that interacts e g custom loggers this needs to be planned to happen at the right time | 0 |
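The hierarchy change above means a flat `TestCaseCount` on the root suite no longer sees tests nested inside child suites, so a runner has to walk the children recursively. The sketch below models that idea with a plain Python tree; it is not the actual NUnitLite API, just an illustration of the traversal:

```python
class Node:
    """Minimal stand-in for the test tree: suites hold child nodes, cases are leaves."""
    def __init__(self, is_suite, children=None):
        self.is_suite = is_suite
        self.tests = children or []       # in 3.13+, suites nest other suites here

def count_test_cases(node):
    """Count test cases by walking the whole hierarchy, not just the root's count."""
    if not node.is_suite:
        return 1                          # a leaf is a single test case
    return sum(count_test_cases(child) for child in node.tests)
```

With this shape, a root whose cases all sit under an inner suite still reports the right total, which is what the 3.13-aware runner update has to achieve.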
73,351 | 3,411,069,067 | IssuesEvent | 2015-12-04 23:26:05 | peacecorps/medlink | https://api.github.com/repos/peacecorps/medlink | closed | Receipt Confirmation Pop-up on web | Top Priority | _This function triggers a pop-up window asking PCV to confirm receipt of an outstanding order upon first log-in._
*Reasoning*- Right now we ask for confirmation of receipt through sms or through a semi-complicated web process. Responding via SMS costs money to the PCV and requires that they have enough credit on their phones to send an SMS to a US number. This can result in an annoying experience if multiple sms messages are sent and the PCV *can't* respond via sms. So ideally we would limit the amount of SMS asks we send to them and nudge them towards responding on the web if possible.
*Logic*-
This Pop-up alert appears upon login for PCVs who have an outstanding order that has been responded to indicating approval for pick-up or delivery. Note- a pop-up should not appear for responses that have already been marked as received or flagged.

The content for the pop-up includes the text - Did you get your order of {supply 1, supply 2 and supply 3...} that was approved on {response date}? There should be two clean buttons one green with Yes and one Red with No. There should also be the ability to dismiss the message.
The pop up should appear 14 calendar days after the response was sent to them. Same time of day as the time of the response.
If the PCV indicates Yes - The response is marked as received and archived.
If the PCV indicates No - The response is flagged but not archived.
If the PCV dismisses the pop-up - The pop-up should disappear but be triggered again on next log-in. | 1.0 | Receipt Confirmation Pop-up on web - _This function triggers a pop-up window asking PCV to confirm receipt of an outstanding order upon first log-in._
*Reasoning*- Right now we ask for confirmation of receipt through sms or through a semi-complicated web process. Responding via SMS costs money to the PCV and requires that they have enough credit on their phones to send an SMS to a US number. This can result in an annoying experience if multiple sms messages are sent and the PCV *can't* respond via sms. So ideally we would limit the amount of SMS asks we send to them and nudge them towards responding on the web if possible.
*Logic*-
This Pop-up alert appears upon login for PCVs who have an outstanding order that has been responded to indicating approval for pick-up or delivery. Note- a pop-up should not appear for responses that have already been marked as received or flagged.

The content for the pop-up includes the text - Did you get your order of {supply 1, supply 2 and supply 3...} that was approved on {response date}? There should be two clean buttons one green with Yes and one Red with No. There should also be the ability to dismiss the message.
The pop up should appear 14 calendar days after the response was sent to them. Same time of day as the time of the response.
If the PCV indicates Yes - The response is marked as received and archived.
If the PCV indicates No - The response is flagged but not archived.
If the PCV dismisses the pop-up - The pop-up should disappear but be triggered again on next log-in. | priority | receipt confirmation pop up on web this function triggers a pop up window asking pcv to confirm receipt of an outstanding order upon first log in reasoning right now we ask for confirmation of receipt through sms or through a semi complicated web process responding via sms costs money to the pcv and requires that they have enough credit on their phones to send an sms to a us number this can result in an annoying experience if multiple sms messages are sent and the pcv can t respond via sms so ideally we would limit the amount of sms asks we send to them and nudge them towards responding on the web if possible logic this pop up alert appears upon login for pcv s who have an outstanding order that has been responded to indicating approval for pick up or delivery note an pop up should not appear for responses that have already been marked as received or flagged the content for the pop up includes the text did you get your order of supply supply and supply that was approved on response date there should be two clean buttons one green with yes and one red with no there should also be the ability to dismiss the message the pop up should appear calendar days after the response was sent to them same time of day as the time of the response if the pcv indicates yes the response is marked as received and archived if the pcv indicates no the response is flagged but not archived if the pcv dismisses the pop up the pop up should disappear but be triggered again on next log in | 1 |
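The trigger rules above (fire 14 calendar days after the response, at the same time of day, and only when the order is neither received nor flagged) can be sketched as a pure function. The field names here are assumptions for illustration, not the app's real schema:

```python
from datetime import datetime, timedelta

def should_show_receipt_popup(response, now):
    """Return True if the confirmation pop-up should appear on this login."""
    if response.get("received") or response.get("flagged"):
        return False                      # already confirmed or flagged: no pop-up
    # Fire 14 calendar days after the response, same time of day.
    return now >= response["responded_at"] + timedelta(days=14)
```

Dismissing the pop-up simply leaves the record untouched, so the same predicate fires again on the next log-in.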
201,488 | 22,972,497,447 | IssuesEvent | 2022-07-20 05:26:25 | smb-h/nn-lab | https://api.github.com/repos/smb-h/nn-lab | closed | CVE-2022-29216 (High) detected in tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl - autoclosed | security vulnerability | ## CVE-2022-29216 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/73/a3/142f73d0e076f5582fd8da29c68af0413bf529933eed09f86a8857fab0d6/tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/73/a3/142f73d0e076f5582fd8da29c68af0413bf529933eed09f86a8857fab0d6/tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/smb-h/nn-lab/commit/977293e8b3e6b1a0183210a2c32c01f32c53dd6c">977293e8b3e6b1a0183210a2c32c01f32c53dd6c</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, TensorFlow's `saved_model_cli` tool is vulnerable to a code injection. This can be used to open a reverse shell. This code path was maintained for compatibility reasons as the maintainers had several test cases where numpy expressions were used as arguments. However, given that the tool is always run manually, the impact of this is still not severe. The maintainers have now removed the `safe=False` argument, so all parsing is done without calling `eval`. The patch is available in versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4.
<p>Publish Date: 2022-05-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29216>CVE-2022-29216</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216</a></p>
<p>Release Date: 2022-05-21</p>
<p>Fix Resolution: tensorflow - 2.6.4,2.7.2,2.8.1,2.9.0;tensorflow-cpu - 2.6.4,2.7.2,2.8.1,2.9.0;tensorflow-gpu - 2.6.4,2.7.2,2.8.1,2.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-29216 (High) detected in tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl - autoclosed - ## CVE-2022-29216 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/73/a3/142f73d0e076f5582fd8da29c68af0413bf529933eed09f86a8857fab0d6/tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/73/a3/142f73d0e076f5582fd8da29c68af0413bf529933eed09f86a8857fab0d6/tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/smb-h/nn-lab/commit/977293e8b3e6b1a0183210a2c32c01f32c53dd6c">977293e8b3e6b1a0183210a2c32c01f32c53dd6c</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, TensorFlow's `saved_model_cli` tool is vulnerable to a code injection. This can be used to open a reverse shell. This code path was maintained for compatibility reasons as the maintainers had several test cases where numpy expressions were used as arguments. However, given that the tool is always run manually, the impact of this is still not severe. The maintainers have now removed the `safe=False` argument, so all parsing is done without calling `eval`. The patch is available in versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4.
<p>Publish Date: 2022-05-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29216>CVE-2022-29216</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216</a></p>
<p>Release Date: 2022-05-21</p>
<p>Fix Resolution: tensorflow - 2.6.4,2.7.2,2.8.1,2.9.0;tensorflow-cpu - 2.6.4,2.7.2,2.8.1,2.9.0;tensorflow-gpu - 2.6.4,2.7.2,2.8.1,2.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in tensorflow whl autoclosed cve high severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file requirements txt path to vulnerable library requirements txt dependency hierarchy x tensorflow whl vulnerable library found in head commit a href found in base branch main vulnerability details tensorflow is an open source platform for machine learning prior to versions and tensorflow s saved model cli tool is vulnerable to a code injection this can be used to open a reverse shell this code path was maintained for compatibility reasons as the maintainers had several test cases where numpy expressions were used as arguments however given that the tool is always run manually the impact of this is still not severe the maintainers have now removed the safe false argument so all parsing is done without calling eval the patch is available in versions and publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with mend | 0 |
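The fix described in the advisory above (removing the `safe=False` path so arguments never reach `eval`) follows a general pattern: parse user-supplied literals with `ast.literal_eval`, which accepts plain data but rejects arbitrary expressions. A minimal illustration of that pattern, not the actual `saved_model_cli` code:

```python
import ast

def parse_literal(arg):
    """Parse a user-supplied argument as a Python literal only.

    Unlike eval(), ast.literal_eval() raises on function calls and other
    expressions, so input like "__import__('os').system(...)" cannot execute.
    """
    return ast.literal_eval(arg)
```

Lists, dicts, numbers, and strings still parse fine; anything that would run code raises instead.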
207,017 | 7,123,777,749 | IssuesEvent | 2018-01-19 16:27:56 | striblab/super-bowl-guide | https://api.github.com/repos/striblab/super-bowl-guide | closed | Map links (i.e. with ?view=) seems to do a page reload | high priority | Unsure if this is a caching issue, or ios, or what. | 1.0 | Map links (i.e. with ?view=) seems to do a page reload - Unsure if this is a caching issue, or ios, or what. | priority | map links i e with view seems to do a page reload unsure if this is a caching issue or ios or what | 1 |
485,365 | 13,964,535,978 | IssuesEvent | 2020-10-25 18:32:45 | Mafrans/StadiaPlus | https://api.github.com/repos/Mafrans/StadiaPlus | closed | Forced codec unavailable message wrong | bug low priority | The message `Forced Codec is not available when running in 4K` doesn't include 1440p, but is still locked when 1440p is selected | 1.0 | Forced codec unavailable message wrong - The message `Forced Codec is not available when running in 4K` doesn't include 1440p, but is still locked when 1440p is selected | priority | forced codec unavailable message wrong the message forced codec is not available when running in doesn t include but is still locked when is selected | 1 |
1,302 | 3,845,808,212 | IssuesEvent | 2016-04-05 00:06:53 | moxie-leean/ng-cms | https://api.github.com/repos/moxie-leean/ng-cms | closed | Add a default route to home | process | When the browser points to base_url/ (not base_url/#/), it will display not-found instead of home view. This (or similar) should be added to the router to fix it:
```$urlRouterProvider.when('', '/');``` | 1.0 | Add a default route to home - When the browser points to base_url/ (not base_url/#/), it will display not-found instead of home view. This (or similar) should be added to the router to fix it:
```$urlRouterProvider.when('', '/');``` | non_priority | add a default route to home when the browser points to base url not base url it will display not found instead of home view this or similar should be added to the router to fix it urlrouterprovider when | 0 |
42,034 | 17,015,456,459 | IssuesEvent | 2021-07-02 11:20:58 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Incorrect statement about the validity period of Cluster CA certificates | Pri2 awaiting-product-team-response container-service/svc cxp product-question triaged | The text in the purple frame on this azure-docs page says:
```
AKS clusters created prior to March 2019 have certificates that expire after two years.
Any cluster created after March 2019 or any cluster that has its certificates rotated have
Cluster CA certificates that expire after 30 years.
[...]
you can check the expiration date of your cluster's certificate. For example, the following
Bash command displays the certificate details for the myAKSCluster cluster.
kubectl config view --raw -o jsonpath="{.clusters[?(@.name == 'myAKSCluster')].cluster.certificate-authority-data}" | base64 -d | openssl x509 -text | grep -A2 Validity
```
Our company owns an AKS cluster created on May 20, 2019 (which is obviously after March 2019). The output of the above command looks like this:
```
[ADMIN]% kubectl config view --raw -o jsonpath="{.clusters[?(@.name == 'ace_cluster')].cluster.certificate-authority-data}" | base64 -d | openssl x509 -text | grep -A2 Validity
Validity
Not Before: May 20 08:05:47 2019 GMT
Not After : May 12 08:15:47 2049 GMT
```
_(Note that the expiration date is not quite _30 calendar years_ into the future, but rather _30 * 365 days_ after the creation. For discussions regarding this, see issue https://github.com/MicrosoftDocs/azure-docs/issues/75550 .)_
However, on May 19 08:15:47 2021 GMT our AKS cluster certificate stopped working, exactly 2 * 365 days after its creation!
We were forced to recycle the certificate as soon as possible, giving very short notice to our clients.
This caused us major embarrassment with our clients and was a big inconvenience for some of them.
Now, here is the output from the command above for the _new_ certificate:
```
[ADMIN]% kubectl config view --raw -o jsonpath="{.clusters[?(@.name == 'ace_cluster')].cluster.certificate-authority-data}" | base64 -d | openssl x509 -text | grep -A2 Validity
Validity
Not Before: May 19 13:20:33 2021 GMT
Not After : May 19 13:30:33 2051 GMT
```
At first glance, one might be fooled into thinking that we now finally have a 30-year certificate on our cluster!
But, alas, the certificate's validity is still only _two years_, as shown by this call to `openssl -showcerts`:
```
[ADMIN]% openssl s_client -showcerts -connect maze-prod-9e852cf7.hcp.westeurope.azmk8s.io:443 2>/dev/null | openssl x509 -noout -dates
notBefore=May 19 13:20:33 2021 GMT
notAfter=May 19 13:30:33 2023 GMT
```
To summarize: It appears that the 30-year convention described in the purple frame on the certificate-rotation documentation page is not actually active, but AKS still asserts that it is, when queried about a specific certificate. So what is going on here?
And is there any way to make a cluster certificate valid for more than two years at a time?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 9b78509d-73eb-6c32-aee1-9b34685a64ff
* Version Independent ID: 6f87cf70-1f5a-0f9e-862d-ae1436595f3b
* Content: [Rotate certificates in Azure Kubernetes Service (AKS) - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/certificate-rotation)
* Content Source: [articles/aks/certificate-rotation.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/certificate-rotation.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned** | 1.0 | Incorrect statement about the validity period of Cluster CA certificates - The text in the purple frame on this azure-docs page says:
```
AKS clusters created prior to March 2019 have certificates that expire after two years.
Any cluster created after March 2019 or any cluster that has its certificates rotated have
Cluster CA certificates that expire after 30 years.
[...]
you can check the expiration date of your cluster's certificate. For example, the following
Bash command displays the certificate details for the myAKSCluster cluster.
kubectl config view --raw -o jsonpath="{.clusters[?(@.name == 'myAKSCluster')].cluster.certificate-authority-data}" | base64 -d | openssl x509 -text | grep -A2 Validity
```
Our company owns an AKS cluster created on May 20, 2019 (which is obviously after March 2019). The output of the above command looks like this:
```
[ADMIN]% kubectl config view --raw -o jsonpath="{.clusters[?(@.name == 'ace_cluster')].cluster.certificate-authority-data}" | base64 -d | openssl x509 -text | grep -A2 Validity
Validity
Not Before: May 20 08:05:47 2019 GMT
Not After : May 12 08:15:47 2049 GMT
```
_(Note that the expiration date is not quite _30 calendar years_ into the future, but rather _30 * 365 days_ after the creation. For discussions regarding this, see issue https://github.com/MicrosoftDocs/azure-docs/issues/75550 .)_
However, on May 19 08:15:47 2021 GMT our AKS cluster certificate stopped working, exactly 2 * 365 days after its creation!
We were forced to recycle the certificate as soon as possible, giving very short notice to our clients.
This caused us major embarrassment with our clients and was a big inconvenience for some of them.
Now, here is the output from the command above for the _new_ certificate:
```
[ADMIN]% kubectl config view --raw -o jsonpath="{.clusters[?(@.name == 'ace_cluster')].cluster.certificate-authority-data}" | base64 -d | openssl x509 -text | grep -A2 Validity
Validity
Not Before: May 19 13:20:33 2021 GMT
Not After : May 19 13:30:33 2051 GMT
```
At first glance, one might be fooled into thinking that we now finally have a 30-year certificate on our cluster!
But, alas, the certificate's validity is still only _two years_, as shown by this call to `openssl -showcerts`:
```
[ADMIN]% openssl s_client -showcerts -connect maze-prod-9e852cf7.hcp.westeurope.azmk8s.io:443 2>/dev/null | openssl x509 -noout -dates
notBefore=May 19 13:20:33 2021 GMT
notAfter=May 19 13:30:33 2023 GMT
```
To summarize: It appears that the 30-year convention described in the purple frame on the certificate-rotation documentation page is not actually active, but AKS still asserts that it is, when queried about a specific certificate. So what is going on here?
And is there any way to make a cluster certificate valid for more than two years at a time?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 9b78509d-73eb-6c32-aee1-9b34685a64ff
* Version Independent ID: 6f87cf70-1f5a-0f9e-862d-ae1436595f3b
* Content: [Rotate certificates in Azure Kubernetes Service (AKS) - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/certificate-rotation)
* Content Source: [articles/aks/certificate-rotation.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/certificate-rotation.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned** | non_priority | incorrect statement about the validity period of cluster ca certificates the text in the purple frame on this azure docs page says aks clusters created prior to march have certificates that expire after two years any cluster created after march or any cluster that has its certificates rotated have cluster ca certificates that expire after years you can check the expiration date of your cluster s certificate for example the following bash command displays the certificate details for the myakscluster cluster kubectl config view raw o jsonpath clusters cluster certificate authority data d openssl text grep validity our company owns an aks cluster created on may which is obviously after march the output of the above command looks liike this kubectl config view raw o jsonpath clusters cluster certificate authority data d openssl text grep validity validity not before may gmt not after may gmt note that the expiration date is not quite calendar years into the future but rather days after the creation for discussions regarding this see issue however on may gmt our aks cluster certificate stopped working exactly days after its creation we were forced to recycle the certificate as soon as possible giving very short notice to our clients this caused us major embarassment with our clients and was a big inconvenience for some of them now here is the output from the command above for the new certificate kubectl config view raw o jsonpath clusters cluster certificate authority data d openssl text grep validity validity not before may gmt not after may gmt at first glance one might be fooled into thinking that we now finally have a year certificate on our cluster but alas the certificate s validity is still only two years as shown by this call to openssl showcerts openssl s client showcerts connect maze prod hcp westeurope io dev null openssl noout dates notbefore may gmt notafter may gmt to summarize it appears that the year convention described in the purple frame on the certificate rotation documentation page is not actually active but aks still asserts that it is when queried about a specific certificate so what is going on here and is there any way to make a cluster certificate valid for more than two years at a time document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service container service github login mlearned microsoft alias mlearned | 0 |
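The two-years-versus-thirty-years observation in the record above is easy to verify with date arithmetic: the CA certificate window quoted in the issue spans exactly 30 * 365 days, while the serving certificate reported by `openssl s_client` spans 2 * 365 days. The dates below are taken from the openssl output quoted in the issue:

```python
from datetime import datetime

def validity_days(not_before, not_after):
    """Number of whole days between a certificate's notBefore and notAfter."""
    return (not_after - not_before).days

# CA cert: Not Before May 20 08:05:47 2019 GMT, Not After May 12 08:15:47 2049 GMT
ca_days = validity_days(datetime(2019, 5, 20, 8, 5, 47),
                        datetime(2049, 5, 12, 8, 15, 47))

# Serving cert: notBefore May 19 13:20:33 2021 GMT, notAfter May 19 13:30:33 2023 GMT
serving_days = validity_days(datetime(2021, 5, 19, 13, 20, 33),
                             datetime(2023, 5, 19, 13, 30, 33))
```

This makes the mismatch concrete: the "30-year" window is 365-day years (hence the May 12 end date), and the certificate actually served is still a two-times-365-day one.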
279,982 | 8,676,650,263 | IssuesEvent | 2018-11-30 14:43:19 | BSc-Computer-Science/rentacar-internal-manager | https://api.github.com/repos/BSc-Computer-Science/rentacar-internal-manager | closed | Add CRUD feature | priority-5 | A user must be able to create/edit/remove a specific vehicle. Although the screen itself is finished, the request is not being made yet. | 1.0 | Add CRUD feature - A user must be able to create/edit/remove a specific vehicle. Although the screen itself is finished, the request is not being made yet. | priority | add crud feature a user must be able to create edit remove a specific vehicle although the screen itself is finished the request is not being made yet | 1 |
329,812 | 10,024,795,732 | IssuesEvent | 2019-07-16 23:14:00 | Sage-Bionetworks/dccvalidator-app | https://api.github.com/repos/Sage-Bionetworks/dccvalidator-app | closed | Upload files to private area in Synapse | high priority | App should create a folder for the researcher in some staging/cache area in Synapse. This folder should be accessible only to the AMP-AD curation team (and the user themself). Each time the user uploads data to the app, it should upload to this folder behind the scenes (either as a new file or as a new version of an existing file, as appropriate).
This will allow the curation team to keep track of progress/status of pending data. Importantly, this is _not_ the way the user will ultimately upload their data, just a way for us to check progress without keeping files hanging around on the shiny server. When they get the green light, they will still upload using the manifest. | 1.0 | Upload files to private area in Synapse - App should create a folder for the researcher in some staging/cache area in Synapse. This folder should be accessible only to the AMP-AD curation team (and the user themself). Each time the user uploads data to the app, it should upload to this folder behind the scenes (either as a new file or as a new version of an existing file, as appropriate).
This will allow the curation team to keep track of progress/status of pending data. Importantly, this is _not_ the way the user will ultimately upload their data, just a way for us to check progress without keeping files hanging around on the shiny server. When they get the green light, they will still upload using the manifest. | priority | upload files to private area in synapse app should create a folder for the researcher in some staging cache area in synapse this folder should be accessible only to the amp ad curation team and the user themself each time the user uploads data to the app it should upload to this folder behind the scenes either as a new file or as a new version of an existing file as appropriate this will allow the curation team to keep track of progress status of pending data importantly this is not the way the user will ultimately upload their data just a way for us to check progress without keeping files hanging around on the shiny server when they get the green light they will still upload using the manifest | 1 |
285,146 | 8,755,152,201 | IssuesEvent | 2018-12-14 14:03:05 | bio-tools/biotoolsRegistry | https://api.github.com/repos/bio-tools/biotoolsRegistry | closed | Entries that have EDAM terms with no uri | bug content critical priority fix verified | As discussed, there are some entries (mainly coming from EMBOSS) that have EDAM terms with no URI which is invisible to users in the tool card (e.g. https://bio.tools/api/cutseq and https://bio.tools/api/banana). | 1.0 | Entries that have EDAM terms with no uri - As discussed, there are some entries (mainly coming from EMBOSS) that have EDAM terms with no URI which is invisible to users in the tool card (e.g. https://bio.tools/api/cutseq and https://bio.tools/api/banana). | priority | entries that have edam terms with no uri as discussed there are some entries mainly coming from emboss that have edam terms with no uri which is invisible to users in the tool card e g and | 1 |
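Entries like the ones named above can be caught mechanically by checking that every EDAM annotation carries a `uri`. A hedged sketch of such a check; the entry shape (`topic` and `operation` lists of `{term, uri}` dicts) is an assumption loosely based on the bio.tools JSON, not a guaranteed schema:

```python
def terms_missing_uri(entry):
    """Return the labels of EDAM terms in a tool entry that lack a 'uri' field."""
    missing = []
    for term in entry.get("topic", []) + entry.get("operation", []):
        if not term.get("uri"):
            missing.append(term.get("term", "<unnamed>"))
    return missing
```

Running this over all registry entries would surface the EMBOSS-derived records with invisible terms.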
296,383 | 9,114,665,293 | IssuesEvent | 2019-02-22 01:16:31 | DancesportSoftware/das-frontend | https://api.github.com/repos/DancesportSoftware/das-frontend | opened | Refresh Firebase Auth Token | Priority 2: Medium enhancement | Currently, the front-end does not auto-refresh user's authentication token when the token is expired. When the token is expired, the user will have to re-login before the user can perform any account-related activities on DAS. The authentication middleware and the auth store should together handle refreshing the auth token once it expires. | 1.0 | Refresh Firebase Auth Token - Currently, the front-end does not auto-refresh user's authentication token when the token is expired. When the token is expired, the user will have to re-login before the user can perform any account-related activities on DAS. The authentication middleware and the auth store should together handle refreshing the auth token once it expires. | priority | refresh firebase auth token currently the front end does not auto refresh user s authentication token when the token is expired when the token is expired the user will have to re login before the user can perform any account related activities on das the authentication middleware and the auth store should together handle refreshing the auth token once it expires | 1 |
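The middleware behaviour described above amounts to: before each authenticated call, refresh the token if it has expired instead of forcing a re-login. A framework-agnostic sketch; the `refresh` callback stands in for Firebase's real refresh API and the field names are illustrative:

```python
import time

class TokenStore:
    """Holds an auth token and refreshes it transparently once it expires."""

    def __init__(self, token, expires_at, refresh):
        self.token = token
        self.expires_at = expires_at
        self._refresh = refresh           # callable returning (token, expires_at)

    def get_token(self, now=None):
        now = time.time() if now is None else now
        if now >= self.expires_at:        # expired: refresh instead of failing
            self.token, self.expires_at = self._refresh()
        return self.token
```

The auth middleware would call `get_token()` per request, so account-related actions keep working across token expiry.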
203,138 | 15,351,119,272 | IssuesEvent | 2021-03-01 04:16:45 | hashgraph/hedera-services | https://api.github.com/repos/hashgraph/hedera-services | closed | Add HTS operations to UmbrellaRedux | Test Development | **Summary**
Inject random combinations of HTS operations in the `UmbrellaRedux` test suite.
- [x] `TokenCreate`
- [x] `TokenFreezeAccount`
- [x] `TokenUnfreezeAccount`
- [x] `TokenGrantKycToAccount`
- [x] `TokenRevokeKycFromAccount`
- [x] `TokenDelete`
- [x] `TokenMint`
- [x] `TokenBurn`
- [x] `TokenAccountWipe`
- [x] `TokenUpdate`
- [x] `TokenGetInfo`
- [x] `TokenAssociateToAccount`
- [x] `TokenDissociateFromAccount`
- [x] `CryptoTransfer` w/ tokens
**Possible resolution**
- Write `OpProvider` implementations that generate the above operations.
- Add instances of these implementations to the `RegressionProviderFactory`.
- Update e.g. _regression-mixed_ops.properties_ to give reasonable biases toward the new `OpProvider`s.
| 1.0 | Add HTS operations to UmbrellaRedux - **Summary**
Inject random combinations of HTS operations in the `UmbrellaRedux` test suite.
- [x] `TokenCreate`
- [x] `TokenFreezeAccount`
- [x] `TokenUnfreezeAccount`
- [x] `TokenGrantKycToAccount`
- [x] `TokenRevokeKycFromAccount`
- [x] `TokenDelete`
- [x] `TokenMint`
- [x] `TokenBurn`
- [x] `TokenAccountWipe`
- [x] `TokenUpdate`
- [x] `TokenGetInfo`
- [x] `TokenAssociateToAccount`
- [x] `TokenDissociateFromAccount`
- [x] `CryptoTransfer` w/ tokens
**Possible resolution**
- Write `OpProvider` implementations that generate the above operations.
- Add instances of these implementations to the `RegressionProviderFactory`.
- Update e.g. _regression-mixed_ops.properties_ to give reasonable biases toward the new `OpProvider`s.
| non_priority | add hts operations to umbrellaredux summary inject random combinations of hts operations in the umbrellaredux test suite tokencreate tokenfreezeaccount tokenunfreezeaccount tokengrantkyctoaccount tokenrevokekycfromaccount tokendelete tokenmint tokenburn tokenaccountwipe tokenupdate tokengetinfo tokenassociatetoaccount tokendissociatefromaccount cryptotransfer w tokens possible resolution write opprovider implementations that generate the above operations add instances of these implementations to the regressionproviderfactory update e g regression mixed ops properties to give reasonable biases toward the new opprovider s | 0 |
666,737 | 22,365,752,930 | IssuesEvent | 2022-06-16 03:44:36 | oasis-engine/engine | https://api.github.com/repos/oasis-engine/engine | closed | CharacterController from PhysX | feature physical high priority | CharacterController is a kind of component from PhysX which will not respond to Newton's Law but the Input Controller.
| 1.0 | CharacterController from PhysX - CharacterController is a kind of component from PhysX which will not respond to Newton's Law but the Input Controller.
| priority | charactercontroller from physx charactercontroller is a kind of component from physx which will not respond to newton s law but the input controller | 1 |
100,850 | 16,490,519,550 | IssuesEvent | 2021-05-25 02:33:02 | turkdevops/grafana | https://api.github.com/repos/turkdevops/grafana | opened | CVE-2021-23383 (High) detected in handlebars-4.4.3.tgz | security vulnerability | ## CVE-2021-23383 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.4.3.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.4.3.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.4.3.tgz</a></p>
<p>Path to dependency file: grafana/yarn.lock</p>
<p>Path to vulnerable library: grafana/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- jest-24.8.0.tgz (Root Library)
- jest-cli-24.9.0.tgz
- core-24.9.0.tgz
- reporters-24.9.0.tgz
- istanbul-reports-2.2.6.tgz
- :x: **handlebars-4.4.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>datasource-meta</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 are vulnerable to Prototype Pollution when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23383>CVE-2021-23383</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: handlebars - v4.7.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-23383 (High) detected in handlebars-4.4.3.tgz - ## CVE-2021-23383 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.4.3.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.4.3.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.4.3.tgz</a></p>
<p>Path to dependency file: grafana/yarn.lock</p>
<p>Path to vulnerable library: grafana/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- jest-24.8.0.tgz (Root Library)
- jest-cli-24.9.0.tgz
- core-24.9.0.tgz
- reporters-24.9.0.tgz
- istanbul-reports-2.2.6.tgz
- :x: **handlebars-4.4.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>datasource-meta</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 are vulnerable to Prototype Pollution when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23383>CVE-2021-23383</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: handlebars - v4.7.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in handlebars tgz cve high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file grafana yarn lock path to vulnerable library grafana node modules handlebars package json dependency hierarchy jest tgz root library jest cli tgz core tgz reporters tgz istanbul reports tgz x handlebars tgz vulnerable library found in base branch datasource meta vulnerability details the package handlebars before are vulnerable to prototype pollution when selecting certain compiling options to compile templates coming from an untrusted source publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource | 0 |
40,594 | 20,984,024,829 | IssuesEvent | 2022-03-28 23:43:06 | microsoft/STL | https://api.github.com/repos/microsoft/STL | closed | `<format>`: Could the output of `__std_get_cvt` be cached? | performance format | #2493 changed the constructor of `_Fmt_codec_base<false>` to always call `__std_get_cvt` with a codepage representing the execution character set (`_MSVC_EXECUTION_CHARACTER_SET` or `65001`). Since the execution character set does not change for the lifetime of a program, could the information be cached?
This would save a lot of calls to `GetCPInfoExW` when the execution character set is not self-synchronizing. | True | `<format>`: Could the output of `__std_get_cvt` be cached? - #2493 changed the constructor of `_Fmt_codec_base<false>` to always call `__std_get_cvt` with a codepage representing the execution character set (`_MSVC_EXECUTION_CHARACTER_SET` or `65001`). Since the execution character set does not change for the lifetime of a program, could the information be cached?
This would save a lot of calls to `GetCPInfoExW` when the execution character set is not self-synchronizing. | non_priority | could the output of std get cvt be cached changed the constructor of fmt codec base to always call std get cvt with a codepage representing the execution character set msvc execution character set or since the execution character set does not change for the lifetime of a program could the information be cached this would save a lot of calls to getcpinfoexw when the execution character set is not self synchronizing | 0 |
188,024 | 6,767,491,582 | IssuesEvent | 2017-10-26 03:43:00 | CS2103AUG2017-T13-B1/main | https://api.github.com/repos/CS2103AUG2017-T13-B1/main | closed | Completing the autofill feature | Priority: Medium Status: Completed | - To include DOB class in the autofill
- Complete tests (stub and system)
- Update documentation | 1.0 | Completing the autofill feature - - To include DOB class in the autofill
- Complete tests (stub and system)
- Update documentation | priority | completing the autofill feature to include dob class in the autofill complete tests stub and system update documentation | 1 |
122,840 | 4,846,067,422 | IssuesEvent | 2016-11-10 10:29:25 | tiliado/nuvolaplayer | https://api.github.com/repos/tiliado/nuvolaplayer | closed | Add MPRIS rating | postponed priority low wishlist | From [Launchpad bug #1064245](https://bugs.launchpad.net/nuvola-player/+bug/1064245) by Jonas Frei:
When using gnome-shell and the media player extension, there is the possibility to get/set ratings with the media player extension (but its not part of the MPRIS interface AFAIK).
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/6814777-add-mpris-rating?utm_campaign=plugin&utm_content=tracker%2F2538125&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F2538125&utm_medium=issues&utm_source=github).
</bountysource-plugin>
| 1.0 | Add MPRIS rating - From [Launchpad bug #1064245](https://bugs.launchpad.net/nuvola-player/+bug/1064245) by Jonas Frei:
When using gnome-shell and the media player extension, there is the possibility to get/set ratings with the media player extension (but its not part of the MPRIS interface AFAIK).
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/6814777-add-mpris-rating?utm_campaign=plugin&utm_content=tracker%2F2538125&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F2538125&utm_medium=issues&utm_source=github).
</bountysource-plugin>
| priority | add mpris rating from by jonas frei when using gnome shell and the media player extension there is the possibility to get set ratings with the media player extension but its not part of the mpris interface afaik want to back this issue we accept bounties via | 1 |
225,296 | 24,823,812,815 | IssuesEvent | 2022-10-25 18:45:29 | gravitational/gravity | https://api.github.com/repos/gravitational/gravity | closed | Move to distroless containers | kind/security fedramp/ca-7 | We should move to distroless containers for our system applications that will be a part of FISMA compliant builds. | True | Move to distroless containers - We should move to distroless containers for our system applications that will be a part of FISMA compliant builds. | non_priority | move to distroless containers we should move to distroless containers for our system applications that will be a part of fisma compliant builds | 0 |
432,332 | 12,491,313,886 | IssuesEvent | 2020-06-01 03:35:39 | Badwater-Apps/github-label-manager-2 | https://api.github.com/repos/Badwater-Apps/github-label-manager-2 | closed | Redesign UI: Single-column design | complexity: 3/5 lang: html lang: javascript priority: high status: in progress type: enhancement type: feature request | # Issue
Re-do UI: Single-column design
## New UI
- Single column
- Card organization from top to bottom:
- "Login" card, the one that contains input fields of repo owner, repo, username, and personal access token
- "Copy from other repos" card
- "Labels/Milestones/FAQ management card
This new single-column UI will increase the usable space on the website.
Many elements will have a greater width.
For example, cards of labels and milestones will be wider. It will be possible to fit label name, label description, and label color into one line.
Milestone can still take up 2 lines.
You have some room in designing the UI.
| 1.0 | Redesign UI: Single-column design - # Issue
Re-do UI: Single-column design
## New UI
- Single column
- Card organization from top to bottom:
- "Login" card, the one that contains input fields of repo owner, repo, username, and personal access token
- "Copy from other repos" card
- "Labels/Milestones/FAQ management card
This new single-column UI will increase the usable space on the website.
Many elements will have a greater width.
For example, cards of labels and milestones will be wider. It will be possible to fit label name, label description, and label color into one line.
Milestone can still take up 2 lines.
You have some room in designing the UI.
| priority | redesign ui single column design issue re do ui single column design new ui single column card organization from top to bottom login card the one that contains input fields of repo owner repo username and personal access token copy from other repos card labels milestones faq management card this new single column ui will increase the usable space on the website many elements will have a greater width for example cards of labels and milestones will be wider it will be possible to fit label name label description and label color into one line milestone can still take up lines you have some room in designing the ui | 1 |
79,782 | 9,954,492,237 | IssuesEvent | 2019-07-05 08:33:26 | known-project/mobile-application | https://api.github.com/repos/known-project/mobile-application | opened | login page in action + save cookie | component-design feature server-connection | * check if any cookie avail
* check cookie is valid
* if not, show login form
* get new cookie with this user pass
* save new cookie to permanent storage
* go to time table page | 1.0 | login page in action + save cookie - * check if any cookie avail
* check cookie is valid
* if not, show login form
* get new cookie with this user pass
* save new cookie to permanent storage
* go to time table page | non_priority | login page in action save cookie check if any cookie avail check cookie is valid if not show login form get new cookie with this user pass save new cookie to permanent storage go to time table page | 0 |
506,880 | 14,675,077,626 | IssuesEvent | 2020-12-30 16:44:09 | tellor-io/telliot | https://api.github.com/repos/tellor-io/telliot | closed | Refactor to remove `NodeURL` and `PrivateKey` from the config struct | difficulty: good first issue help wanted priority: medium type: clean up | I thought we removed these in https://github.com/tellor-io/telliot/pull/317, but it seems I overlooked this with my review. If we have this in the config struct this will still allow people to set it without an env file and later when we enable strict parsing this is error prone and not detirministic.
Instead, we need to completely remove it from the config struct and just load these when needed from the `EnvFile` config file path.
Tl-Dr - remove these https://github.com/tellor-io/telliot/blob/master/pkg/config/config.go#L61 , https://github.com/tellor-io/telliot/blob/master/pkg/config/config.go#L92 | 1.0 | Refactor to remove `NodeURL` and `PrivateKey` from the config struct - I thought we removed these in https://github.com/tellor-io/telliot/pull/317, but it seems I overlooked this with my review. If we have this in the config struct this will still allow people to set it without an env file and later when we enable strict parsing this is error prone and not detirministic.
Instead, we need to completely remove it from the config struct and just load these when needed from the `EnvFile` config file path.
Tl-Dr - remove these https://github.com/tellor-io/telliot/blob/master/pkg/config/config.go#L61 , https://github.com/tellor-io/telliot/blob/master/pkg/config/config.go#L92 | priority | refactor to remove nodeurl and privatekey from the config struct i thought we removed these in but it seems i overlooked this with my review if we have this in the config struct this will still allow people to set it without an env file and later when we enable strict parsing this is error prone and not detirministic instead we need to completely remove it from the config struct and just load these when needed from the envfile config file path tl dr remove these | 1 |