| Unnamed: 0 (int64) | id (float64) | type (1 class) | created_at (string, len 19) | repo (string, len 5-112) | repo_url (string, len 34-141) | action (3 classes) | title (string, len 1-1k) | labels (string, len 4-1.38k) | body (string, len 1-262k) | index (16 classes) | text_combine (string, len 96-262k) | label (2 classes) | text (string, len 96-252k) | binary_label (int64, 0/1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
109,472 | 13,777,745,700 | IssuesEvent | 2020-10-08 11:25:37 | CatalogueOfLife/clearinghouse-ui | https://api.github.com/repos/CatalogueOfLife/clearinghouse-ui | closed | Reduce size of source dataset in public tree | design tiny | When there is a single source dataset for a taxon in the public tree, the source is shown in the same font size as the name and it convolutes the tree of names:
<img width="1199" alt="Screenshot 2020-10-07 at 13 11 06" src="https://user-images.githubusercontent.com/327505/95323529-9d30ad00-089e-11eb-94b0-087936e30085.png">
Use the smaller font size applied when there are multiple sources consistently also for a single source | 1.0 | [text_combine: duplicate of title + body] | non_priority | [text: normalized duplicate] | 0 |
259,892 | 8,201,226,191 | IssuesEvent | 2018-09-01 15:10:37 | clearlydefined/website | https://api.github.com/repos/clearlydefined/website | closed | Details view | high-priority | Need a reusable component that shows the full detail of a definition. It should optionally allow editing of the values in definition in a way that is shareable with other components. That is, the detail component described here does not "own" the definition. This component will show up in two primary places/ways
1. as a popup/overlay on the main Definitions page when the user wants to dive into a given definition in the list
1. as an independent page with a deep link URL allowing users to talk about definitions and reference them easily
A mockup of the page is in the jm/detailsPage branch. This is just a **mockup** and is not intended to be the implementation. There are tons of duplicate code, etc.
Some key points:
* the details component should be standalone so it can be used in page, an overlay, ...
* Over time we will undoubtedly redesign the layout.
* As the user progresses from the two-line entry to the expanded panel to the full details view of the component, the experience should be smooth, with visual and content continuity. They are just looking at more data about the same thing, so the data they have already seen should be similar.
There are a few new components and related features that will show up here
* Facet picker (https://github.com/clearlydefined/website/issues/202)
* File List (https://github.com/clearlydefined/website/issues/191)
* Editing (https://github.com/clearlydefined/website/issues/196)
| 1.0 | [text_combine: duplicate of title + body] | priority | [text: normalized duplicate] | 1 |
19,711 | 10,418,375,554 | IssuesEvent | 2019-09-15 08:02:10 | tiarebalbi/tiarebalbi-website | https://api.github.com/repos/tiarebalbi/tiarebalbi-website | closed | WS-2018-0236 (Medium) detected in mem-1.1.0.tgz | security vulnerability | ## WS-2018-0236 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mem-1.1.0.tgz</b></summary>
<p>Memoize functions - An optimization used to speed up consecutive function calls by caching the result of calls with identical input</p>
<p>Library home page: <a href="https://registry.npmjs.org/mem/-/mem-1.1.0.tgz">https://registry.npmjs.org/mem/-/mem-1.1.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/tiarebalbi-website/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/tiarebalbi-website/node_modules/npm/node_modules/mem/package.json</p>
<p>
Dependency Hierarchy:
- semantic-release-15.13.24.tgz (Root Library)
- npm-5.1.15.tgz
- npm-6.11.3.tgz
- libnpx-10.2.0.tgz
- yargs-11.0.0.tgz
- os-locale-2.1.0.tgz
- :x: **mem-1.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tiarebalbi/tiarebalbi-website/commit/532baf4ecd6f7b333f9b9bc791f7c4c86e29a436">532baf4ecd6f7b333f9b9bc791f7c4c86e29a436</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In nodejs-mem before version 4.0.0 there is a memory leak due to old results not being removed from the cache despite reaching maxAge. Exploitation of this can lead to exhaustion of memory and subsequent denial of service.
<p>Publish Date: 2019-05-30
<p>URL: <a href=https://bugzilla.redhat.com/show_bug.cgi?id=1623744>WS-2018-0236</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1623744">https://bugzilla.redhat.com/show_bug.cgi?id=1623744</a></p>
<p>Release Date: 2019-05-30</p>
<p>Fix Resolution: 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | [text_combine: duplicate of title + body] | non_priority | [text: normalized duplicate] | 0 |
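The leak described in WS-2018-0236 above (cached results reaching `maxAge` but never being removed) and the behavior of the fix in mem 4.0.0 can be illustrated with a minimal memoizer sketch. This is an independent Python illustration of the eviction idea, not mem's actual implementation:

```python
import time

def memoize(fn, max_age=None):
    """Toy memoizer: expired entries are deleted on access.
    A version that skips the `del` keeps stale entries forever,
    which is the memory leak the advisory describes."""
    cache = {}
    def wrapper(*args):
        now = time.monotonic()
        if args in cache:
            value, expires = cache[args]
            if expires is None or now < expires:
                return value
            del cache[args]  # evict the stale entry instead of leaking it
        value = fn(*args)
        cache[args] = (value, None if max_age is None else now + max_age)
        return value
    wrapper.cache = cache
    return wrapper

calls = []
double = memoize(lambda x: (calls.append(x), x * 2)[1], max_age=0.05)

print(double(3), len(calls))   # 6 1  (computed)
print(double(3), len(calls))   # 6 1  (served from cache)
time.sleep(0.1)
print(double(3), len(calls))   # 6 2  (expired entry evicted, recomputed)
print(len(double.cache))       # 1    (no stale entries pile up)
```

Upgrading to mem 4.0.0, as the suggested fix states, is the real remedy; the sketch only shows why eviction on expiry stops the growth.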
262,919 | 8,272,608,617 | IssuesEvent | 2018-09-16 22:04:08 | javaee/glassfish | https://api.github.com/repos/javaee/glassfish | closed | [Blocking] Unable to update a GF Server 3.0.1 full distribution to 3.0.1.1 from Support Repository - Package Violation Constraint | 3_1_1-scrubbed Component: packaging Priority: Minor Type: Bug | The problem is seen on a community installation of GlassFish Server 3.0.1 when attempting to Update it through the Oracle Support repository to version Oracle GF Server 3.0.1 patch 1. After installation, the system is configured as follows:
* cd <GlassFish_Installation>
* bin/pkg set-publisher -P -k $HOME/Oracle_Glassfish_Server_3_Support.key.pem\
-c $HOME/Oracle_Glassfish_Server_3_Support.certificate.pem\
-O [https://pkg.oracle.com/glassfish/v3/support](https://pkg.oracle.com/glassfish/v3/support) release.glassfish.oracle.com
During the update, the following error is reported:
C:\glassfishv3\bin>pkg install glassfish-full-incorporation@3.0.1.1 glassfish-enterprise-full-profile@3.0.1.1
Creating Plan -
pkg: The following package(s) violated constraints:
Package pkg:/mq-server@4.5,5.11 conflicts with constraint in installed p
kg:/glassfish-full-incorporation:
Pkg mq-server: Optional min_version: 4.4,5.11 max version: 4.4,5
.11 defined by: pkg:/glassfish-full-incorporation
The process stops and the Update is not performed.
This problem is not seen when Updating from an Oracle GF server 3.0.1 distribution. Only with the community bits (glassfish-3.0.1-b22-unix.sh).
#### Environment
WinXP, Installation of glassfish-3.0.1-b22-unix.sh, Support Repository [https://pkg.oracle.com/glassfish/v3/support](https://pkg.oracle.com/glassfish/v3/support), update to 3.0.1 patch 1 version, Firefox browser 3.6.17 | 1.0 | [text_combine: duplicate of title + body] | priority | [text: normalized duplicate] | 1 |
239,556 | 7,799,805,308 | IssuesEvent | 2018-06-09 00:42:41 | tine20/Tine-2.0-Open-Source-Groupware-and-CRM | https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM | closed | 0005528:
update content_sequence of container on record move | Feature Request Mantis Tinebase high priority | **Reported by pschuele on 3 Feb 2012 13:45**
update content_sequence of container on record move
both containers need to be updated
| 1.0 | [text_combine: duplicate of title + body] | priority | [text: normalized duplicate] | 1 |
116,785 | 4,706,562,710 | IssuesEvent | 2016-10-13 17:32:24 | google/grr | https://api.github.com/repos/google/grr | opened | MySQL datastore Warning: Duplicate entry | bug Priority-Low | When running the tests on mysql 5.7 (installed from xenial ubuntu), I get tons of warnings. Since this code hasn't changed in a while I suspect this is just MySQL being more noisy in the newer version.
Sean could you take a look when you get a chance? Might be we need to give some people guidance on how to disable these warnings.
```
/grr/grr/grr/lib/data_stores/mysql_advanced_data_store.py:596: Warning: Duplicate entry '\i\xC1\x18B\xB3\xC7\xB4\x09\xB5\xE2L\xB2_\xCDU' for key 'PRIMARY'
connection.cursor.execute(query["query"], query["args"])
/grr/grr/grr/lib/data_stores/mysql_advanced_data_store.py:596: Warning: Duplicate entry '\xB7\x91h\xAE5\xDD}B)J6q~\x19\x9F\xA1' for key 'PRIMARY'
connection.cursor.execute(query["query"], query["args"])
/grr/grr/grr/lib/data_stores/mysql_advanced_data_store.py:596: Warning: Duplicate entry '\xB3M\xD7r\xDD_\xEA\x8A\xAA\xBAyl\xEF-\x08\x8E' for key 'PRIMARY'
connection.cursor.execute(query["query"], query["args"])
/grr/grr/grr/lib/data_stores/mysql_advanced_data_store.py:596: Warning: Duplicate entry 'QN-\xE0_\x02\xAA\xE3\xDE\xAA\x15z\x12\xDC$\xCA' for key 'PRIMARY'
connection.cursor.execute(query["query"], query["args"])
/grr/grr/grr/lib/data_stores/mysql_advanced_data_store.py:596: Warning: Duplicate entry '\x96\xA8guq\x96\xAB\xC8t`\xF6@\xB0x\xBC\xFA' for key 'PRIMARY'
connection.cursor.execute(query["query"], query["args"])
/grr/grr/grr/lib/data_stores/mysql_advanced_data_store.py:596: Warning: Duplicate entry '\x82y\x1B\x9E\xF0\x15\x11A\x94\x0E\x85\xCE\xA1\xCC`\xA0' for key 'PRIMARY'
connection.cursor.execute(query["query"], query["args"])
/grr/grr/grr/lib/data_stores/mysql_advanced_data_store.py:596: Warning: Duplicate entry '\xE5\xD4\x1F\x1F!\xC9t_w\xB3\xAC\x1A\xBD\xE3\xA8\x82' for key 'PRIMARY'
connection.cursor.execute(query["query"], query["args"])
Test name: MysqlAdvancedDataStoreTest
...........................................................
----------------------------------------------------------------------
Ran 59 tests in 36.773s
OK
``` | 1.0 | [text_combine: duplicate of title + body] | priority | [text: normalized duplicate] | 1 |
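One way to silence these driver-level "Duplicate entry" warnings (the kind of guidance the issue asks for) is Python's `warnings` filter. The sketch below uses a stand-in warning class so it runs standalone; real code would filter `MySQLdb.Warning` instead, since MySQLdb is the driver in the traceback:

```python
import warnings

class MySQLWarning(Warning):
    """Stand-in for MySQLdb.Warning (hypothetical; real code imports MySQLdb)."""

def execute_insert():
    # Simulates the driver emitting a warning on a duplicate PRIMARY key insert.
    warnings.warn("Duplicate entry '...' for key 'PRIMARY'", MySQLWarning)
    return "OK"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Ignore only duplicate-entry warnings; anything else stays visible.
    warnings.filterwarnings("ignore", message="Duplicate entry",
                            category=MySQLWarning)
    result = execute_insert()

print(result, len(caught))   # OK 0
```

The `message` argument is a regex matched against the start of the warning text, so unrelated MySQL warnings still surface.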
341,723 | 24,709,924,056 | IssuesEvent | 2022-10-19 23:06:23 | PinataCloud/Pinata-SDK | https://api.github.com/repos/PinataCloud/Pinata-SDK | closed | Question: Can I pre-calculate the CID returned by pinFileToIPFS? | question documentation | I need to be able to pre-calculate the CID returned by `pinFileToIPFS` - is this possible? | 1.0 | [text_combine: duplicate of title + body] | non_priority | [text: normalized duplicate] | 0 |
27,768 | 8,034,168,771 | IssuesEvent | 2018-07-29 15:32:04 | Krzmbrzl/SQDev | https://api.github.com/repos/Krzmbrzl/SQDev | closed | Error marker at wrong position | bug completed in Dev-Build | ```
if (isClass (configFile >> "CfgVehicles" >> _name)) {
}
_baseClass;
```
claims that there'd be a missing semicolon at `_name`, whilst it is actually missing after the closing parenthesis (missing `then`) and at the closing curly bracket. | 1.0 | [text_combine: duplicate of title + body] | non_priority | [text: normalized duplicate] | 0 |
450,798 | 13,019,446,222 | IssuesEvent | 2020-07-26 22:40:25 | constellation-app/constellation | https://api.github.com/repos/constellation-app/constellation | opened | Copy/paste from Histogram | enhancement priority request user relevant ux | <!--
### Requirements
* Filling out the template is required. Any pull request that does not include
enough information to be reviewed in a timely manner may be closed at the
maintainers' discretion.
* Have you read Constellation's Code of Conduct? By filing an issue, you are
expected to comply with it, including treating everyone with respect:
https://github.com/constellation-app/constellation/blob/master/CODE_OF_CONDUCT.md
-->
### Prerequisites
* [ ] Put an X between the brackets on this line if you have done all of the following:
* Running the latest version of Constellation
* Attached the ***Support Package*** via `Help` > `Support Package`
* Checked the FAQs: https://github.com/constellation-app/constellation/wiki/FAQ
* Checked that your issue isn't already filed: https://github.com/constellation-app/constellation/issues
* Checked that there is not already a module that provides the described functionality: https://github.com/constellation-app/constellation/wiki/Catalogue-of-Repositories
### Description
Selecting property values in the Histogram selects the corresponding nodes or transactions on a graph but does not allow the user to copy those property values to the clipboard (using ctrl-c or right-click copy). Doing this at the moment requires the user to open the Table view, show the column for the desired property and copy from there. This is far more time-consuming than the ability to copy directly from the Histogram view.
### Steps to Reproduce
1. [First Step]
2. [Second Step]
3. [and so on...]
**Expected behaviour:** [What you expect to happen]
**Actual behaviour:** [What actually happens]
**Reproduces how often:** [What percentage of the time does it reproduce?]
### Additional Information
Any additional information, configuration or data that might be necessary to reproduce the issue.
| 1.0 | [text_combine: duplicate of title + body] | priority | [text: normalized duplicate] | 1 |
505,603 | 14,642,386,529 | IssuesEvent | 2020-12-25 10:51:50 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | opened | Message Improvement | feature: enhancement priority: medium | Some code refactoring required to fix some performance-related core logic
- With the Message thread ajax, we send the full recipients list for every thread; this will be a little heavy on a large network.
> We should send recipients with some limit, like 10, and we also need to send the total recipient count with the thread objects.
- We need to provide a separate ajax endpoint to get message thread recipients so we can support pagination.
- One major refactoring we can do is to move our message logic to a REST endpoint instead of ajax; with this change we no longer need to maintain two chunks of code. | 1.0 | [text_combine: duplicate of title + body] | priority | [text: normalized duplicate] | 1 |
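The bounded-recipients idea in the issue above (return at most N recipients per thread plus the total, so clients can paginate) can be sketched as follows; the function and field names are hypothetical, not BuddyBoss APIs:

```python
def thread_recipients_page(recipients, page=1, per_page=10):
    """Return one page of a thread's recipients plus the total count,
    so the client can paginate instead of receiving every recipient."""
    start = (page - 1) * per_page
    return {
        "total": len(recipients),          # lets the UI show "10 of 250"
        "page": page,
        "per_page": per_page,
        "recipients": recipients[start:start + per_page],
    }

everyone = [f"user{i}" for i in range(25)]
first = thread_recipients_page(everyone)
print(first["total"], len(first["recipients"]))                      # 25 10
print(len(thread_recipients_page(everyone, page=3)["recipients"]))   # 5
```

Exposing this shape through a REST endpoint (the last bullet's suggestion) would let both the ajax UI and API consumers share one code path.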
94,221 | 27,148,962,122 | IssuesEvent | 2023-02-16 22:41:36 | dyninst/dyninst | https://api.github.com/repos/dyninst/dyninst | closed | Dockerfile spack.yaml property error | build | **Intention**
Building Dyninst in docker
**Bug & Reproduce**
```
$ docker build -t dyninst -f Dockerfile ../
[+] Building 196.3s (17/19)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:20.04 0.5s
=> [internal] load build context 0.1s
=> => transferring context: 115.76kB 0.1s
=> CACHED [ 1/15] FROM docker.io/library/ubuntu:20.04@sha256:4a45212e9518f35983a976eead0de5eecc555a2f047134e9dd2cfc589076a00d 0.0s
=> [ 2/15] RUN apt-get -qq update && apt-get -qq install -fy tzdata && apt-get -qq install -y --no-install-recommends 109.5s
=> [ 3/15] RUN python3 -m pip install --upgrade pip && python3 -m pip install clingo && dpkg-reconfigure tzdata 5.0s
=> [ 4/15] RUN apt-get install -y software-properties-common && add-apt-repository -y ppa:ubuntu-toolchain-r/test && ap 59.2s
=> [ 5/15] WORKDIR /opt 0.0s
=> [ 6/15] RUN git clone --depth 1 https://github.com/spack/spack 9.3s
=> [ 7/15] RUN python3 -m pip install botocore boto3 5.2s
=> [ 8/15] RUN spack external find --not-buildable gcc autoconf bzip2 git tar xz perl cmake m4 ncurses && spack config add ' 2.8s
=> [ 9/15] RUN printf "\n boost:\n externals:\n - spec: boost@1.71.0\n prefix: /usr\n buildable: false\n intel-t 0.4s
=> [10/15] WORKDIR /code 0.0s
=> [11/15] COPY . /code 2.6s
=> [12/15] WORKDIR /opt/dyninst-env 0.0s
=> ERROR [13/15] RUN . /opt/spack/share/spack/setup-env.sh && spack env create -d . && echo " concretization: together" 1.7s
------
> [13/15] RUN . /opt/spack/share/spack/setup-env.sh && spack env create -d . && echo " concretization: together" >> spack.yaml && spack env activate . && spack add cmake@3.16.3 && spack add perl@5.30.0 && spack add boost@1.71.0 && spack add elfutils@0.186 && spack add libiberty@2.33.1+pic && spack add intel-tbb@2020.3 && spack install --reuse:
#17 1.304 ==> Created environment in /opt/dyninst-env
#17 1.304 ==> You can activate this environment with:
#17 1.304 ==> spack env activate /opt/dyninst-env
#17 1.629 ==> Error: /opt/dyninst-env/spack.yaml:7: Additional properties are not allowed ('concretization' was unexpected)
------
executor failed running [/bin/sh -c . /opt/spack/share/spack/setup-env.sh && spack env create -d . && echo " concretization: together" >> spack.yaml && spack env activate . && spack add cmake@${CMAKE_VERSION} && spack add perl@${PERL_VERSION} && spack add boost@${BOOST_VERSION} && spack add elfutils@${ELFUTILS_VERSION} && spack add libiberty@${LIBIBERTY_VERSION}+pic && spack add intel-tbb@${INTELTBB_VERSION} && spack install --reuse]: exit code: 1
```
**System (please complete the following information):**
- platform: Docker version 20.10.17, build 100c701
- master branch | 1.0 | Dockerfile spack.yaml property error - **Intention**
Building Dyninst in docker
**Bug & Reproduce**
```
$ docker build -t dyninst -f Dockerfile ../
[+] Building 196.3s (17/19)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:20.04 0.5s
=> [internal] load build context 0.1s
=> => transferring context: 115.76kB 0.1s
=> CACHED [ 1/15] FROM docker.io/library/ubuntu:20.04@sha256:4a45212e9518f35983a976eead0de5eecc555a2f047134e9dd2cfc589076a00d 0.0s
=> [ 2/15] RUN apt-get -qq update && apt-get -qq install -fy tzdata && apt-get -qq install -y --no-install-recommends 109.5s
=> [ 3/15] RUN python3 -m pip install --upgrade pip && python3 -m pip install clingo && dpkg-reconfigure tzdata 5.0s
=> [ 4/15] RUN apt-get install -y software-properties-common && add-apt-repository -y ppa:ubuntu-toolchain-r/test && ap 59.2s
=> [ 5/15] WORKDIR /opt 0.0s
=> [ 6/15] RUN git clone --depth 1 https://github.com/spack/spack 9.3s
=> [ 7/15] RUN python3 -m pip install botocore boto3 5.2s
=> [ 8/15] RUN spack external find --not-buildable gcc autoconf bzip2 git tar xz perl cmake m4 ncurses && spack config add ' 2.8s
=> [ 9/15] RUN printf "\n boost:\n externals:\n - spec: boost@1.71.0\n prefix: /usr\n buildable: false\n intel-t 0.4s
=> [10/15] WORKDIR /code 0.0s
=> [11/15] COPY . /code 2.6s
=> [12/15] WORKDIR /opt/dyninst-env 0.0s
=> ERROR [13/15] RUN . /opt/spack/share/spack/setup-env.sh && spack env create -d . && echo " concretization: together" 1.7s
------
> [13/15] RUN . /opt/spack/share/spack/setup-env.sh && spack env create -d . && echo " concretization: together" >> spack.yaml && spack env activate . && spack add cmake@3.16.3 && spack add perl@5.30.0 && spack add boost@1.71.0 && spack add elfutils@0.186 && spack add libiberty@2.33.1+pic && spack add intel-tbb@2020.3 && spack install --reuse:
#17 1.304 ==> Created environment in /opt/dyninst-env
#17 1.304 ==> You can activate this environment with:
#17 1.304 ==> spack env activate /opt/dyninst-env
#17 1.629 ==> Error: /opt/dyninst-env/spack.yaml:7: Additional properties are not allowed ('concretization' was unexpected)
------
executor failed running [/bin/sh -c . /opt/spack/share/spack/setup-env.sh && spack env create -d . && echo " concretization: together" >> spack.yaml && spack env activate . && spack add cmake@${CMAKE_VERSION} && spack add perl@${PERL_VERSION} && spack add boost@${BOOST_VERSION} && spack add elfutils@${ELFUTILS_VERSION} && spack add libiberty@${LIBIBERTY_VERSION}+pic && spack add intel-tbb@${INTELTBB_VERSION} && spack install --reuse]: exit code: 1
```
**System (please complete the following information):**
- platform: Docker version 20.10.17, build 100c701
- master branch | non_priority | dockerfile spack yaml property error intention building dyninst in docker bug reproduce docker build t dyninst f dockerfile building load build definition from dockerfile transferring dockerfile load dockerignore transferring context load metadata for docker io library ubuntu load build context transferring context cached from docker io library ubuntu run apt get qq update apt get qq install fy tzdata apt get qq install y no install recommends run m pip install upgrade pip m pip install clingo dpkg reconfigure tzdata run apt get install y software properties common add apt repository y ppa ubuntu toolchain r test ap workdir opt run git clone depth run m pip install botocore run spack external find not buildable gcc autoconf git tar xz perl cmake ncurses spack config add run printf n boost n externals n spec boost n prefix usr n buildable false n intel t workdir code copy code workdir opt dyninst env error run opt spack share spack setup env sh spack env create d echo concretization together run opt spack share spack setup env sh spack env create d echo concretization together spack yaml spack env activate spack add cmake spack add perl spack add boost spack add elfutils spack add libiberty pic spack add intel tbb spack install reuse created environment in opt dyninst env you can activate this environment with spack env activate opt dyninst env error opt dyninst env spack yaml additional properties are not allowed concretization was unexpected executor failed running exit code system please complete the following information platform docker version build master branch | 0 |
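For reference, the schema error in the record above (`'concretization' was unexpected`) is what newer Spack reports for the legacy `concretization: together` environment key that the Dockerfile echoes into `spack.yaml`; recent Spack expresses the same setting as `concretizer: unify: true`. A sketch of a `spack.yaml` using the newer key — the spec list is illustrative, taken from the versions in the log:

```yaml
# Hypothetical spack.yaml for the Dockerfile's environment, using the
# modern 'concretizer: unify' key instead of the removed
# 'concretization: together' that trips the schema validation above.
spack:
  specs:
    - cmake@3.16.3
    - perl@5.30.0
    - boost@1.71.0
    - elfutils@0.186
    - libiberty@2.33.1+pic
    - intel-tbb@2020.3
  concretizer:
    unify: true
```

Equivalently, running `spack config add concretizer:unify:true` inside the activated environment should avoid hand-editing the file.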
258,732 | 8,179,346,429 | IssuesEvent | 2018-08-28 16:06:19 | melsicon/melsicon.de | https://api.github.com/repos/melsicon/melsicon.de | opened | Space around contact section | Priority: Low Type: Enhancement | ### **Type of Issue**
- [ ] Bug
- [X] Enhancement
- [ ] Documentation
- [ ] Maintenance
- [ ] Question
#### Priority
- [X] Low
- [ ] Medium
- [ ] High
- [ ] Critical
## **Description**
The space around the contact section is too cramped
## **Steps to reproduce**
Scroll to contact section
## **Possible Solution**
Add some padding or margin to .section-contact
| 1.0 | Space around contact section - ### **Type of Issue**
- [ ] Bug
- [X] Enhancement
- [ ] Documentation
- [ ] Maintenance
- [ ] Question
#### Priority
- [X] Low
- [ ] Medium
- [ ] High
- [ ] Critical
## **Description**
The space around the contact section is too cramped
## **Steps to reproduce**
Scroll to contact section
## **Possible Solution**
Add some padding or margin to .section-contact
| priority | space around contact section type of issue bug enhancement documentation maintenance question priority low medium high critical description the space around the contact section is too cramped steps to reproduce scroll to contact section possible solution add some padding or margin to section contact | 1 |
330,121 | 10,029,586,574 | IssuesEvent | 2019-07-17 14:12:53 | bibaev/code-completion-evaluation | https://api.github.com/repos/bibaev/code-completion-evaluation | closed | Add diagnostics to the report when something goes wrong | effort: medium priority: low type: bug | Currently, if you run with Docker turned off, you get this:

Meanwhile, two exceptions are dumped into the log. The problem is that the report currently has no information about which files we ran on, or that on some files nothing came out because an error occurred.
It would be more correct to add a file with the errors to the report and make its hyperlink red, or to add another button, `errors`, that opens a page with the exception stack trace.
We could also write a short summary at the beginning of the report, like `No errors occurred` or `1 error(s) happened`.
The Docker example is made up, but other errors can also happen while building the UAST :) | 1.0 | Add diagnostics to the report when something goes wrong - Currently, if you run with Docker turned off, you get this:

Meanwhile, two exceptions are dumped into the log. The problem is that the report currently has no information about which files we ran on, or that on some files nothing came out because an error occurred.
It would be more correct to add a file with the errors to the report and make its hyperlink red, or to add another button, `errors`, that opens a page with the exception stack trace.
We could also write a short summary at the beginning of the report, like `No errors occurred` or `1 error(s) happened`.
The Docker example is made up, but other errors can also happen while building the UAST :) | priority | add diagnostics to the report when something goes wrong currently if you run with docker turned off you get this meanwhile two exceptions are dumped into the log the problem is that the report currently has no information about which files we ran on or that on some files nothing came out because an error occurred it would be more correct to add a file with the errors to the report and make its hyperlink red or to add another button errors that opens a page with the exception stack trace we could also write a short summary at the beginning of the report like no errors occurred error s happened the docker example is made up but other errors can also happen while building the uast | 1 |
9,360 | 11,411,253,941 | IssuesEvent | 2020-02-01 04:09:26 | ballerina-platform/ballerina-spec | https://api.github.com/repos/ballerina-platform/ballerina-spec | closed | New approach to identity for XML | enhancement incompatible | Two XML sequences are `===` if they have the same length and their constituent items are pairwise `===`. Two text XML items are `===` if they are `==`. Other types of XML item are `===` if they have the same storage location. | True | New approach to identity for XML - Two XML sequences are `===` if they have the same length and their constituent items are pairwise `===`. Two text XML items are `===` if they are `==`. Other types of XML item are `===` if they have the same storage location. | non_priority | new approach to identity for xml two xml sequences are if they have the same length and their constituent items are pairwise two text xml items are if they are other types of xml item are if they have the same storage location | 0 |
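The identity rule quoted in the record above can be modeled outside Ballerina. The sketch below is a hypothetical Python rendering (the class names and `exact_eq` helper are mine, not from the spec): sequences are exactly equal when their items are pairwise exactly equal, text items compare by value, and other item kinds compare by storage location, i.e. object identity.

```python
class XmlText:
    """A text XML item, carrying only its string value."""
    def __init__(self, value):
        self.value = value

class XmlElement:
    """A non-text XML item; identity is its storage location."""
    def __init__(self, name):
        self.name = name

def exact_eq(a, b):
    """Model of XML '===': pairwise for sequences, '==' for text items,
    same storage location (object identity) for other item kinds."""
    if isinstance(a, list) and isinstance(b, list):
        # same length and constituent items pairwise exactly equal
        return len(a) == len(b) and all(exact_eq(x, y) for x, y in zip(a, b))
    if isinstance(a, XmlText) and isinstance(b, XmlText):
        return a.value == b.value   # text items: value equality
    return a is b                   # other items: same storage location
```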
436,489 | 12,550,735,643 | IssuesEvent | 2020-06-06 12:15:51 | googleapis/elixir-google-api | https://api.github.com/repos/googleapis/elixir-google-api | opened | Synthesis failed for BigQueryConnection | autosynth failure priority: p1 type: bug | Hello! Autosynth couldn't regenerate BigQueryConnection. :broken_heart:
Here's the output from running `synth.py`:
```
2020-06-06 05:15:08,130 autosynth [INFO] > logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api
2020-06-06 05:15:09,292 autosynth [DEBUG] > Running: git config --global core.excludesfile /home/kbuilder/.autosynth-gitignore
2020-06-06 05:15:09,296 autosynth [DEBUG] > Running: git config user.name yoshi-automation
2020-06-06 05:15:09,301 autosynth [DEBUG] > Running: git config user.email yoshi-automation@google.com
2020-06-06 05:15:09,304 autosynth [DEBUG] > Running: git config push.default simple
2020-06-06 05:15:09,308 autosynth [DEBUG] > Running: git branch -f autosynth-bigqueryconnection
2020-06-06 05:15:09,311 autosynth [DEBUG] > Running: git checkout autosynth-bigqueryconnection
Switched to branch 'autosynth-bigqueryconnection'
2020-06-06 05:15:09,683 autosynth [INFO] > Running synthtool
2020-06-06 05:15:09,683 autosynth [INFO] > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/big_query_connection/synth.metadata', 'synth.py', '--']
2020-06-06 05:15:09,685 autosynth [DEBUG] > Running: /tmpfs/src/github/synthtool/env/bin/python3 -m synthtool --metadata clients/big_query_connection/synth.metadata synth.py -- BigQueryConnection
tee: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api: Is a directory
2020-06-06 05:15:09,919 synthtool [DEBUG] > Executing /home/kbuilder/.cache/synthtool/elixir-google-api/synth.py.
On branch autosynth-bigqueryconnection
nothing to commit, working tree clean
2020-06-06 05:15:11,821 synthtool [DEBUG] > Running: docker run --rm -v/tmpfs/tmp/tmpvfens2xc/repo:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh BigQueryConnection
DEBUG:synthtool:Running: docker run --rm -v/tmpfs/tmp/tmpvfens2xc/repo:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh BigQueryConnection
/workspace /workspace
[33mThe mix.lock file was generated with a newer version of Hex. Update your client by running `mix local.hex` to avoid losing data.[0m
Resolving Hex dependencies...
Dependency resolution completed:
Unchanged:
certifi 2.5.1
google_api_discovery 0.7.0
google_gax 0.3.2
hackney 1.15.2
idna 6.0.0
jason 1.2.1
metrics 1.0.1
mime 1.3.1
mimerl 1.2.0
oauth2 0.9.4
parse_trans 3.3.0
poison 3.1.0
ssl_verify_fun 1.1.5
temp 0.4.7
tesla 1.3.3
unicode_util_compat 0.4.1
* Getting google_api_discovery (Hex package)
* Getting tesla (Hex package)
* Getting oauth2 (Hex package)
* Getting temp (Hex package)
* Getting jason (Hex package)
* Getting poison (Hex package)
* Getting hackney (Hex package)
* Getting certifi (Hex package)
* Getting idna (Hex package)
* Getting metrics (Hex package)
* Getting mimerl (Hex package)
* Getting ssl_verify_fun (Hex package)
* Getting unicode_util_compat (Hex package)
* Getting parse_trans (Hex package)
* Getting mime (Hex package)
* Getting google_gax (Hex package)
[33mThe mix.lock file was generated with a newer version of Hex. Update your client by running `mix local.hex` to avoid losing data.[0m
==> temp
Compiling 3 files (.ex)
Generated temp app
===> Compiling parse_trans
===> Compiling mimerl
===> Compiling metrics
===> Compiling unicode_util_compat
===> Compiling idna
==> jason
Compiling 8 files (.ex)
Generated jason app
warning: String.strip/1 is deprecated. Use String.trim/1 instead
/workspace/deps/poison/mix.exs:4
==> poison
Compiling 4 files (.ex)
warning: Integer.to_char_list/2 is deprecated. Use Integer.to_charlist/2 instead
lib/poison/encoder.ex:173
Generated poison app
==> ssl_verify_fun
Compiling 7 files (.erl)
Generated ssl_verify_fun app
===> Compiling certifi
===> Compiling hackney
==> oauth2
Compiling 13 files (.ex)
Generated oauth2 app
==> mime
Compiling 2 files (.ex)
Generated mime app
==> tesla
Compiling 26 files (.ex)
Generated tesla app
==> google_gax
Compiling 5 files (.ex)
Generated google_gax app
==> google_api_discovery
Compiling 21 files (.ex)
Generated google_api_discovery app
==> google_apis
Compiling 27 files (.ex)
warning: System.cwd/0 is deprecated. Use File.cwd/0 instead
lib/google_apis/publisher.ex:24
Generated google_apis app
12:15:46.152 [info] FETCHING: https://bigqueryconnection.googleapis.com/$discovery/GOOGLE_REST_SIMPLE_URI?version=v1beta1
12:15:46.292 [info] FETCHING: https://bigqueryconnection.googleapis.com/$discovery/rest?version=v1beta1
12:15:46.299 [info] FOUND: https://bigqueryconnection.googleapis.com/$discovery/rest?version=v1beta1
Revision check: old=20200430, new=20200528, generating=true
Creating leading directories
Writing AuditConfig to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/audit_config.ex.
Writing AuditLogConfig to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/audit_log_config.ex.
Writing Binding to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/binding.ex.
Writing CloudSqlCredential to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/cloud_sql_credential.ex.
Writing CloudSqlProperties to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/cloud_sql_properties.ex.
Writing Connection to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/connection.ex.
Writing ConnectionCredential to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/connection_credential.ex.
Writing Empty to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/empty.ex.
Writing Expr to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/expr.ex.
Writing GetIamPolicyRequest to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/get_iam_policy_request.ex.
Writing GetPolicyOptions to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/get_policy_options.ex.
Writing ListConnectionsResponse to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/list_connections_response.ex.
Writing Policy to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/policy.ex.
Writing SetIamPolicyRequest to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/set_iam_policy_request.ex.
Writing TestIamPermissionsRequest to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/test_iam_permissions_request.ex.
Writing TestIamPermissionsResponse to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/test_iam_permissions_response.ex.
Writing Projects to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/api/projects.ex.
Writing connection.ex.
Writing metadata.ex.
Writing mix.exs
Writing README.md
Writing LICENSE
Writing .gitignore
Writing config/config.exs
Writing test/test_helper.exs
12:15:46.721 [info] Found only discovery_revision and/or formatting changes. Not significant enough for a PR.
fixing file permissions
2020-06-06 05:15:50,001 synthtool [DEBUG] > Wrote metadata to clients/big_query_connection/synth.metadata.
DEBUG:synthtool:Wrote metadata to clients/big_query_connection/synth.metadata.
2020-06-06 05:15:50,034 autosynth [DEBUG] > Running: git clean -fdx
Removing __pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 615, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 476, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 555, in _inner_main
).synthesize(base_synth_log_path)
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 121, in synthesize
with open(log_file_path, "rt") as fp:
IsADirectoryError: [Errno 21] Is a directory: '/tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api'
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/567e2ab8-5e4f-4fb0-8bae-9d0cc90aa1af/targets/github%2Fsynthtool;config=default/tests;query=elixir-google-api;failed=false).
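The final `IsADirectoryError` in the traceback above happens because `synthesizer.py` passes a directory path to `open()` — the earlier `tee: ... Is a directory` message is the same symptom. A minimal sketch of the kind of guard that avoids it; this is illustrative, not the actual synthtool fix, and the `sponge_log.log` fallback filename is an assumption:

```python
import os

def read_synth_log(log_file_path, fallback_name="sponge_log.log"):
    """Read a synth log, tolerating a directory passed instead of a file.

    Calling open() directly on a directory raises IsADirectoryError
    (errno 21), as seen in the traceback; if given a directory, look for
    a conventional log file inside it instead.
    """
    if os.path.isdir(log_file_path):
        log_file_path = os.path.join(log_file_path, fallback_name)
    with open(log_file_path, "rt") as fp:
        return fp.read()
```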
| 1.0 | Synthesis failed for BigQueryConnection - Hello! Autosynth couldn't regenerate BigQueryConnection. :broken_heart:
Here's the output from running `synth.py`:
```
2020-06-06 05:15:08,130 autosynth [INFO] > logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api
2020-06-06 05:15:09,292 autosynth [DEBUG] > Running: git config --global core.excludesfile /home/kbuilder/.autosynth-gitignore
2020-06-06 05:15:09,296 autosynth [DEBUG] > Running: git config user.name yoshi-automation
2020-06-06 05:15:09,301 autosynth [DEBUG] > Running: git config user.email yoshi-automation@google.com
2020-06-06 05:15:09,304 autosynth [DEBUG] > Running: git config push.default simple
2020-06-06 05:15:09,308 autosynth [DEBUG] > Running: git branch -f autosynth-bigqueryconnection
2020-06-06 05:15:09,311 autosynth [DEBUG] > Running: git checkout autosynth-bigqueryconnection
Switched to branch 'autosynth-bigqueryconnection'
2020-06-06 05:15:09,683 autosynth [INFO] > Running synthtool
2020-06-06 05:15:09,683 autosynth [INFO] > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/big_query_connection/synth.metadata', 'synth.py', '--']
2020-06-06 05:15:09,685 autosynth [DEBUG] > Running: /tmpfs/src/github/synthtool/env/bin/python3 -m synthtool --metadata clients/big_query_connection/synth.metadata synth.py -- BigQueryConnection
tee: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api: Is a directory
2020-06-06 05:15:09,919 synthtool [DEBUG] > Executing /home/kbuilder/.cache/synthtool/elixir-google-api/synth.py.
On branch autosynth-bigqueryconnection
nothing to commit, working tree clean
2020-06-06 05:15:11,821 synthtool [DEBUG] > Running: docker run --rm -v/tmpfs/tmp/tmpvfens2xc/repo:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh BigQueryConnection
DEBUG:synthtool:Running: docker run --rm -v/tmpfs/tmp/tmpvfens2xc/repo:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh BigQueryConnection
/workspace /workspace
[33mThe mix.lock file was generated with a newer version of Hex. Update your client by running `mix local.hex` to avoid losing data.[0m
Resolving Hex dependencies...
Dependency resolution completed:
Unchanged:
certifi 2.5.1
google_api_discovery 0.7.0
google_gax 0.3.2
hackney 1.15.2
idna 6.0.0
jason 1.2.1
metrics 1.0.1
mime 1.3.1
mimerl 1.2.0
oauth2 0.9.4
parse_trans 3.3.0
poison 3.1.0
ssl_verify_fun 1.1.5
temp 0.4.7
tesla 1.3.3
unicode_util_compat 0.4.1
* Getting google_api_discovery (Hex package)
* Getting tesla (Hex package)
* Getting oauth2 (Hex package)
* Getting temp (Hex package)
* Getting jason (Hex package)
* Getting poison (Hex package)
* Getting hackney (Hex package)
* Getting certifi (Hex package)
* Getting idna (Hex package)
* Getting metrics (Hex package)
* Getting mimerl (Hex package)
* Getting ssl_verify_fun (Hex package)
* Getting unicode_util_compat (Hex package)
* Getting parse_trans (Hex package)
* Getting mime (Hex package)
* Getting google_gax (Hex package)
[33mThe mix.lock file was generated with a newer version of Hex. Update your client by running `mix local.hex` to avoid losing data.[0m
==> temp
Compiling 3 files (.ex)
Generated temp app
===> Compiling parse_trans
===> Compiling mimerl
===> Compiling metrics
===> Compiling unicode_util_compat
===> Compiling idna
==> jason
Compiling 8 files (.ex)
Generated jason app
warning: String.strip/1 is deprecated. Use String.trim/1 instead
/workspace/deps/poison/mix.exs:4
==> poison
Compiling 4 files (.ex)
warning: Integer.to_char_list/2 is deprecated. Use Integer.to_charlist/2 instead
lib/poison/encoder.ex:173
Generated poison app
==> ssl_verify_fun
Compiling 7 files (.erl)
Generated ssl_verify_fun app
===> Compiling certifi
===> Compiling hackney
==> oauth2
Compiling 13 files (.ex)
Generated oauth2 app
==> mime
Compiling 2 files (.ex)
Generated mime app
==> tesla
Compiling 26 files (.ex)
Generated tesla app
==> google_gax
Compiling 5 files (.ex)
Generated google_gax app
==> google_api_discovery
Compiling 21 files (.ex)
Generated google_api_discovery app
==> google_apis
Compiling 27 files (.ex)
warning: System.cwd/0 is deprecated. Use File.cwd/0 instead
lib/google_apis/publisher.ex:24
Generated google_apis app
12:15:46.152 [info] FETCHING: https://bigqueryconnection.googleapis.com/$discovery/GOOGLE_REST_SIMPLE_URI?version=v1beta1
12:15:46.292 [info] FETCHING: https://bigqueryconnection.googleapis.com/$discovery/rest?version=v1beta1
12:15:46.299 [info] FOUND: https://bigqueryconnection.googleapis.com/$discovery/rest?version=v1beta1
Revision check: old=20200430, new=20200528, generating=true
Creating leading directories
Writing AuditConfig to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/audit_config.ex.
Writing AuditLogConfig to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/audit_log_config.ex.
Writing Binding to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/binding.ex.
Writing CloudSqlCredential to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/cloud_sql_credential.ex.
Writing CloudSqlProperties to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/cloud_sql_properties.ex.
Writing Connection to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/connection.ex.
Writing ConnectionCredential to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/connection_credential.ex.
Writing Empty to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/empty.ex.
Writing Expr to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/expr.ex.
Writing GetIamPolicyRequest to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/get_iam_policy_request.ex.
Writing GetPolicyOptions to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/get_policy_options.ex.
Writing ListConnectionsResponse to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/list_connections_response.ex.
Writing Policy to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/policy.ex.
Writing SetIamPolicyRequest to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/set_iam_policy_request.ex.
Writing TestIamPermissionsRequest to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/test_iam_permissions_request.ex.
Writing TestIamPermissionsResponse to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/model/test_iam_permissions_response.ex.
Writing Projects to clients/big_query_connection/lib/google_api/big_query_connection/v1beta1/api/projects.ex.
Writing connection.ex.
Writing metadata.ex.
Writing mix.exs
Writing README.md
Writing LICENSE
Writing .gitignore
Writing config/config.exs
Writing test/test_helper.exs
12:15:46.721 [info] Found only discovery_revision and/or formatting changes. Not significant enough for a PR.
fixing file permissions
2020-06-06 05:15:50,001 synthtool [DEBUG] > Wrote metadata to clients/big_query_connection/synth.metadata.
DEBUG:synthtool:Wrote metadata to clients/big_query_connection/synth.metadata.
2020-06-06 05:15:50,034 autosynth [DEBUG] > Running: git clean -fdx
Removing __pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 615, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 476, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 555, in _inner_main
).synthesize(base_synth_log_path)
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 121, in synthesize
with open(log_file_path, "rt") as fp:
IsADirectoryError: [Errno 21] Is a directory: '/tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api'
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/567e2ab8-5e4f-4fb0-8bae-9d0cc90aa1af/targets/github%2Fsynthtool;config=default/tests;query=elixir-google-api;failed=false).
| priority | synthesis failed for bigqueryconnection hello autosynth couldn t regenerate bigqueryconnection broken heart here s the output from running synth py autosynth logs will be written to tmpfs src github synthtool logs googleapis elixir google api autosynth running git config global core excludesfile home kbuilder autosynth gitignore autosynth running git config user name yoshi automation autosynth running git config user email yoshi automation google com autosynth running git config push default simple autosynth running git branch f autosynth bigqueryconnection autosynth running git checkout autosynth bigqueryconnection switched to branch autosynth bigqueryconnection autosynth running synthtool autosynth autosynth running tmpfs src github synthtool env bin m synthtool metadata clients big query connection synth metadata synth py bigqueryconnection tee tmpfs src github synthtool logs googleapis elixir google api is a directory synthtool executing home kbuilder cache synthtool elixir google api synth py on branch autosynth bigqueryconnection nothing to commit working tree clean synthtool running docker run rm v tmpfs tmp repo workspace v var run docker sock var run docker sock e user group w workspace gcr io cloud devrel public resources scripts generate client sh bigqueryconnection debug synthtool running docker run rm v tmpfs tmp repo workspace v var run docker sock var run docker sock e user group w workspace gcr io cloud devrel public resources scripts generate client sh bigqueryconnection workspace workspace mix lock file was generated with a newer version of hex update your client by running mix local hex to avoid losing data resolving hex dependencies dependency resolution completed unchanged certifi google api discovery google gax hackney idna jason metrics mime mimerl parse trans poison ssl verify fun temp tesla unicode util compat getting google api discovery hex package getting tesla hex package getting hex package getting temp hex package getting 
jason hex package getting poison hex package getting hackney hex package getting certifi hex package getting idna hex package getting metrics hex package getting mimerl hex package getting ssl verify fun hex package getting unicode util compat hex package getting parse trans hex package getting mime hex package getting google gax hex package mix lock file was generated with a newer version of hex update your client by running mix local hex to avoid losing data temp compiling files ex generated temp app compiling parse trans compiling mimerl compiling metrics compiling unicode util compat compiling idna jason compiling files ex generated jason app warning string strip is deprecated use string trim instead workspace deps poison mix exs poison compiling files ex warning integer to char list is deprecated use integer to charlist instead lib poison encoder ex generated poison app ssl verify fun compiling files erl generated ssl verify fun app compiling certifi compiling hackney compiling files ex generated app mime compiling files ex generated mime app tesla compiling files ex generated tesla app google gax compiling files ex generated google gax app google api discovery compiling files ex generated google api discovery app google apis compiling files ex warning system cwd is deprecated use file cwd instead lib google apis publisher ex generated google apis app fetching fetching found revision check old new generating true creating leading directories writing auditconfig to clients big query connection lib google api big query connection model audit config ex writing auditlogconfig to clients big query connection lib google api big query connection model audit log config ex writing binding to clients big query connection lib google api big query connection model binding ex writing cloudsqlcredential to clients big query connection lib google api big query connection model cloud sql credential ex writing cloudsqlproperties to clients big query connection lib google api 
big query connection model cloud sql properties ex writing connection to clients big query connection lib google api big query connection model connection ex writing connectioncredential to clients big query connection lib google api big query connection model connection credential ex writing empty to clients big query connection lib google api big query connection model empty ex writing expr to clients big query connection lib google api big query connection model expr ex writing getiampolicyrequest to clients big query connection lib google api big query connection model get iam policy request ex writing getpolicyoptions to clients big query connection lib google api big query connection model get policy options ex writing listconnectionsresponse to clients big query connection lib google api big query connection model list connections response ex writing policy to clients big query connection lib google api big query connection model policy ex writing setiampolicyrequest to clients big query connection lib google api big query connection model set iam policy request ex writing testiampermissionsrequest to clients big query connection lib google api big query connection model test iam permissions request ex writing testiampermissionsresponse to clients big query connection lib google api big query connection model test iam permissions response ex writing projects to clients big query connection lib google api big query connection api projects ex writing connection ex writing metadata ex writing mix exs writing readme md writing license writing gitignore writing config config exs writing test test helper exs found only discovery revision and or formatting changes not significant enough for a pr fixing file permissions synthtool wrote metadata to clients big query connection synth metadata debug synthtool wrote metadata to clients big query connection synth metadata autosynth running git clean fdx removing pycache traceback most recent call last file home kbuilder 
pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main synthesize base synth log path file tmpfs src github synthtool autosynth synthesizer py line in synthesize with open log file path rt as fp isadirectoryerror is a directory tmpfs src github synthtool logs googleapis elixir google api google internal developers can see the full log | 1 |
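The autosynth log in the row above ends with an `IsADirectoryError` raised when `synthesizer.py` calls `open(log_file_path, "rt")` on a path that is actually a directory. A minimal sketch of guarding that call — the helper name `read_log` is hypothetical, not part of the synthtool codebase:

```python
import os

def read_log(log_file_path):
    """Return the log contents, or None when the path is a directory
    (the exact failure mode in the traceback above)."""
    if os.path.isdir(log_file_path):
        # A directory was passed where a file was expected; bail out
        # instead of letting open() raise IsADirectoryError.
        return None
    with open(log_file_path, "rt") as fp:
        return fp.read()
```

This is only an illustration of the guard; the real fix would depend on why the synth pipeline handed a directory path to the log reader in the first place.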
799,727 | 28,312,701,138 | IssuesEvent | 2023-04-10 16:49:52 | SparkDevNetwork/Rock | https://api.github.com/repos/SparkDevNetwork/Rock | closed | Slow Rock Startup | Status: Confirmed Priority: High Fixed in v15.0 Fixed in v14.2 | ### Please go through all the tasks below
- [ ] Check this box only after you have successfully completed both the above tasks
### Please provide a brief description of the problem. Please do not forget to attach the relevant screenshots from your side.
We're seeing significant performance issues in Rock V14 during startup that are very painful. Prior to V14, a Rock restart typically took less than a minute. Now it is usually more than ten minutes and closer to fifteen in production (and worse in our sandboxes and development environments).
The problem seems to originate with this commit: <https://github.com/SparkDevNetwork/Rock/commit/63c4f0d103976640fae3b20c029a7c180d12c8d7>
Below is a CPU Usage Call Tree from Visual Studio using data captured during this prolonged startup. (These numbers are probably worse than production because of the reduced resources of my development environment and the increased latency between my box and the production SQL server, but the signature is consistent with what we see in production.)
You can see that most of the time is attributable to loading entity cache (rows 44 and 45). Working back up the tree, this seems to be driven by an AttributeCache.All() method (row 34) that is endeavoring to load _all_ attributes into cache. That method is called from AttributeCache.GetAttributePropertyDependencies() (row 33), a new method introduced in this commit.
Working further up the call stack, this is driven by AttributeCache.GetDirtyAttributeIdsForPropertyChange() (row 28), another new method in this commit, which is being called from DbContext.RockPostSave() (Row 27) and ultimately from Application_Start() (row 22) by way of EntityTypeService.RegisterEntityTypes() (row 23).
I believe that prior to this change, AttributeCache relied on a lazy load strategy. If it isn't absolutely necessary to preload that cache during startup, we'd really like to see this move back in that direction.
| Row | Function Name | Total CPU [unit, %] | Self CPU [unit, %] | Module | Category |
| --- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------- | ------------------ | ----------------- | ----------------- |
| 1 | + iisexpress (PID: 788) | 72334 (100.00%) | 960 (1.33%) | Multiple modules | |
| 2 | \- + [External Call] | | | | |
| 3 | (IIS internals omitted for brevity) | | | | |
| 20 | \------------------ + [External Call] | | | | |
| 21 | webengine4.dll!0x00007fffc30e543a | 42530 (58.80%) | 0 (0.00%) | webengine4 | Database |
| 22 | \------------------- + RockWeb.Global.Application_Start(object, System.EventArgs) | 42530 (58.80%) | 0 (0.00%) | app_code.ftwypmst | Database |
| 23 | \-------------------- + Rock.Model.EntityTypeService.RegisterEntityTypes() | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 24 | \--------------------- + Rock.Data.DbContext.SaveChanges() | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 25 | \---------------------- + Rock.Data.DbContext.SaveChanges(bool) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 26 | \----------------------- + Rock.Data.DbContext.SaveChanges(Rock.Data.SaveChangesArgs) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 27 | \------------------------ + Rock.Data.DbContext.RockPostSave(System.Collections.Generic.List<ContextItem>, Rock.Model.PersonAlias, bool) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 28 | \------------------------- + Rock.Web.Cache.AttributeCache.GetDirtyAttributeIdsForPropertyChange(int, System.Func<System.Collections.Generic.IReadOnlyList<string>>) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 29 | \-------------------------- + Rock.Web.Cache.RockCache.GetOrAddExisting(string, System.Func<object>) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 30 | \--------------------------- + Rock.Web.Cache.RockCache.GetOrAddExisting(string, string, System.Func<object>) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 31 | \---------------------------- + Rock.Web.Cache.RockCache.GetOrAddExisting(string, string, System.Func<object>, System.TimeSpan) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 32 | \----------------------------- + Rock.Web.Cache.RockCache.GetOrAddExisting(RockCacheGetOrAddExistingArgs) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 33 | \------------------------------ + Rock.Web.Cache.AttributeCache.GetAttributePropertyDependencies() | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 34 | \------------------------------- + Rock.Web.Cache.AttributeCache.All() | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 35 | \-------------------------------- + Rock.Web.Cache.AttributeCache.All(Rock.Data.RockContext) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 36 | \--------------------------------- + Rock.Web.Cache.EntityCache<T, T>.All(Rock.Data.RockContext) | 42530 (58.80%) | 11 (0.02%) | rock | Database |
| 37 | \---------------------------------- + Rock.Web.Cache.EntityCache<T, T>.Get(int, Rock.Data.RockContext) | 42468 (58.71%) | 0 (0.00%) | rock | Database |
| 38 | \----------------------------------- + Rock.Web.Cache.EntityItemCache<T>.GetOrAddExisting(int, System.Func<T>) | 42465 (58.71%) | 1 (0.00%) | rock | Database |
| 39 | \------------------------------------ + Rock.Web.Cache.ItemCache<T>.GetOrAddExisting(int, System.Func<T>, System.Func<System.Collections.Generic.List<string>>) | 42464 (58.71%) | 3 (0.00%) | rock | Database |
| 40 | \------------------------------------- + Rock.Web.Cache.ItemCache<T>.GetOrAddExisting(string, System.Func<T>, System.Func<System.Collections.Generic.List<string>>) | 42447 (58.68%) | 1 (0.00%) | rock | Database |
| 41 | \-------------------------------------- + Rock.Web.Cache.EntityCache<T, T>.Get.AnonymousMethod__0() | 39540 (54.66%) | 5 (0.01%) | rock | Database |
| 42 | \--------------------------------------- + Rock.Web.Cache.EntityCache<T, T>.QueryDb(int, Rock.Data.DbContext) | 39322 (54.36%) | 2 (0.00%) | rock | Database |
| 43 | \---------------------------------------- + Rock.Web.Cache.EntityCache<T, T>.QueryDbWithContext(int, Rock.Data.DbContext) | 33664 (46.54%) | 8 (0.01%) | rock | Database |
| 44 | \----------------------------------------- - Rock.Data.Service<T>.Get(int) | 17307 (23.93%) | 7 (0.01%) | rock | Database |
| 45 | \----------------------------------------- - Rock.Web.Cache.AttributeCache.SetFromEntity(Rock.Data.IEntity) | 16245 (22.46%) | 13 (0.02%) | rock | Database |
### Expected Behavior
Rock IIS server should start up in approximately one minute (on par with previous version).
### Actual Behavior
Rock IIS server takes ten to fifteen minutes to start.
### Steps to Reproduce
Restart Rock (V14).
### Rock Version
14.2
### Client Culture Setting
en-US | 1.0 | Slow Rock Startup - ### Please go through all the tasks below
- [ ] Check this box only after you have successfully completed both the above tasks
### Please provide a brief description of the problem. Please do not forget to attach the relevant screenshots from your side.
We're seeing significant performance issues in Rock V14 during startup that are very painful. Prior to V14, a Rock restart typically took less than a minute. Now it is usually more than ten minutes and closer to fifteen in production (and worse in our sandboxes and development environments).
The problem seems to originate with this commit: <https://github.com/SparkDevNetwork/Rock/commit/63c4f0d103976640fae3b20c029a7c180d12c8d7>
Below is a CPU Usage Call Tree from Visual Studio using data captured during this prolonged startup. (These numbers are probably worse than production because of the reduced resources of my development environment and the increased latency between my box and the production SQL server, but the signature is consistent with what we see in production.)
You can see that most of the time is attributable to loading entity cache (rows 44 and 45). Working back up the tree, this seems to be driven by an AttributeCache.All() method (row 34) that is endeavoring to load _all_ attributes into cache. That method is called from AttributeCache.GetAttributePropertyDependencies() (row 33), a new method introduced in this commit.
Working further up the call stack, this is driven by AttributeCache.GetDirtyAttributeIdsForPropertyChange() (row 28), another new method in this commit, which is being called from DbContext.RockPostSave() (Row 27) and ultimately from Application_Start() (row 22) by way of EntityTypeService.RegisterEntityTypes() (row 23).
I believe that prior to this change, AttributeCache relied on a lazy load strategy. If it isn't absolutely necessary to preload that cache during startup, we'd really like to see this move back in that direction.
| Row | Function Name | Total CPU [unit, %] | Self CPU [unit, %] | Module | Category |
| --- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------- | ------------------ | ----------------- | ----------------- |
| 1 | + iisexpress (PID: 788) | 72334 (100.00%) | 960 (1.33%) | Multiple modules | |
| 2 | \- + [External Call] | | | | |
| 3 | (IIS internals omitted for brevity) | | | | |
| 20 | \------------------ + [External Call] | | | | |
| 21 | webengine4.dll!0x00007fffc30e543a | 42530 (58.80%) | 0 (0.00%) | webengine4 | Database |
| 22 | \------------------- + RockWeb.Global.Application_Start(object, System.EventArgs) | 42530 (58.80%) | 0 (0.00%) | app_code.ftwypmst | Database |
| 23 | \-------------------- + Rock.Model.EntityTypeService.RegisterEntityTypes() | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 24 | \--------------------- + Rock.Data.DbContext.SaveChanges() | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 25 | \---------------------- + Rock.Data.DbContext.SaveChanges(bool) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 26 | \----------------------- + Rock.Data.DbContext.SaveChanges(Rock.Data.SaveChangesArgs) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 27 | \------------------------ + Rock.Data.DbContext.RockPostSave(System.Collections.Generic.List<ContextItem>, Rock.Model.PersonAlias, bool) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 28 | \------------------------- + Rock.Web.Cache.AttributeCache.GetDirtyAttributeIdsForPropertyChange(int, System.Func<System.Collections.Generic.IReadOnlyList<string>>) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 29 | \-------------------------- + Rock.Web.Cache.RockCache.GetOrAddExisting(string, System.Func<object>) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 30 | \--------------------------- + Rock.Web.Cache.RockCache.GetOrAddExisting(string, string, System.Func<object>) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 31 | \---------------------------- + Rock.Web.Cache.RockCache.GetOrAddExisting(string, string, System.Func<object>, System.TimeSpan) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 32 | \----------------------------- + Rock.Web.Cache.RockCache.GetOrAddExisting(RockCacheGetOrAddExistingArgs) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 33 | \------------------------------ + Rock.Web.Cache.AttributeCache.GetAttributePropertyDependencies() | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 34 | \------------------------------- + Rock.Web.Cache.AttributeCache.All() | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 35 | \-------------------------------- + Rock.Web.Cache.AttributeCache.All(Rock.Data.RockContext) | 42530 (58.80%) | 0 (0.00%) | rock | Database |
| 36 | \--------------------------------- + Rock.Web.Cache.EntityCache<T, T>.All(Rock.Data.RockContext) | 42530 (58.80%) | 11 (0.02%) | rock | Database |
| 37 | \---------------------------------- + Rock.Web.Cache.EntityCache<T, T>.Get(int, Rock.Data.RockContext) | 42468 (58.71%) | 0 (0.00%) | rock | Database |
| 38 | \----------------------------------- + Rock.Web.Cache.EntityItemCache<T>.GetOrAddExisting(int, System.Func<T>) | 42465 (58.71%) | 1 (0.00%) | rock | Database |
| 39 | \------------------------------------ + Rock.Web.Cache.ItemCache<T>.GetOrAddExisting(int, System.Func<T>, System.Func<System.Collections.Generic.List<string>>) | 42464 (58.71%) | 3 (0.00%) | rock | Database |
| 40 | \------------------------------------- + Rock.Web.Cache.ItemCache<T>.GetOrAddExisting(string, System.Func<T>, System.Func<System.Collections.Generic.List<string>>) | 42447 (58.68%) | 1 (0.00%) | rock | Database |
| 41 | \-------------------------------------- + Rock.Web.Cache.EntityCache<T, T>.Get.AnonymousMethod__0() | 39540 (54.66%) | 5 (0.01%) | rock | Database |
| 42 | \--------------------------------------- + Rock.Web.Cache.EntityCache<T, T>.QueryDb(int, Rock.Data.DbContext) | 39322 (54.36%) | 2 (0.00%) | rock | Database |
| 43 | \---------------------------------------- + Rock.Web.Cache.EntityCache<T, T>.QueryDbWithContext(int, Rock.Data.DbContext) | 33664 (46.54%) | 8 (0.01%) | rock | Database |
| 44 | \----------------------------------------- - Rock.Data.Service<T>.Get(int) | 17307 (23.93%) | 7 (0.01%) | rock | Database |
| 45 | \----------------------------------------- - Rock.Web.Cache.AttributeCache.SetFromEntity(Rock.Data.IEntity) | 16245 (22.46%) | 13 (0.02%) | rock | Database |
### Expected Behavior
Rock IIS server should start up in approximately one minute (on par with previous version).
### Actual Behavior
Rock IIS server takes ten to fifteen minutes to start.
### Steps to Reproduce
Restart Rock (V14).
### Rock Version
14.2
### Client Culture Setting
en-US | priority | slow rock startup please go through all the tasks below check this box only after you have successfully completed both the above tasks please provide a brief description of the problem please do not forget to attach the relevant screenshots from your side we re seeing significant performance issues in rock during startup that are very painful prior to a rock restart typically took less than a minute now it is usually more than ten minutes and closer to fifteen in production and worse in our sandboxes and development environments the problem seems to originate with this commit below is a cpu usage call tree from visual studio using data captured during this prolonged startup these numbers are probably worse than production because of the reduced resources of my development environment and the increased latency between my box and the production sql server but the signature is consistent with what we see in production you can see that most of the time is attributable to loading entity cache rows and working back up the tree this seems to be drivin by an attributecache all method row that is endeavoring to load all attributes into cache that method is called from attributecache getattributepropertydependencies row a new method introduced in this commit working further up the call stack this is driven by attributecache getdirtyattributeidsforpropertychange row another new method in this commit which is being called from dbcontext rockpostsave row and ultimately from application start row by way of entitytypeservice registerentitytypes row i believe that prior to this change attributecache relied on a lazy load strategy if it isn t absolutely necessary to preload that cache during startup we d really like to see this move back in that direction row function name total cpu self cpu module category iisexpress pid multiple modules iis internals omitted for brevity dll database rockweb global application start object system eventargs app code ftwypmst 
database rock model entitytypeservice registerentitytypes rock database rock data dbcontext savechanges rock database rock data dbcontext savechanges bool rock database rock data dbcontext savechanges rock data savechangesargs rock database rock data dbcontext rockpostsave system collections generic list rock model personalias bool rock database rock web cache attributecache getdirtyattributeidsforpropertychange int system func rock database rock web cache rockcache getoraddexisting string system func rock database rock web cache rockcache getoraddexisting string string system func rock database rock web cache rockcache getoraddexisting string string system func system timespan rock database rock web cache rockcache getoraddexisting rockcachegetoraddexistingargs rock database rock web cache attributecache getattributepropertydependencies rock database rock web cache attributecache all rock database rock web cache attributecache all rock data rockcontext rock database rock web cache entitycache all rock data rockcontext rock database rock web cache entitycache get int rock data rockcontext rock database rock web cache entityitemcache getoraddexisting int system func rock database rock web cache itemcache getoraddexisting int system func system func rock database rock web cache itemcache getoraddexisting string system func system func rock database rock web cache entitycache get anonymousmethod rock database rock web cache entitycache querydb int rock data dbcontext rock database rock web cache entitycache querydbwithcontext int rock data dbcontext rock database rock data service get int rock database rock web cache attributecache setfromentity rock data ientity rock database expected behavior rock iis server should start up in approximately one minute on par with previous version actual behavior rock iis server takes ten to fifteen minutes to start steps to reproduce restart rock rock version client culture setting en us | 1 |
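The Rock report above argues the startup regression comes from eagerly loading every attribute into cache during `Application_Start`, and asks to restore the earlier lazy-load strategy. A hedged Python sketch of the lazy pattern being requested — the class and names are hypothetical illustrations, not Rock's actual C# `AttributeCache` API:

```python
class LazyCache:
    """Load items on first access instead of preloading the whole
    table at startup (the pre-v14 behaviour the reporter describes)."""

    def __init__(self, loader):
        self._loader = loader   # loader(key) -> value, e.g. one DB query
        self._items = {}

    def get(self, key):
        if key not in self._items:      # only hit the backing store on a miss
            self._items[key] = self._loader(key)
        return self._items[key]

calls = []
cache = LazyCache(lambda k: calls.append(k) or k.upper())
cache.get("a")
cache.get("a")   # second access is served from cache; loader ran once
```

The trade-off is the usual one: eager preloading pays the full cost up front at startup, while lazy loading spreads it across first accesses at runtime.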
702,557 | 24,125,741,882 | IssuesEvent | 2022-09-21 00:03:58 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | Allow custom profile fields to be shown in profile card | help wanted area: settings (admin/org) priority: high release goal area: popovers | At present, a user's profile card displays their name, local time, and role.

Organizations may find it useful to display additional fields there, such as pronouns (#21214), GitHub username, job title, team, etc., and we should make it possible to do so.
## In Organization settings > Custom profile fields
1. All field types other than "Long text" or "Person" should have a checkbox option (shown below the hint) to "Display in profile summary" (final wording TBD).
2. We should add a column of checkboxes (just to the left of "Actions") labeled "Display" that also controls this feature.
To make sure the short profile looks good, we should limit the number of fields that can be displayed at a time. We can try a max of 2 to start with. If the maximum number of fields is already selected, all unselected checkboxes should be disabled, with a tooltip that says: "At most 2 custom profile fields can be displayed."
## In the short profile
We should display the fields in the following order:
1. Name
2. Role
3. Custom field 1
4. Custom field 2
4. Local time
The order of the profile fields in the short profile should match the order in the settings.
To save space, we should try just showing the content of the profile field, with the title of the profile field displayed in a tooltip on hover.
If the user leaves the field blank, we should not show a blank line for it.
----
This issue also solves #21214.
[CZO discussion thread](https://chat.zulip.org/#narrow/stream/101-design/topic/pronouns.20in.20profile.20card) | 1.0 | Allow custom profile fields to be shown in profile card - At present, a user's profile card displays their name, local time, and role.

Organizations may find it useful to display additional fields there, such as pronouns (#21214), GitHub username, job title, team, etc., and we should make it possible to do so.
## In Organization settings > Custom profile fields
1. All field types other than "Long text" or "Person" should have a checkbox option (shown below the hint) to "Display in profile summary" (final wording TBD).
2. We should add a column of checkboxes (just to the left of "Actions") labeled "Display" that also controls this feature.
To make sure the short profile looks good, we should limit the number of fields that can be displayed at a time. We can try a max of 2 to start with. If the maximum number of fields is already selected, all unselected checkboxes should be disabled, with a tooltip that says: "At most 2 custom profile fields can be displayed."
## In the short profile
We should display the fields in the following order:
1. Name
2. Role
3. Custom field 1
4. Custom field 2
4. Local time
The order of the profile fields in the short profile should match the order in the settings.
To save space, we should try just showing the content of the profile field, with the title of the profile field displayed in a tooltip on hover.
If the user leaves the field blank, we should not show a blank line for it.
----
This issue also solves #21214.
[CZO discussion thread](https://chat.zulip.org/#narrow/stream/101-design/topic/pronouns.20in.20profile.20card) | priority | allow custom profile fields to be shown in profile card at present a user s profile card displays their name local time and role organizations may find it useful to display additional fields there such as pronouns github username job title team etc and we should make it possible to do so in organization settings custom profile fields all field types other than long text or person should have a checkbox option shown below the hint to display in profile summary final wording tbd we should add a column of checkboxes just to the left of actions labeled display that also controls this feature to make sure the short profile looks good we should limit the number of fields that can be displayed at a time we can try a max of to start with if the maximum number of fields is already selected all unselected checkboxes should be disabled with a tooltip that says at most custom profile fields can be displayed in the short profile we should display the fields in the following order name role custom field custom field local time the order of the profile fields in the short profile should match the order in the settings to save space we should try just showing the content of the profile field with the title of the profile field displayed in a tooltip on hover if the user leaves the field blank we should not show a blank line for it this issue also solves | 1 |
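The Zulip row above proposes capping displayed custom profile fields at two, with unselected checkboxes disabled once the cap is reached. A small sketch of that enable/disable rule — the function name and constant are hypothetical, not Zulip's actual frontend code:

```python
MAX_DISPLAYED_FIELDS = 2  # limit proposed in the issue

def can_select(selected_ids, field_id):
    """Whether the 'Display in profile summary' checkbox for field_id
    should be enabled, mirroring the disable-when-full behaviour."""
    if field_id in selected_ids:
        return True  # an already-selected field can always be unchecked
    return len(selected_ids) < MAX_DISPLAYED_FIELDS
```

Already-selected checkboxes stay enabled so users can free up a slot; only new selections are blocked at the cap.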
273,791 | 23,786,103,489 | IssuesEvent | 2022-09-02 10:13:06 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | Add automated tests for resizable editor functionality | Automated Testing Needs Dev [Status] Stale | **Is your feature request related to a problem? Please describe.**
The resizable editor work from PR https://github.com/WordPress/gutenberg/pull/19082 needs some form of automated testing.
**Describe the solution you'd like**
Ideally we should add e2e tests as discussed in [this thread](https://github.com/WordPress/gutenberg/pull/19082#discussion_r375735554) and perhaps unit tests for some of the helper functions.
**Describe alternatives you've considered**
See thread linked to above.
| 1.0 | Add automated tests for resizable editor functionality - **Is your feature request related to a problem? Please describe.**
The resizable editor work from PR https://github.com/WordPress/gutenberg/pull/19082 needs some form of automated testing.
**Describe the solution you'd like**
Ideally we should add e2e tests as discussed in [this thread](https://github.com/WordPress/gutenberg/pull/19082#discussion_r375735554) and perhaps unit tests for some of the helper functions.
**Describe alternatives you've considered**
See thread linked to above.
| non_priority | add automated tests for resizable editor functionality is your feature request related to a problem please describe the resizable editor work from pr needs some form of automated testing describe the solution you d like ideally we should add tests as discussed in and perhaps unit tests for some of the helper functions describe alternatives you ve considered see thread linked to above | 0 |
52,955 | 6,287,542,755 | IssuesEvent | 2017-07-19 15:09:37 | CLARIAH/wp5_mediasuite | https://api.github.com/repos/CLARIAH/wp5_mediasuite | closed | Connect user annotations (video) to user who added them | Effort: improvement Function: annotation (video) Importance: medium MS-Component-function ToDo: testing needed Work: functionality | This is for users to see which annotations they made (attached to their user, as logged in the surfConnex). | 1.0 | Connect user annotations (video) to user who added them - This is for users to see which annotations they made (attached to their user, as logged in the surfConnex). | non_priority | connect user annotations video to user who added them this is for users to see which annotations they made attached to their user as logged in the surfconnex | 0 |
131,187 | 10,684,066,983 | IssuesEvent | 2019-10-22 09:40:26 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | CI Failures: AzureBlobStoreRepositoryTests.testIndicesDeletedFromRepository | :Distributed/Snapshot/Restore >test-failure | This test has failed periodically (but consistently):
https://groups.google.com/a/elastic.co/forum/#!searchin/build-elasticsearch/AzureBlobStoreRepositoryTests$20testIndicesDeletedFromRepository%7Csort:date
I have not been able to reproduce but (like #47380) it also looks like its coming from the Azure plugin:
```
13:09:53 1> Caused by: com.microsoft.azure.storage.StorageException: An unknown failure occurred : Unexpected end of file from server
13:09:53 1> at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:67) ~[azure-storage-8.4.0.jar:?]
13:09:53 1> at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:220) ~[azure-storage-8.4.0.jar:?]
13:09:53 1> at com.microsoft.azure.storage.blob.CloudBlockBlob.uploadFullBlob(CloudBlockBlob.java:1035) ~[azure-storage-8.4.0.jar:?]
13:09:53 1> at com.microsoft.azure.storage.blob.CloudBlockBlob.upload(CloudBlockBlob.java:864) ~[azure-storage-8.4.0.jar:?]
13:09:53 1> at com.microsoft.azure.storage.blob.CloudBlockBlob.upload(CloudBlockBlob.java:743) ~[azure-storage-8.4.0.jar:?]
13:09:53 1> at org.elasticsearch.repositories.azure.AzureStorageService.lambda$writeBlob$15(AzureStorageService.java:332) ~[main/:?]
13:09:53 1> at org.elasticsearch.repositories.azure.SocketAccess.lambda$doPrivilegedVoidException$0(SocketAccess.java:64) ~[main/:?]
13:09:53 1> at java.security.AccessController.doPrivileged(Native Method) ~[?:?]
13:09:53 1> at org.elasticsearch.repositories.azure.SocketAccess.doPrivilegedVoidException(SocketAccess.java:63) ~[main/:?]
13:09:53 1> at org.elasticsearch.repositories.azure.AzureStorageService.writeBlob(AzureStorageService.java:331) ~[main/:?]
13:09:53 1> at org.elasticsearch.repositories.azure.AzureBlobStore.writeBlob(AzureBlobStore.java:120) ~[main/:?]
13:09:53 1> at org.elasticsearch.repositories.azure.AzureBlobContainer.writeBlob(AzureBlobContainer.java:102) ~[main/:?]
13:09:53 1> at org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotFile(BlobStoreRepository.java:1358) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
13:09:53 1> at org.elasticsearch.repositories.blobstore.BlobStoreRepository$2.doRun(BlobStoreRepository.java:1115) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
13:09:53 1> ... 5 more
```
```13:09:53 2> REPRODUCE WITH: ./gradlew ':plugins:repository-azure:test' --tests "org.elasticsearch.repositories.azure.AzureBlobStoreRepositoryTests.testIndicesDeletedFromRepository" -Dtests.seed=D48C184AD8376ABD -Dtests.security.manager=true -Dtests.locale=lv-LV -Dtests.timezone=Pacific/Pitcairn -Dcompiler.java=12 -Druntime.java=11``` | 1.0 | CI Failures: AzureBlobStoreRepositoryTests.testIndicesDeletedFromRepository - This test has failed periodically (but consistently):
https://groups.google.com/a/elastic.co/forum/#!searchin/build-elasticsearch/AzureBlobStoreRepositoryTests$20testIndicesDeletedFromRepository%7Csort:date
I have not been able to reproduce but (like #47380) it also looks like its coming from the Azure plugin:
```
13:09:53 1> Caused by: com.microsoft.azure.storage.StorageException: An unknown failure occurred : Unexpected end of file from server
13:09:53 1> at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:67) ~[azure-storage-8.4.0.jar:?]
13:09:53 1> at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:220) ~[azure-storage-8.4.0.jar:?]
13:09:53 1> at com.microsoft.azure.storage.blob.CloudBlockBlob.uploadFullBlob(CloudBlockBlob.java:1035) ~[azure-storage-8.4.0.jar:?]
13:09:53 1> at com.microsoft.azure.storage.blob.CloudBlockBlob.upload(CloudBlockBlob.java:864) ~[azure-storage-8.4.0.jar:?]
13:09:53 1> at com.microsoft.azure.storage.blob.CloudBlockBlob.upload(CloudBlockBlob.java:743) ~[azure-storage-8.4.0.jar:?]
13:09:53 1> at org.elasticsearch.repositories.azure.AzureStorageService.lambda$writeBlob$15(AzureStorageService.java:332) ~[main/:?]
13:09:53 1> at org.elasticsearch.repositories.azure.SocketAccess.lambda$doPrivilegedVoidException$0(SocketAccess.java:64) ~[main/:?]
13:09:53 1> at java.security.AccessController.doPrivileged(Native Method) ~[?:?]
13:09:53 1> at org.elasticsearch.repositories.azure.SocketAccess.doPrivilegedVoidException(SocketAccess.java:63) ~[main/:?]
13:09:53 1> at org.elasticsearch.repositories.azure.AzureStorageService.writeBlob(AzureStorageService.java:331) ~[main/:?]
13:09:53 1> at org.elasticsearch.repositories.azure.AzureBlobStore.writeBlob(AzureBlobStore.java:120) ~[main/:?]
13:09:53 1> at org.elasticsearch.repositories.azure.AzureBlobContainer.writeBlob(AzureBlobContainer.java:102) ~[main/:?]
13:09:53 1> at org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotFile(BlobStoreRepository.java:1358) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
13:09:53 1> at org.elasticsearch.repositories.blobstore.BlobStoreRepository$2.doRun(BlobStoreRepository.java:1115) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
13:09:53 1> ... 5 more
```
```13:09:53 2> REPRODUCE WITH: ./gradlew ':plugins:repository-azure:test' --tests "org.elasticsearch.repositories.azure.AzureBlobStoreRepositoryTests.testIndicesDeletedFromRepository" -Dtests.seed=D48C184AD8376ABD -Dtests.security.manager=true -Dtests.locale=lv-LV -Dtests.timezone=Pacific/Pitcairn -Dcompiler.java=12 -Druntime.java=11``` | non_priority | ci failures azureblobstorerepositorytests testindicesdeletedfromrepository this test has failed periodically but consistently i have not been able to reproduce but like it also looks like its coming from the azure plugin caused by com microsoft azure storage storageexception an unknown failure occurred unexpected end of file from server at com microsoft azure storage storageexception translateexception storageexception java at com microsoft azure storage core executionengine executewithretry executionengine java at com microsoft azure storage blob cloudblockblob uploadfullblob cloudblockblob java at com microsoft azure storage blob cloudblockblob upload cloudblockblob java at com microsoft azure storage blob cloudblockblob upload cloudblockblob java at org elasticsearch repositories azure azurestorageservice lambda writeblob azurestorageservice java at org elasticsearch repositories azure socketaccess lambda doprivilegedvoidexception socketaccess java at java security accesscontroller doprivileged native method at org elasticsearch repositories azure socketaccess doprivilegedvoidexception socketaccess java at org elasticsearch repositories azure azurestorageservice writeblob azurestorageservice java at org elasticsearch repositories azure azureblobstore writeblob azureblobstore java at org elasticsearch repositories azure azureblobcontainer writeblob azureblobcontainer java at org elasticsearch repositories blobstore blobstorerepository snapshotfile blobstorerepository java at org elasticsearch repositories blobstore blobstorerepository dorun blobstorerepository java more reproduce with gradlew plugins repository azure test tests org elasticsearch repositories azure azureblobstorerepositorytests testindicesdeletedfromrepository dtests seed dtests security manager true dtests locale lv lv dtests timezone pacific pitcairn dcompiler java druntime java | 0 |
652,038 | 21,518,837,534 | IssuesEvent | 2022-04-28 12:32:54 | HGustavs/LenaSYS | https://api.github.com/repos/HGustavs/LenaSYS | opened | Rescue the poop emoji! | High priority Group-1-2022 | Someone have removed the holy poop emoji! We need to bring it back! Its a staple for the diagram!

| 1.0 | Rescue the poop emoji! - Someone have removed the holy poop emoji! We need to bring it back! Its a staple for the diagram!

| priority | rescue the poop emoji someone have removed the holy poop emoji we need to bring it back its a staple for the diagram | 1 |
344,121 | 30,717,183,948 | IssuesEvent | 2023-07-27 13:44:44 | Nexters/o-escape-be | https://api.github.com/repos/Nexters/o-escape-be | closed | Write shop test code | 📃test | ### Issue type
- [x] Test
### Issue details
- Write unit tests for shop
### Checklist
- [ ] Have you written tests for all logic based on the API spec?
- [ ] Does the code you wrote work as intended? | 1.0 | Write shop test code - ### Issue type
- [x] Test
### Issue details
- Write unit tests for shop
### Checklist
- [ ] Have you written tests for all logic based on the API spec?
- [ ] Does the code you wrote work as intended? | non_priority | write shop test code issue type test issue details write unit tests for shop checklist have you written tests for all logic based on the api spec does the code you wrote work as intended | 0 |
114,571 | 11,851,672,540 | IssuesEvent | 2020-03-24 18:30:10 | yarnpkg/berry | https://api.github.com/repos/yarnpkg/berry | closed | [Bug] Docs say deferred versions are stored in .yarn/releases; are actually stored in .yarn/versions | bug documentation | - [ ] I'd be willing to implement a fix
**Describe the bug**
Docs here: https://yarnpkg.com/features/release-workflow
Say that deferred versions (`yarn version -d`) are stored in .yarn/releases but it looks like they actually get stored in .yarn/versions.
I omitted the rest of the issue template because this looks like a typo in the docs; not a runtime bug.
<!--
**To Reproduce**
**Screenshots**
**Environment if relevant (please complete the following information):**
- OS: [e.g. OSX, Linux, Windows, ...]
- Node version [e.g. 8.15.0, 10.15.1, ...]
- Yarn version [e.g. 2.0.0-rc1, ...]
**Additional context**
Add any other context about the problem here.
--> | 1.0 | [Bug] Docs say deferred versions are stored in .yarn/releases; are actually stored in .yarn/versions - - [ ] I'd be willing to implement a fix
**Describe the bug**
Docs here: https://yarnpkg.com/features/release-workflow
Say that deferred versions (`yarn version -d`) are stored in .yarn/releases but it looks like they actually get stored in .yarn/versions.
I omitted the rest of the issue template because this looks like a typo in the docs; not a runtime bug.
<!--
**To Reproduce**
**Screenshots**
**Environment if relevant (please complete the following information):**
- OS: [e.g. OSX, Linux, Windows, ...]
- Node version [e.g. 8.15.0, 10.15.1, ...]
- Yarn version [e.g. 2.0.0-rc1, ...]
**Additional context**
Add any other context about the problem here.
--> | non_priority | docs say deferred versions are stored in yarn releases are actually stored in yarn versions i d be willing to implement a fix describe the bug docs here say that deferred versions yarn version d are stored in yarn releases but it looks like they actually get stored in yarn versions i omitted the rest of the issue template because this looks like a typo in the docs not a runtime bug to reproduce screenshots environment if relevant please complete the following information os node version yarn version additional context add any other context about the problem here | 0 |
86,438 | 24,850,291,124 | IssuesEvent | 2022-10-26 19:25:34 | regen-network/regen-ledger | https://api.github.com/repos/regen-network/regen-ledger | closed | Set up completions for goreleaser | Type: Build | ## Summary
<!-- Short, concise description of the proposed feature -->
Ref https://github.com/regen-network/regen-ledger/pull/1546#issue-1408540078
https://carlosbecker.com/posts/golang-completions-cobra/
> users would have to enable completions manually. I think it’s nicer to enable the completions upon installing the package, and since I release my Go projects with [GoReleaser](https://goreleaser.com/), I can leverage it to handle this for me.
____
#### For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged
- [ ] Contributor assigned/self-assigned
| 1.0 | Set up completions for goreleaser - ## Summary
<!-- Short, concise description of the proposed feature -->
Ref https://github.com/regen-network/regen-ledger/pull/1546#issue-1408540078
https://carlosbecker.com/posts/golang-completions-cobra/
> users would have to enable completions manually. I think it’s nicer to enable the completions upon installing the package, and since I release my Go projects with [GoReleaser](https://goreleaser.com/), I can leverage it to handle this for me.
____
#### For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged
- [ ] Contributor assigned/self-assigned
| non_priority | set up completions for goreleaser summary ref users would have to enable completions manually i think it’s nicer to enable the completions upon installing the package and since i release my go projects with i can leverage it to handle this for me for admin use not duplicate issue appropriate labels applied appropriate contributors tagged contributor assigned self assigned | 0 |
772,922 | 27,141,683,159 | IssuesEvent | 2023-02-16 16:46:40 | googleapis/python-bigquery-sqlalchemy | https://api.github.com/repos/googleapis/python-bigquery-sqlalchemy | closed | tests.sqlalchemy_dialect_compliance.test_dialect_compliance.TimeTest_bigquery+bigquery: test_null_bound_comparison failed | type: bug priority: p1 flakybot: issue api: bigquery | Note: #694 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 074321ddaa10001773e7e6044f4a0df1bb530331
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/82580611-18ba-4d10-8e25-a7bf752e1da1), [Sponge](http://sponge2/82580611-18ba-4d10-8e25-a7bf752e1da1)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 311, in from_call
result: Optional[TResult] = func()
^^^^^^
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 255, in <lambda>
lambda: ihook(item=item, **kwds), when=when, reraise=reraise
^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/pluggy/_callers.py", line 60, in _multicall
return outcome.get_result()
^^^^^^^^^^^^^^^^^^^^
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 175, in pytest_runtest_teardown
item.session._setupstate.teardown_exact(item, nextitem)
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 419, in teardown_exact
self._teardown_towards(needed_collectors)
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 434, in _teardown_towards
raise exc
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 427, in _teardown_towards
self._pop_and_teardown()
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 387, in _pop_and_teardown
self._teardown_with_finalization(colitem)
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 408, in _teardown_with_finalization
assert colitem in self.stack
^^^^^^^^^^^^^^^^^^^^^
AssertionError</pre></details> | 1.0 | tests.sqlalchemy_dialect_compliance.test_dialect_compliance.TimeTest_bigquery+bigquery: test_null_bound_comparison failed - Note: #694 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 074321ddaa10001773e7e6044f4a0df1bb530331
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/82580611-18ba-4d10-8e25-a7bf752e1da1), [Sponge](http://sponge2/82580611-18ba-4d10-8e25-a7bf752e1da1)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 311, in from_call
result: Optional[TResult] = func()
^^^^^^
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 255, in <lambda>
lambda: ihook(item=item, **kwds), when=when, reraise=reraise
^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/pluggy/_callers.py", line 60, in _multicall
return outcome.get_result()
^^^^^^^^^^^^^^^^^^^^
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 175, in pytest_runtest_teardown
item.session._setupstate.teardown_exact(item, nextitem)
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 419, in teardown_exact
self._teardown_towards(needed_collectors)
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 434, in _teardown_towards
raise exc
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 427, in _teardown_towards
self._pop_and_teardown()
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 387, in _pop_and_teardown
self._teardown_with_finalization(colitem)
File "/tmpfs/src/github/python-bigquery-sqlalchemy/.nox/compliance/lib/python3.11/site-packages/_pytest/runner.py", line 408, in _teardown_with_finalization
assert colitem in self.stack
^^^^^^^^^^^^^^^^^^^^^
AssertionError</pre></details> | priority | tests sqlalchemy dialect compliance test dialect compliance timetest bigquery bigquery test null bound comparison failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output traceback most recent call last file tmpfs src github python bigquery sqlalchemy nox compliance lib site packages pytest runner py line in from call result optional func file tmpfs src github python bigquery sqlalchemy nox compliance lib site packages pytest runner py line in lambda ihook item item kwds when when reraise reraise file tmpfs src github python bigquery sqlalchemy nox compliance lib site packages pluggy hooks py line in call return self hookexec self name self get hookimpls kwargs firstresult file tmpfs src github python bigquery sqlalchemy nox compliance lib site packages pluggy manager py line in hookexec return self inner hookexec hook name methods kwargs firstresult file tmpfs src github python bigquery sqlalchemy nox compliance lib site packages pluggy callers py line in multicall return outcome get result file tmpfs src github python bigquery sqlalchemy nox compliance lib site packages pluggy result py line in get result raise ex with traceback ex file tmpfs src github python bigquery sqlalchemy nox compliance lib site packages pluggy callers py line in multicall res hook impl function args file tmpfs src github python bigquery sqlalchemy nox compliance lib site packages pytest runner py line in pytest runtest teardown item session setupstate teardown exact item nextitem file tmpfs src github python bigquery sqlalchemy nox compliance lib site packages pytest runner py line in teardown exact self teardown towards needed collectors file tmpfs src github python bigquery sqlalchemy nox compliance lib site packages pytest runner py line in teardown towards raise exc file tmpfs src github python bigquery sqlalchemy nox compliance lib site packages pytest runner py line in teardown towards self pop and teardown file tmpfs src github python bigquery sqlalchemy nox compliance lib site packages pytest runner py line in pop and teardown self teardown with finalization colitem file tmpfs src github python bigquery sqlalchemy nox compliance lib site packages pytest runner py line in teardown with finalization assert colitem in self stack assertionerror | 1 |
130,279 | 18,061,419,456 | IssuesEvent | 2021-09-20 14:20:58 | hoprnet/hoprnet-org | https://api.github.com/repos/hoprnet/hoprnet-org | closed | /token Fix detail in "Token supply distribution"-chart | workflow:new issue type:design | # Page
/token - Token supply distribution (M)
# Current behavior
in the Token supply distribution (M)-chart and there in the black overlay:
instead of for example "Month 16" it says there "16_Month 16", due to some field-error.
# Expected behavior

- Please realize this in every language
| 1.0 | /token Fix detail in "Token supply distribution"-chart - # Page
/token - Token supply distribution (M)
# Current behavior
in the Token supply distribution (M)-chart and there in the black overlay:
instead of for example "Month 16" it says there "16_Month 16", due to some field-error.
# Expected behavior

- Please realize this in every language
| non_priority | token fix detail in token supply distribution chart page token token supply distribution m current behavior in the token supply distribution m chart and there in the black overlay instead of for example month it says there month due to some field error expected behavior please realize this in every language | 0 |
193,274 | 6,883,361,281 | IssuesEvent | 2017-11-21 09:08:34 | wordpress-mobile/AztecEditor-Android | https://api.github.com/repos/wordpress-mobile/AztecEditor-Android | closed | Add external interface to change caption shortcode | enhancement high priority | The Caption shortcode plugin supports captions in HTML but there is no easy way to modify it from Aztec client's perspective.
There should be an easy way to get and set the caption for a particular image based on its ID attribute. | 1.0 | Add external interface to change caption shortcode - The Caption shortcode plugin supports captions in HTML but there is no easy way to modify it from Aztec client's perspective.
There should be an easy way to get and set the caption for a particular image based on its ID attribute. | priority | add external interface to change caption shortcode the caption shortcode plugin supports captions in html but there is no easy way to modify it from aztec client s perspective there should be an easy way to get and set the caption for a particular image based on its id attribute | 1 |
530,503 | 15,429,804,066 | IssuesEvent | 2021-03-06 05:35:36 | dotnet/machinelearning-modelbuilder | https://api.github.com/repos/dotnet/machinelearning-modelbuilder | closed | image classification eval/try model doesn't work | Bug Bash Priority:0 Priority:1 | I added a new item, trained local image classification with CPU with this dataset:
https://www.kaggle.com/jeffheaton/iris-computer-vision
After training, in Evaluate page, I tried to test several images, but keep getting this error:

| 2.0 | image classification eval/try model doesn't work - I added a new item, trained local image classification with CPU with this dataset:
https://www.kaggle.com/jeffheaton/iris-computer-vision
After training, in Evaluate page, I tried to test several images, but keep getting this error:

| priority | image classification eval try model doesn t work i added a new item trained local image classification with cpu with this dataset after training in evaluate page i tried to test several images but keep getting this error | 1 |
516,767 | 14,987,604,888 | IssuesEvent | 2021-01-28 23:16:03 | googleapis/google-cloud-go | https://api.github.com/repos/googleapis/google-cloud-go | closed | spanner/spansql: generated columns | api: spanner priority: p2 type: feature request | **Client**
spansql Version v1.13.0
**Environment**
windows
**Go Environment**
$ go version : go version go1.13 windows/amd64
$ go env
If you extract the following example
CREATE TABLE users (
Id STRING(20) NOT NULL,
FirstName STRING(50),
LastName STRING(50),
Age INT64 NOT NULL,
FullName STRING(100) AS (ARRAY_TO_STRING([FirstName, LastName], " ")) STORED,
) PRIMARY KEY (Id);
You got error : ParseDDL("CREATE TABLE users (\n\tId STRING(20) NOT NULL,\n\tFirstName STRING(50),\n\tLastName STRING(50),\n\tAge INT64 NOT NULL,\n\tFullName STRING(100) AS (ARRAY_TO_STRING([FirstName, LastName], \" \")) STORED,\n) PRIMARY KEY (Id);"): filename:6: got "(" while expecting ")"
| 1.0 | spanner/spansql: generated columns - **Client**
spansql Version v1.13.0
**Environment**
windows
**Go Environment**
$ go version : go version go1.13 windows/amd64
$ go env
If you extract the following example
CREATE TABLE users (
Id STRING(20) NOT NULL,
FirstName STRING(50),
LastName STRING(50),
Age INT64 NOT NULL,
FullName STRING(100) AS (ARRAY_TO_STRING([FirstName, LastName], " ")) STORED,
) PRIMARY KEY (Id);
You got error : ParseDDL("CREATE TABLE users (\n\tId STRING(20) NOT NULL,\n\tFirstName STRING(50),\n\tLastName STRING(50),\n\tAge INT64 NOT NULL,\n\tFullName STRING(100) AS (ARRAY_TO_STRING([FirstName, LastName], \" \")) STORED,\n) PRIMARY KEY (Id);"): filename:6: got "(" while expecting ")"
| priority | spanner spansql generated columns client spansql version environment windows go environment go version go version windows go env if you extract the following example create table users id string not null firstname string lastname string age not null fullname string as array to string stored primary key id you got error parseddl create table users n tid string not null n tfirstname string n tlastname string n tage not null n tfullname string as array to string stored n primary key id filename got while expecting | 1 |
214,713 | 16,607,733,925 | IssuesEvent | 2021-06-02 07:07:43 | PlaceOS/drivers | https://api.github.com/repos/PlaceOS/drivers | closed | Stienel Driver | priority: high status: requires testing type: driver | **Driver Type**
Device
**Manufacturer**
Stienel
**Model/Service**
Model or Service
**Link to or Attach Device API or Protocol**
https://drive.google.com/drive/folders/1oLiUoWW5CSWyWPwTrDqQT-8SnBdPtc1N
**Describe any desired functionality**
- Control all aspects of device
**Additional context**
Add any other context about the driver request here.
| 1.0 | Stienel Driver - **Driver Type**
Device
**Manufacturer**
Stienel
**Model/Service**
Model or Service
**Link to or Attach Device API or Protocol**
https://drive.google.com/drive/folders/1oLiUoWW5CSWyWPwTrDqQT-8SnBdPtc1N
**Describe any desired functionality**
- Control all aspects of device
**Additional context**
Add any other context about the driver request here.
| non_priority | stienel driver driver type device manufacturer stienel model service model or service link to or attach device api or protocol describe any desired functionality control all aspects of device additional context add any other context about the driver request here | 0 |
755,746 | 26,438,359,228 | IssuesEvent | 2023-01-15 17:17:01 | xLPMG/EMailClient-GUI | https://api.github.com/repos/xLPMG/EMailClient-GUI | opened | pop3 folder scan does not work | bug low priority | MailReceiver.java:
`for (Folder folder : folders) {
System.out.println(folder.getFullName() + ": " + folder.getMessageCount());
}`
only outputs `INBOX: -1`
| 1.0 | pop3 folder scan does not work - MailReceiver.java:
`for (Folder folder : folders) {
System.out.println(folder.getFullName() + ": " + folder.getMessageCount());
}`
only outputs `INBOX: -1`
 | priority | folder scan does not work mailreceiver java for folder folder folders system out println folder getfullname folder getmessagecount only outputs inbox | 1 |
741,963 | 25,829,766,897 | IssuesEvent | 2022-12-12 15:20:06 | limesquid/favicon-thief | https://api.github.com/repos/limesquid/favicon-thief | closed | HTTP 403 - index page | Priority - Should have Type - Bug | - [x] brandbucket.com (quasi-fixed by a heuristic that guesses favicon URL)
- [ ] pixabay.com
- [ ] wa.me
- [ ] pexels.com
----
Found in https://github.com/limesquid/favicon-thief/pull/6 | 1.0 | HTTP 403 - index page - - [x] brandbucket.com (quasi-fixed by a heuristic that guesses favicon URL)
- [ ] pixabay.com
- [ ] wa.me
- [ ] pexels.com
----
Found in https://github.com/limesquid/favicon-thief/pull/6 | priority | http index page brandbucket com quasi fixed by a heuristic that guesses favicon url pixabay com wa me pexels com found in | 1 |
125,389 | 26,650,904,447 | IssuesEvent | 2023-01-25 13:37:26 | OudayAhmed/Assignment-1-DECIDE | https://api.github.com/repos/OudayAhmed/Assignment-1-DECIDE | opened | CMV-2 | code | Description: Implement a method for DECIDE() with PI and EPSILON as parameters.
There exists at least one set of three consecutive data points which form an angle such that:
angle < (PI−EPSILON)
or
angle > (PI+EPSILON)
The second of the three consecutive points is always the vertex of the angle. If either the first
point or the last point (or both) coincides with the vertex, the angle is undefined and the LIC
is not satisfied by those three points.
(0 ≤ EPSILON < PI) | 1.0 | CMV-2 - Description: Implement a method for DECIDE() with PI and EPSILON as parameters.
There exists at least one set of three consecutive data points which form an angle such that:
angle < (PI−EPSILON)
or
angle > (PI+EPSILON)
The second of the three consecutive points is always the vertex of the angle. If either the first
point or the last point (or both) coincides with the vertex, the angle is undefined and the LIC
is not satisfied by those three points.
(0 ≤ EPSILON < PI) | non_priority | cmv description implement a method for decide with pi and epsilon as parameters there exists at least one set of three consecutive data points which form an angle such that angle pi−epsilon or angle pi epsilon the second of the three consecutive points is always the vertex of the angle if either the first point or the last point or both coincides with the vertex the angle is undefined and the lic is not satisfied by those three points ≤ epsilon pi | 0 |
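The LIC described in the row above (three consecutive points whose angle at the middle vertex falls outside [PI−EPSILON, PI+EPSILON], undefined when an endpoint coincides with the vertex) can be sketched in Python. The function name, point representation, and return convention here are my own assumptions, not part of the referenced assignment:

```python
import math

def lic2(points, epsilon):
    """Return True if any three consecutive points form an angle at the
    middle point (the vertex) with angle < PI - epsilon or
    angle > PI + epsilon. The angle is undefined (and the condition not
    met) when either outer point coincides with the vertex.
    Assumes 0 <= epsilon < PI."""
    for (x1, y1), (x2, y2), (x3, y3) in zip(points, points[1:], points[2:]):
        if (x1, y1) == (x2, y2) or (x3, y3) == (x2, y2):
            continue  # angle undefined: an endpoint coincides with the vertex
        # Angle between the rays vertex->first and vertex->last, in [0, PI].
        angle = abs(math.atan2(y1 - y2, x1 - x2) - math.atan2(y3 - y2, x3 - x2))
        if angle > math.pi:
            angle = 2 * math.pi - angle
        if angle < math.pi - epsilon or angle > math.pi + epsilon:
            return True
    return False
```

Since the computed angle lies in [0, PI], the `angle > PI + EPSILON` branch can never fire for EPSILON ≥ 0; it is kept only to mirror the condition as stated.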
93,626 | 15,895,508,843 | IssuesEvent | 2021-04-11 14:14:47 | rammatzkvosky/123 | https://api.github.com/repos/rammatzkvosky/123 | opened | CVE-2020-36189 (High) detected in jackson-databind-2.8.8.jar | security vulnerability | ## CVE-2020-36189 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: 123/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.8/jackson-databind-2.8.8.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/123/commit/d30fca67e915548a9a9d8fbcfe3506cf080ce3b5">d30fca67e915548a9a9d8fbcfe3506cf080ce3b5</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.DriverManagerConnectionSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36189>CVE-2020-36189</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2996">https://github.com/FasterXML/jackson-databind/issues/2996</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.8","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-36189","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.DriverManagerConnectionSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36189","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-36189 (High) detected in jackson-databind-2.8.8.jar - ## CVE-2020-36189 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: 123/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.8/jackson-databind-2.8.8.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/123/commit/d30fca67e915548a9a9d8fbcfe3506cf080ce3b5">d30fca67e915548a9a9d8fbcfe3506cf080ce3b5</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.DriverManagerConnectionSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36189>CVE-2020-36189</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2996">https://github.com/FasterXML/jackson-databind/issues/2996</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.8","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-36189","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.DriverManagerConnectionSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36189","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_priority | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch main vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com newrelic agent deps ch qos logback core db drivermanagerconnectionsource publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity 
impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com newrelic agent deps ch qos logback core db drivermanagerconnectionsource vulnerabilityurl | 0 |
275,902 | 8,582,018,477 | IssuesEvent | 2018-11-13 16:00:43 | blackbaud/skyux2 | https://api.github.com/repos/blackbaud/skyux2 | closed | Set focus to first focusable element when modal opens | Priority: Low accessibility sky-modal | From @Blackbaud-MattGregg SPA Navigation guidelines.
By default, focus should be set to first focusable element when a modal opens. The developer should be able to override this if necessary.
Will then need to add documentation about how to handle focus when the first focusable element would cause the top of the content to scroll out of view. | 1.0 | Set focus to first focusable element when modal opens - From @Blackbaud-MattGregg SPA Navigation guidelines.
By default, focus should be set to first focusable element when a modal opens. The developer should be able to override this if necessary.
Will then need to add documentation about how to handle focus when the first focusable element would cause the top of the content to scroll out of view. | priority | set focus to first focusable element when modal opens from blackbaud mattgregg spa navigation guidelines by default focus should be set to first focusable element when a modal opens the developer should be able to override this if necessary will then need to add documentation about how to handle focus when the first focusable element would cause the top of the content to scroll out of view | 1 |
241,965 | 20,176,525,020 | IssuesEvent | 2022-02-10 14:57:11 | mennaelkashef/eShop | https://api.github.com/repos/mennaelkashef/eShop | opened | No description entered by the user. | Hello! RULE-GOT-APPLIED DOES-NOT-CONTAIN-STRING Rule-works-on-convert-to-bug test instabug | # :clipboard: Bug Details
>No description entered by the user.
key | value
--|--
Reported At | 2022-02-10 14:49:56 UTC
Email | imohamady@instabug.com
Categories | Report a bug
Tags | test, Hello!, RULE-GOT-APPLIED, DOES-NOT-CONTAIN-STRING, Rule-works-on-convert-to-bug, instabug
App Version | 1.1 (1)
Session Duration | 29
Device | Google AOSP on IA Emulator, OS Level 28
Display | 1080x2160 (xhdpi)
Location | Giza, Egypt (en)
## :point_right: [View Full Bug Report on Instabug](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8416?utm_source=github&utm_medium=integrations) :point_left:
___
# :iphone: View Hierarchy
This bug was reported from **com.example.app.developerOption.DeveloperOptionFragment**
Find its interactive view hierarchy with all its subviews here: :point_right: **[Check View Hierarchy](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8416?show-hierarchy-view=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :chart_with_downwards_trend: Session Profiler
Here is what the app was doing right before the bug was reported:
Key | Value
--|--
Used Memory | 51.7% - 0.75/1.46 GB
Used Storage | 13.1% - 0.76/5.81 GB
Connectivity | LTE - Android
Battery | 100% - unplugged
Orientation | portrait
Find all the changes that happened in the parameters mentioned above during the last 60 seconds before the bug was reported here: :point_right: **[View Full Session Profiler](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8416?show-session-profiler=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :bust_in_silhouette: User Info
### User Attributes
```
key_name 1258198343: key value bla bla bla la
key_name 1506889500: key value bla bla bla la
key_name -1350052564: key value bla bla bla la
key_name -79429900: key value bla bla bla la
key_name 561177101: key value bla bla bla la
key_name -2126000422: key value bla bla bla la
key_name -205819675: key value bla bla bla la
key_name -2042872541: key value bla bla bla la
key_name -393592358: key value bla bla bla la
key_name 2029529228: key value bla bla bla la
key_name -1463133082: key value bla bla bla la
key_name 1563755437: key value bla bla bla la
key_name 2089213946: key value bla bla bla la
key_name -1437521870: key value bla bla bla la
```
___
# :mag_right: Logs
### User Steps
Here are the last 10 steps done by the user right before the bug was reported:
```
14:49:44 In activity com.example.app.main.MainActivity: fragment com.example.app.main.MainFragment was stopped.
14:49:44 In activity com.example.app.main.MainActivity: fragment com.example.app.developerOption.DeveloperOptionFragment was attached.
14:49:44 In activity com.example.app.main.MainActivity: fragment com.example.app.developerOption.DeveloperOptionFragment was created.
14:49:44 In activity com.example.app.main.MainActivity: fragment com.example.app.developerOption.DeveloperOptionFragment was started.
14:49:44 In activity com.example.app.main.MainActivity: fragment com.example.app.developerOption.DeveloperOptionFragment was resumed.
14:49:45 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
14:49:53 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
14:49:53 com.example.app.main.MainActivity was paused.
14:49:53 In activity com.example.app.main.MainActivity: fragment com.example.app.developerOption.DeveloperOptionFragment was paused.
14:49:55 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
```
Find all the user steps done by the user throughout the session here: :point_right: **[View All User Steps](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8416?show-logs=user_steps&utm_source=github&utm_medium=integrations)** :point_left:
### Console Log
Here are the last 10 console logs logged right before the bug was reported:
```
14:50:29 D/Instabug - APM(21769): Total duration: 490 ms
14:50:29 D/Instabug - APM(21769): Status code: 200.
14:50:29 D/Instabug - APM(21769): Attributes: {}
14:50:29 D/Instabug - APM(21769): Request [POST] https://api.instabug.com/api/sdk/v3/chats/sync has succeeded.
14:50:29 D/Instabug - APM(21769): Total duration: 490 ms
14:50:29 D/Instabug - APM(21769): Status code: 200.
14:50:29 D/Instabug - APM(21769): Attributes: {}
14:50:31 V/FA (21769): Inactivity, disconnecting from the service
14:50:31 I/com.example.ap(21769): Explicit concurrent copying GC freed 77012(3MB) AllocSpace objects, 17(488KB) LOS objects, 50% free, 4MB/9MB, paused 1.059ms total 59.811ms
14:51:21 D/IB-AttachmentsUtility(21769): encryptAttachments
```
Find all the logged console logs throughout the session here: :point_right: **[View All Console Log](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8416?show-logs=console_log&utm_source=github&utm_medium=integrations)** :point_left:
___
# :camera: Images
[](https://d38gnqwzxziyyy.cloudfront.net/attachments/bugs/17837483/70c2243e4cff9503a3a92f19aa2ebb7a_original/24983234/bug_1644504593852_.jpg?Expires=4800178630&Signature=IArDFHA4W5-1eCQJAPVnORt3tBg9gS~eVdG3hdhatt9kBMue-DwswQlG-jwZyH5xiXCKKAkw~FXgKC9mSo1PU7RRFaTLATaXs~cAQfKO8LGrH50q4m1B4LqFirxmVtznL~qHfsUVd6KX-qpD2Q8ftpfkiWW7gOvTB-6jGg9~RL9oYEkWpTyHXmGUxyX~yf2M-mZrmAwh-iLc0o5x7PjuDjVIzVy~8cZkWEjHtxlaUYFdCQtN3TDU2XVqJ9O7aOaLbR9~C3ga4xuMgZQ8u6yrvWVu3IoWPstkCxI2sZn6spfIZMNJcEwiM8ZHnrn~hFq3HjzmB4cKDOk2m-vqMBQSTw__&Key-Pair-Id=APKAIXAG65U6UUX7JAQQ)
___
# :warning: Looking for More Details?
1. **Network Log**: we are unable to capture your network requests automatically. If you are using HttpUrlConnection or Okhttp requests, [**check the details mentioned here**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations#section-network-logs).
2. **User Events**: start capturing custom User Events to send them along with each report. [**Find all the details in the docs**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations).
3. **Instabug Log**: start adding Instabug logs to see them right inside each report you receive. [**Find all the details in the docs**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations). | 1.0 | No description entered by the user. - # :clipboard: Bug Details
>No description entered by the user.
key | value
--|--
Reported At | 2022-02-10 14:49:56 UTC
Email | imohamady@instabug.com
Categories | Report a bug
Tags | test, Hello!, RULE-GOT-APPLIED, DOES-NOT-CONTAIN-STRING, Rule-works-on-convert-to-bug, instabug
App Version | 1.1 (1)
Session Duration | 29
Device | Google AOSP on IA Emulator, OS Level 28
Display | 1080x2160 (xhdpi)
Location | Giza, Egypt (en)
## :point_right: [View Full Bug Report on Instabug](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8416?utm_source=github&utm_medium=integrations) :point_left:
___
# :iphone: View Hierarchy
This bug was reported from **com.example.app.developerOption.DeveloperOptionFragment**
Find its interactive view hierarchy with all its subviews here: :point_right: **[Check View Hierarchy](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8416?show-hierarchy-view=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :chart_with_downwards_trend: Session Profiler
Here is what the app was doing right before the bug was reported:
Key | Value
--|--
Used Memory | 51.7% - 0.75/1.46 GB
Used Storage | 13.1% - 0.76/5.81 GB
Connectivity | LTE - Android
Battery | 100% - unplugged
Orientation | portrait
Find all the changes that happened in the parameters mentioned above during the last 60 seconds before the bug was reported here: :point_right: **[View Full Session Profiler](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8416?show-session-profiler=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :bust_in_silhouette: User Info
### User Attributes
```
key_name 1258198343: key value bla bla bla la
key_name 1506889500: key value bla bla bla la
key_name -1350052564: key value bla bla bla la
key_name -79429900: key value bla bla bla la
key_name 561177101: key value bla bla bla la
key_name -2126000422: key value bla bla bla la
key_name -205819675: key value bla bla bla la
key_name -2042872541: key value bla bla bla la
key_name -393592358: key value bla bla bla la
key_name 2029529228: key value bla bla bla la
key_name -1463133082: key value bla bla bla la
key_name 1563755437: key value bla bla bla la
key_name 2089213946: key value bla bla bla la
key_name -1437521870: key value bla bla bla la
```
___
# :mag_right: Logs
### User Steps
Here are the last 10 steps done by the user right before the bug was reported:
```
14:49:44 In activity com.example.app.main.MainActivity: fragment com.example.app.main.MainFragment was stopped.
14:49:44 In activity com.example.app.main.MainActivity: fragment com.example.app.developerOption.DeveloperOptionFragment was attached.
14:49:44 In activity com.example.app.main.MainActivity: fragment com.example.app.developerOption.DeveloperOptionFragment was created.
14:49:44 In activity com.example.app.main.MainActivity: fragment com.example.app.developerOption.DeveloperOptionFragment was started.
14:49:44 In activity com.example.app.main.MainActivity: fragment com.example.app.developerOption.DeveloperOptionFragment was resumed.
14:49:45 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
14:49:53 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
14:49:53 com.example.app.main.MainActivity was paused.
14:49:53 In activity com.example.app.main.MainActivity: fragment com.example.app.developerOption.DeveloperOptionFragment was paused.
14:49:55 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
```
Find all the user steps done by the user throughout the session here: :point_right: **[View All User Steps](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8416?show-logs=user_steps&utm_source=github&utm_medium=integrations)** :point_left:
### Console Log
Here are the last 10 console logs logged right before the bug was reported:
```
14:50:29 D/Instabug - APM(21769): Total duration: 490 ms
14:50:29 D/Instabug - APM(21769): Status code: 200.
14:50:29 D/Instabug - APM(21769): Attributes: {}
14:50:29 D/Instabug - APM(21769): Request [POST] https://api.instabug.com/api/sdk/v3/chats/sync has succeeded.
14:50:29 D/Instabug - APM(21769): Total duration: 490 ms
14:50:29 D/Instabug - APM(21769): Status code: 200.
14:50:29 D/Instabug - APM(21769): Attributes: {}
14:50:31 V/FA (21769): Inactivity, disconnecting from the service
14:50:31 I/com.example.ap(21769): Explicit concurrent copying GC freed 77012(3MB) AllocSpace objects, 17(488KB) LOS objects, 50% free, 4MB/9MB, paused 1.059ms total 59.811ms
14:51:21 D/IB-AttachmentsUtility(21769): encryptAttachments
```
Find all the logged console logs throughout the session here: :point_right: **[View All Console Log](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8416?show-logs=console_log&utm_source=github&utm_medium=integrations)** :point_left:
___
# :camera: Images
[](https://d38gnqwzxziyyy.cloudfront.net/attachments/bugs/17837483/70c2243e4cff9503a3a92f19aa2ebb7a_original/24983234/bug_1644504593852_.jpg?Expires=4800178630&Signature=IArDFHA4W5-1eCQJAPVnORt3tBg9gS~eVdG3hdhatt9kBMue-DwswQlG-jwZyH5xiXCKKAkw~FXgKC9mSo1PU7RRFaTLATaXs~cAQfKO8LGrH50q4m1B4LqFirxmVtznL~qHfsUVd6KX-qpD2Q8ftpfkiWW7gOvTB-6jGg9~RL9oYEkWpTyHXmGUxyX~yf2M-mZrmAwh-iLc0o5x7PjuDjVIzVy~8cZkWEjHtxlaUYFdCQtN3TDU2XVqJ9O7aOaLbR9~C3ga4xuMgZQ8u6yrvWVu3IoWPstkCxI2sZn6spfIZMNJcEwiM8ZHnrn~hFq3HjzmB4cKDOk2m-vqMBQSTw__&Key-Pair-Id=APKAIXAG65U6UUX7JAQQ)
___
# :warning: Looking for More Details?
1. **Network Log**: we are unable to capture your network requests automatically. If you are using HttpUrlConnection or Okhttp requests, [**check the details mentioned here**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations#section-network-logs).
2. **User Events**: start capturing custom User Events to send them along with each report. [**Find all the details in the docs**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations).
3. **Instabug Log**: start adding Instabug logs to see them right inside each report you receive. [**Find all the details in the docs**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations). | non_priority | no description entered by the user clipboard bug details no description entered by the user key value reported at utc email imohamady instabug com categories report a bug tags test hello rule got applied does not contain string rule works on convert to bug instabug app version session duration device google aosp on ia emulator os level display xhdpi location giza egypt en point right point left iphone view hierarchy this bug was reported from com example app developeroption developeroptionfragment find its interactive view hierarchy with all its subviews here point right point left chart with downwards trend session profiler here is what the app was doing right before the bug was reported key value used memory gb used storage gb connectivity lte android battery unplugged orientation portrait find all the changes that happened in the parameters mentioned above during the last seconds before the bug was reported here point right point left bust in silhouette user info user attributes key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la mag right logs user steps here are the last steps done by the user right before the bug was reported in activity com example app main mainactivity fragment com example app main mainfragment was stopped in activity com example app main mainactivity fragment com example app 
developeroption developeroptionfragment was attached in activity com example app main mainactivity fragment com example app developeroption developeroptionfragment was created in activity com example app main mainactivity fragment com example app developeroption developeroptionfragment was started in activity com example app main mainactivity fragment com example app developeroption developeroptionfragment was resumed tap in androidx constraintlayout widget constraintlayout in com example app main mainactivity tap in androidx constraintlayout widget constraintlayout in com example app main mainactivity com example app main mainactivity was paused in activity com example app main mainactivity fragment com example app developeroption developeroptionfragment was paused tap in androidx constraintlayout widget constraintlayout in com example app main mainactivity find all the user steps done by the user throughout the session here point right point left console log here are the last console logs logged right before the bug was reported d instabug apm total duration ms d instabug apm status code d instabug apm attributes d instabug apm request has succeeded d instabug apm total duration ms d instabug apm status code d instabug apm attributes v fa inactivity disconnecting from the service i com example ap explicit concurrent copying gc freed allocspace objects los objects free paused total d ib attachmentsutility encryptattachments find all the logged console logs throughout the session here point right point left camera images warning looking for more details network log we are unable to capture your network requests automatically if you are using httpurlconnection or okhttp requests user events start capturing custom user events to send them along with each report instabug log start adding instabug logs to see them right inside each report you receive | 0 |
347,299 | 31,154,858,651 | IssuesEvent | 2023-08-16 12:31:18 | epam/ketcher | https://api.github.com/repos/epam/ketcher | opened | Autotests: Manipulations with Bond Tool #1734 | Autotests | - Add tests to Bond Tool
- Plan to automate tests
| 1.0 | Autotests: Manipulations with Bond Tool #1734 - - Add tests to Bond Tool
- Plan to automate tests
| non_priority | autotests manipulations with bond tool add tests to bond tool plan to automate tests | 0 |
318,076 | 27,283,560,863 | IssuesEvent | 2023-02-23 11:51:56 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Godot 4 errors in exported debug file and gpu driver crash in editor after some time | bug topic:rendering needs testing topic:3d | ### Godot version
v4.0.dev.calinou [78f0d2d1d]
### System information
Win 10 64, AMD RX 480
### Issue description
While testing "Godot 4.0 AAA Graphics? The Junk Shop", after exporting the project to a debug exe, some spam errors show up (some of them are known). I disabled the environment effects and lowered some other settings to increase the FPS.
godot log:
> ERROR: Attempted to free invalid ID: 2100239007746
> at: _free_internal (drivers/vulkan/rendering_device_vulkan.cpp:8313) - Attempted to free invalid ID: 2100239007746
> CORE API HASH: 13245607664466767104
> EDITOR API HASH: 358842281
> Loaded builtin certs
> Using present mode: VK_PRESENT_MODE_FIFO_KHR
> ERROR: Attempted to use an unused shader variant (shader is null),
> at: get_render_pipeline (./servers/rendering/renderer_rd/pipeline_cache_rd.h:75) - Condition "shader.is_null()" is true. Returning: RID()
> ERROR: Condition "!pipeline" is true.
> at: draw_list_bind_render_pipeline (drivers/vulkan/rendering_device_vulkan.cpp:7078) - Condition "!pipeline" is true.
> ERROR: The vertex format used to create the pipeline does not match the vertex format bound.
> at: draw_list_draw (drivers/vulkan/rendering_device_vulkan.cpp:7285) - Condition "dl->validation.pipeline_vertex_format != dl->validation.vertex_format" is true.
> Project FPS: 1 (1000.0 mspf)
> ERROR: Condition "!rb" is true. Returning: false
> at: render_buffers_has_volumetric_fog (servers/rendering/renderer_rd/renderer_scene_render_rd.cpp:2211) - Condition "!rb" is true. Returning: false
> ERROR: Attempted to use an unused shader variant (shader is null),
> at: get_render_pipeline (./servers/rendering/renderer_rd/pipeline_cache_rd.h:75) - Condition "shader.is_null()" is true. Returning: RID()
> ERROR: Condition "!pipeline" is true.
> at: draw_list_bind_render_pipeline (drivers/vulkan/rendering_device_vulkan.cpp:7078) - Condition "!pipeline" is true.
> ERROR: The vertex format used to create the pipeline does not match the vertex format bound.
> at: draw_list_draw (drivers/vulkan/rendering_device_vulkan.cpp:7285) - Condition "dl->validation.pipeline_vertex_format != dl->validation.vertex_format" is true.
> Project FPS: 1 (1000.0 mspf)
> ERROR: Condition "!rb" is true. Returning: false
> at: render_buffers_has_volumetric_fog (servers/rendering/renderer_rd/renderer_scene_render_rd.cpp:2211) - Condition "!rb" is true. Returning: false
> ERROR: Condition "!pipeline" is true.
> at: draw_list_bind_render_pipeline (drivers/vulkan/rendering_device_vulkan.cpp:7078) - Condition "!pipeline" is true.
> ERROR: The vertex format used to create the pipeline does not match the vertex format bound.
> at: draw_list_draw (drivers/vulkan/rendering_device_vulkan.cpp:7285) - Condition "dl->validation.pipeline_vertex_format != dl->validation.vertex_format" is true.
> ERROR: Attempted to use an unused shader variant (shader is null),
> at: get_render_pipeline (./servers/rendering/renderer_rd/pipeline_cache_rd.h:75) - Condition "shader.is_null()" is true. Returning: RID()
> ERROR: Condition "!pipeline" is true.
> at: draw_list_bind_render_pipeline (drivers/vulkan/rendering_device_vulkan.cpp:7078) - Condition "!pipeline" is true.
> ERROR: The vertex format used to create the pipeline does not match the vertex format bound.
> at: draw_list_draw (drivers/vulkan/rendering_device_vulkan.cpp:7285) - Condition "dl->validation.pipeline_vertex_format != dl->validation.vertex_format" is true.
> ERROR: Condition "!material" is true.
> at: material_update_dependency (servers/rendering/renderer_rd/renderer_storage_rd.cpp:1682) - Condition "!material" is true.
> ERROR: Attempted to free invalid ID: 0
> at: _free_internal (drivers/vulkan/rendering_device_vulkan.cpp:8313) - Attempted to free invalid ID: 0
> ERROR: Attempted to free invalid ID: 0
> ERROR: 1 RID allocations of type 'N17RendererStorageRD7TextureE' were leaked at exit.
> WARNING: 1 RID of type "UniformBuffer" was leaked.
> at: _free_rids (drivers/vulkan/rendering_device_vulkan.cpp:8836) - 1 RID of type "UniformBuffer" was leaked.
> WARNING: 1 RID of type "VertexArray" was leaked.
> at: _free_rids (drivers/vulkan/rendering_device_vulkan.cpp:8836) - 1 RID of type "VertexArray" was leaked.
> WARNING: 6 RIDs of type "Texture" were leaked.
> at: finalize (drivers/vulkan/rendering_device_vulkan.cpp:9055) - 6 RIDs of type "Texture" were leaked.
> ERROR: Condition "p_ptr == nullptr" is true.
> at: free_static (core/os/memory.cpp:148) - Condition "p_ptr == nullptr" is true.
> ERROR: Condition "p_ptr == nullptr" is true.
> at: free_static (core/os/memory.cpp:148) - Condition "p_ptr == nullptr" is true.
> StringName: 19 unclaimed string names at exit.
Also, after testing the project once, either by playing it or just in the editor, my AMD GPU driver crashes.
one more log from the previous nightly build:
> ERROR: Condition "!rb" is true. Returning: false
> at: render_buffers_has_volumetric_fog (servers/rendering/renderer_rd/renderer_scene_render_rd.cpp:2094) - Condition "!rb" is true. Returning: false
> ERROR: Condition "!rb" is true. Returning: false
> at: render_buffers_has_volumetric_fog (servers/rendering/renderer_rd/renderer_scene_render_rd.cpp:2094) - Condition "!rb" is true. Returning: false
> ERROR: Condition "!rb" is true. Returning: false
> at: render_buffers_has_volumetric_fog (servers/rendering/renderer_rd/renderer_scene_render_rd.cpp:2094) - Condition "!rb" is true. Returning: false
> Project FPS: 22 (45.4 mspf)
> Project FPS: 24 (41.6 mspf)
> Project FPS: 42 (23.8 mspf)
> Project FPS: 49 (20.4 mspf)
> ERROR: Condition "err" is true. Returning: ERR_CANT_CREATE
> at: swap_buffers (drivers/vulkan/vulkan_context.cpp:1856) - Condition "err" is true. Returning: ERR_CANT_CREATE
> ERROR: vmaMapMemory failed with error -5.
> at: _buffer_update (drivers/vulkan/rendering_device_vulkan.cpp:1602) - Condition "vkerr" is true. Returning: ERR_CANT_CREATE
> ERROR: vmaMapMemory failed with error -5.
> at: _buffer_update (drivers/vulkan/rendering_device_vulkan.cpp:1602) - Condition "vkerr" is true. Returning: ERR_CANT_CREATE
> ERROR: vmaMapMemory failed with error -5.
> at: _buffer_update (drivers/vulkan/rendering_device_vulkan.cpp:1602) - Condition "vkerr" is true. Returning: ERR_CANT_CREATE
> ERROR: vmaMapMemory failed with error -5.
> at: _buffer_update (drivers/vulkan/rendering_device_vulkan.cpp:1602) - Condition "vkerr" is true. Returning: ERR_CANT_CREATE
> ERROR: vmaMapMemory failed with error -5.
> at: _buffer_update (drivers/vulkan/rendering_device_vulkan.cpp:1602) - Condition "vkerr" is true. Returning: ERR_CANT_CREATE
> ERROR: vmaMapMemory failed with error -5.
### Steps to reproduce
...
### Minimal reproduction project
https://github.com/everythingandnothingdev/godot-blender-2.81-splash | 1.0 | Godot 4 errors in exported debug file and gpu driver crash in editor after some time - ### Godot version
v4.0.dev.calinou [78f0d2d1d]
### System information
Win 10 64, AMD RX 480
### Issue description
While testing "Godot 4.0 AAA Graphics? The Junk Shop", after exporting the project to a debug exe, some spam errors show up (some of them are known). I disabled the environment effects and lowered some other settings to increase the FPS.
godot log:
> ERROR: Attempted to free invalid ID: 2100239007746
> at: _free_internal (drivers/vulkan/rendering_device_vulkan.cpp:8313) - Attempted to free invalid ID: 2100239007746
> CORE API HASH: 13245607664466767104
> EDITOR API HASH: 358842281
> Loaded builtin certs
> Using present mode: VK_PRESENT_MODE_FIFO_KHR
> ERROR: Attempted to use an unused shader variant (shader is null),
> at: get_render_pipeline (./servers/rendering/renderer_rd/pipeline_cache_rd.h:75) - Condition "shader.is_null()" is true. Returning: RID()
> ERROR: Condition "!pipeline" is true.
> at: draw_list_bind_render_pipeline (drivers/vulkan/rendering_device_vulkan.cpp:7078) - Condition "!pipeline" is true.
> ERROR: The vertex format used to create the pipeline does not match the vertex format bound.
> at: draw_list_draw (drivers/vulkan/rendering_device_vulkan.cpp:7285) - Condition "dl->validation.pipeline_vertex_format != dl->validation.vertex_format" is true.
> Project FPS: 1 (1000.0 mspf)
> ERROR: Condition "!rb" is true. Returning: false
> at: render_buffers_has_volumetric_fog (servers/rendering/renderer_rd/renderer_scene_render_rd.cpp:2211) - Condition "!rb" is true. Returning: false
> ERROR: Attempted to use an unused shader variant (shader is null),
> at: get_render_pipeline (./servers/rendering/renderer_rd/pipeline_cache_rd.h:75) - Condition "shader.is_null()" is true. Returning: RID()
> ERROR: Condition "!pipeline" is true.
> at: draw_list_bind_render_pipeline (drivers/vulkan/rendering_device_vulkan.cpp:7078) - Condition "!pipeline" is true.
> ERROR: The vertex format used to create the pipeline does not match the vertex format bound.
> at: draw_list_draw (drivers/vulkan/rendering_device_vulkan.cpp:7285) - Condition "dl->validation.pipeline_vertex_format != dl->validation.vertex_format" is true.
> Project FPS: 1 (1000.0 mspf)
> ERROR: Condition "!rb" is true. Returning: false
> at: render_buffers_has_volumetric_fog (servers/rendering/renderer_rd/renderer_scene_render_rd.cpp:2211) - Condition "!rb" is true. Returning: false
> ERROR: Condition "!pipeline" is true.
> at: draw_list_bind_render_pipeline (drivers/vulkan/rendering_device_vulkan.cpp:7078) - Condition "!pipeline" is true.
> ERROR: The vertex format used to create the pipeline does not match the vertex format bound.
> at: draw_list_draw (drivers/vulkan/rendering_device_vulkan.cpp:7285) - Condition "dl->validation.pipeline_vertex_format != dl->validation.vertex_format" is true.
> ERROR: Attempted to use an unused shader variant (shader is null),
> at: get_render_pipeline (./servers/rendering/renderer_rd/pipeline_cache_rd.h:75) - Condition "shader.is_null()" is true. Returning: RID()
> ERROR: Condition "!pipeline" is true.
> at: draw_list_bind_render_pipeline (drivers/vulkan/rendering_device_vulkan.cpp:7078) - Condition "!pipeline" is true.
> ERROR: The vertex format used to create the pipeline does not match the vertex format bound.
> at: draw_list_draw (drivers/vulkan/rendering_device_vulkan.cpp:7285) - Condition "dl->validation.pipeline_vertex_format != dl->validation.vertex_format" is true.
> ERROR: Condition "!material" is true.
> at: material_update_dependency (servers/rendering/renderer_rd/renderer_storage_rd.cpp:1682) - Condition "!material" is true.
> ERROR: Attempted to free invalid ID: 0
> at: _free_internal (drivers/vulkan/rendering_device_vulkan.cpp:8313) - Attempted to free invalid ID: 0
> ERROR: Attempted to free invalid ID: 0
> ERROR: 1 RID allocations of type 'N17RendererStorageRD7TextureE' were leaked at exit.
> WARNING: 1 RID of type "UniformBuffer" was leaked.
> at: _free_rids (drivers/vulkan/rendering_device_vulkan.cpp:8836) - 1 RID of type "UniformBuffer" was leaked.
> WARNING: 1 RID of type "VertexArray" was leaked.
> at: _free_rids (drivers/vulkan/rendering_device_vulkan.cpp:8836) - 1 RID of type "VertexArray" was leaked.
> WARNING: 6 RIDs of type "Texture" were leaked.
> at: finalize (drivers/vulkan/rendering_device_vulkan.cpp:9055) - 6 RIDs of type "Texture" were leaked.
> ERROR: Condition "p_ptr == nullptr" is true.
> at: free_static (core/os/memory.cpp:148) - Condition "p_ptr == nullptr" is true.
> ERROR: Condition "p_ptr == nullptr" is true.
> at: free_static (core/os/memory.cpp:148) - Condition "p_ptr == nullptr" is true.
> StringName: 19 unclaimed string names at exit.
Also, when testing the project by playing it or just working in the editor, my AMD GPU driver crashes.
One more log from the previous nightly build:
> ERROR: Condition "!rb" is true. Returning: false
> at: render_buffers_has_volumetric_fog (servers/rendering/renderer_rd/renderer_scene_render_rd.cpp:2094) - Condition "!rb" is true. Returning: false
> ERROR: Condition "!rb" is true. Returning: false
> at: render_buffers_has_volumetric_fog (servers/rendering/renderer_rd/renderer_scene_render_rd.cpp:2094) - Condition "!rb" is true. Returning: false
> ERROR: Condition "!rb" is true. Returning: false
> at: render_buffers_has_volumetric_fog (servers/rendering/renderer_rd/renderer_scene_render_rd.cpp:2094) - Condition "!rb" is true. Returning: false
> Project FPS: 22 (45.4 mspf)
> Project FPS: 24 (41.6 mspf)
> Project FPS: 42 (23.8 mspf)
> Project FPS: 49 (20.4 mspf)
> ERROR: Condition "err" is true. Returning: ERR_CANT_CREATE
> at: swap_buffers (drivers/vulkan/vulkan_context.cpp:1856) - Condition "err" is true. Returning: ERR_CANT_CREATE
> ERROR: vmaMapMemory failed with error -5.
> at: _buffer_update (drivers/vulkan/rendering_device_vulkan.cpp:1602) - Condition "vkerr" is true. Returning: ERR_CANT_CREATE
> ERROR: vmaMapMemory failed with error -5.
> at: _buffer_update (drivers/vulkan/rendering_device_vulkan.cpp:1602) - Condition "vkerr" is true. Returning: ERR_CANT_CREATE
> ERROR: vmaMapMemory failed with error -5.
> at: _buffer_update (drivers/vulkan/rendering_device_vulkan.cpp:1602) - Condition "vkerr" is true. Returning: ERR_CANT_CREATE
> ERROR: vmaMapMemory failed with error -5.
> at: _buffer_update (drivers/vulkan/rendering_device_vulkan.cpp:1602) - Condition "vkerr" is true. Returning: ERR_CANT_CREATE
> ERROR: vmaMapMemory failed with error -5.
> at: _buffer_update (drivers/vulkan/rendering_device_vulkan.cpp:1602) - Condition "vkerr" is true. Returning: ERR_CANT_CREATE
> ERROR: vmaMapMemory failed with error -5.
### Steps to reproduce
...
### Minimal reproduction project
https://github.com/everythingandnothingdev/godot-blender-2.81-splash | non_priority | godot errors in exported debug file and gpu driver crash in editor after some time | 0 |
25,534 | 2,683,835,531 | IssuesEvent | 2015-03-28 11:13:58 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | Maximus5 - 2009.09.23 - does not work with 4 cores ... | 2–5 stars bug imported Priority-Medium | _From [Aleksoid...@mail.ru](https://code.google.com/u/106652170189453997668/) on September 24, 2009 15:05:52_
Vista 7 x64, Far 2.0 build 1133 x86
The bug is as follows: on a 4-core CPU (Phenom X4), ConEmu always starts
on only two cores (visible in Task Manager); older versions behave
the same way.
And the worst part is that any program launched from Far/ConEmu also uses only two cores, until you assign all 4 via
TaskManager.
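The inheritance behaviour described in the report (programs launched from ConEmu keep its restricted core set) comes from the fact that a child process inherits the parent's CPU affinity mask. A minimal Linux analogue of that mechanism — `os.sched_setaffinity` is Linux-specific, so this is an illustration of the same inheritance, not a Windows repro:

```python
import os
import subprocess
import sys

# Affinity masks are inherited by child processes — which is why every
# program launched from the 2-core-restricted ConEmu/Far also ran on two
# cores until reassigned. (Linux sketch; Windows inherits the process
# affinity mask the same way.)
os.sched_setaffinity(0, {0})  # pin ourselves to CPU 0
child = subprocess.run(
    [sys.executable, "-c", "import os; print(len(os.sched_getaffinity(0)))"],
    capture_output=True, text=True,
)
inherited = child.stdout.strip()
print(inherited)  # the child sees the inherited 1-CPU mask
```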
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=93_ | 1.0 | priority | 1 |
234,988 | 7,733,539,466 | IssuesEvent | 2018-05-26 13:07:33 | zetkin/www.zetk.in | https://api.github.com/repos/zetkin/www.zetk.in | closed | "Your bookings" not chronological | bug priority | The "your bookings" section of the dashboard does not render actions chronologically (see screenshot).

| 1.0 | "Your bookings" not chronological - The "your bookings" section of the dashboard does not render actions chronologically (see screenshot).

| priority | your bookings not chronological the your bookings section of the dashboard does not render actions chronologically see screenshot | 1 |
15,832 | 9,632,325,835 | IssuesEvent | 2019-05-15 15:56:37 | kimxogus/react-native-version-check | https://api.github.com/repos/kimxogus/react-native-version-check | closed | WS-2019-0047 (Medium) detected in tar-2.2.1.tgz | security vulnerability | ## WS-2019-0047 - Medium Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-2.2.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.1.tgz">https://registry.npmjs.org/tar/-/tar-2.2.1.tgz</a></p>
<p>Path to dependency file: /react-native-version-check/package.json</p>
<p>Path to vulnerable library: /tmp/git/react-native-version-check/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- lerna-3.13.3.tgz (Root Library)
- bootstrap-3.13.3.tgz
- run-lifecycle-3.13.0.tgz
- npm-lifecycle-2.1.0.tgz
- node-gyp-3.8.0.tgz
- :x: **tar-2.2.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kimxogus/react-native-version-check/commit/cd00b13aefcbf963edda4b53246ce0976ac31094">cd00b13aefcbf963edda4b53246ce0976ac31094</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of node-tar prior to 4.4.2 are vulnerable to Arbitrary File Overwrite. Extracting tarballs containing a hardlink to a file that already exists in the system, and a file that matches the hardlink will overwrite the system's file with the contents of the extracted file.
<p>Publish Date: 2019-04-05
<p>URL: <a href=https://github.com/npm/node-tar/commit/b0c58433c22f5e7fe8b1c76373f27e3f81dcd4c8>WS-2019-0047</a></p>
</p>
</details>
<p></p>
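The overwrite mechanism described above — a hardlink entry whose target already exists, followed by a matching file entry — can be filtered out before extraction. A hypothetical defensive sketch using Python's `tarfile` (illustrative only; this is not node-tar's actual fix, which was to upgrade to 4.4.2):

```python
import io
import os
import tarfile

def safe_members(tar, dest):
    """Yield only members that stay inside `dest` and use no hard/symlinks.

    Hypothetical mitigation for the hardlink-overwrite class of bug
    described above.
    """
    dest = os.path.realpath(dest)
    for m in tar.getmembers():
        target = os.path.realpath(os.path.join(dest, m.name))
        if os.path.commonpath([dest, target]) != dest:
            continue  # path escapes the destination directory
        if m.islnk() or m.issym():
            continue  # drop link entries entirely in this sketch
        yield m

# Build a tiny archive in memory: one regular file, one hardlink entry.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"hello"
    info = tarfile.TarInfo("a.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
    link = tarfile.TarInfo("evil.txt")
    link.type = tarfile.LNKTYPE       # hardlink entry
    link.linkname = "/etc/passwd"
    tar.addfile(link)

buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    kept = [m.name for m in safe_members(tar, "out")]
print(kept)  # only the regular file survives the filter
```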
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/803">https://www.npmjs.com/advisories/803</a></p>
<p>Release Date: 2019-04-05</p>
<p>Fix Resolution: 4.4.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_priority | 0 |
149,064 | 19,563,439,156 | IssuesEvent | 2022-01-03 19:41:58 | Hugh-Cushing-Campaign/GoASQ | https://api.github.com/repos/Hugh-Cushing-Campaign/GoASQ | opened | CVE-2019-14806 (High) detected in Werkzeug-0.14.1-py2.py3-none-any.whl | security vulnerability | ## CVE-2019-14806 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Werkzeug-0.14.1-py2.py3-none-any.whl</b></p></summary>
<p>The comprehensive WSGI web application library.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /src/requirements.txt</p>
<p>Path to vulnerable library: /src/requirements.txt</p>
<p>
Dependency Hierarchy:
- Flask-Limiter-1.0.1.tar.gz (Root Library)
- Flask-0.12.2-py2.py3-none-any.whl
- :x: **Werkzeug-0.14.1-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Hugh-Cushing-Campaign/GoASQ/commit/9eb887565ee089a83cad0d64fc13922ef3051be0">9eb887565ee089a83cad0d64fc13922ef3051be0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Pallets Werkzeug before 0.15.3, when used with Docker, has insufficient debugger PIN randomness because Docker containers share the same machine id.
<p>Publish Date: 2019-08-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14806>CVE-2019-14806</a></p>
</p>
</details>
<p></p>
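The weakness above is that a PIN derived from the machine id is identical across Docker containers sharing that id. A toy model of such a derivation (illustrative only — this is not Werkzeug's real PIN algorithm) makes the collision obvious:

```python
import hashlib

def debug_pin(machine_id: bytes, username: str = "app") -> str:
    """Toy model of a machine-id-derived debugger PIN (not Werkzeug's
    actual algorithm): same inputs always yield the same PIN."""
    digest = hashlib.sha1(machine_id + username.encode()).hexdigest()
    return f"{int(digest[:9], 16) % 10**9:09d}"

# Two containers sharing one machine id derive the same PIN:
pin_a = debug_pin(b"docker-shared-machine-id")
pin_b = debug_pin(b"docker-shared-machine-id")
print(pin_a == pin_b)  # True — an attacker on one container knows both
```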
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://palletsprojects.com/blog/werkzeug-0-15-3-released/">https://palletsprojects.com/blog/werkzeug-0-15-3-released/</a></p>
<p>Release Date: 2019-09-11</p>
<p>Fix Resolution: 0.15.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"Werkzeug","packageVersion":"0.14.1","packageFilePaths":["/src/requirements.txt"],"isTransitiveDependency":true,"dependencyTree":"Flask-Limiter:1.0.1;Flask:0.12.2;Werkzeug:0.14.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.15.3","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-14806","vulnerabilityDetails":"Pallets Werkzeug before 0.15.3, when used with Docker, has insufficient debugger PIN randomness because Docker containers share the same machine id.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14806","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-14806 (High) detected in Werkzeug-0.14.1-py2.py3-none-any.whl - ## CVE-2019-14806 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Werkzeug-0.14.1-py2.py3-none-any.whl</b></p></summary>
<p>The comprehensive WSGI web application library.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /src/requirements.txt</p>
<p>Path to vulnerable library: /src/requirements.txt</p>
<p>
Dependency Hierarchy:
- Flask-Limiter-1.0.1.tar.gz (Root Library)
- Flask-0.12.2-py2.py3-none-any.whl
- :x: **Werkzeug-0.14.1-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Hugh-Cushing-Campaign/GoASQ/commit/9eb887565ee089a83cad0d64fc13922ef3051be0">9eb887565ee089a83cad0d64fc13922ef3051be0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Pallets Werkzeug before 0.15.3, when used with Docker, has insufficient debugger PIN randomness because Docker containers share the same machine id.
<p>Publish Date: 2019-08-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14806>CVE-2019-14806</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://palletsprojects.com/blog/werkzeug-0-15-3-released/">https://palletsprojects.com/blog/werkzeug-0-15-3-released/</a></p>
<p>Release Date: 2019-09-11</p>
<p>Fix Resolution: 0.15.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"Werkzeug","packageVersion":"0.14.1","packageFilePaths":["/src/requirements.txt"],"isTransitiveDependency":true,"dependencyTree":"Flask-Limiter:1.0.1;Flask:0.12.2;Werkzeug:0.14.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.15.3","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-14806","vulnerabilityDetails":"Pallets Werkzeug before 0.15.3, when used with Docker, has insufficient debugger PIN randomness because Docker containers share the same machine id.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14806","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_priority | cve high detected in werkzeug none any whl cve high severity vulnerability vulnerable library werkzeug none any whl the comprehensive wsgi web application library library home page a href path to dependency file src requirements txt path to vulnerable library src requirements txt dependency hierarchy flask limiter tar gz root library flask none any whl x werkzeug none any whl vulnerable library found in head commit a href found in base branch master vulnerability details pallets werkzeug before when used with docker has insufficient debugger pin randomness because docker containers share the same machine id publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true 
ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree flask limiter flask werkzeug isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails pallets werkzeug before when used with docker has insufficient debugger pin randomness because docker containers share the same machine id vulnerabilityurl | 0 |
74,501 | 14,265,042,628 | IssuesEvent | 2020-11-20 16:33:30 | aws-samples/aws-secure-environment-accelerator | https://api.github.com/repos/aws-samples/aws-secure-environment-accelerator | opened | [BUG][Config Validation] z141-PBMM Accelerator Issue - VPC removal protections dropped | 1-Codebase 2-Bug/Issue 3-Planned | ## Short Problem Description
- while changes to a VPC configuration appear to still be blocked, one of my colleagues removed (xx'd out) a VPC and the state machine allowed the change, executing and then failing when it could not remove the VPC
- ticket z002 (https://issues.amazon.com/issues/PBMM-83) was supposed to prevent this from happening along with other unsupported changes
- sometimes nacl changes are allowed sometimes blocked.
- we need to go back through and re-assess ALL blocks and make sure they are preventing all changes as originally intended
- we need to go back through and re-assess ALL "scoped overrides" and make sure they override the check as intended
- 'ov-global-options': false,
- 'ov-del-accts': false,
- 'ov-ren-accts': false,
- 'ov-acct-email': false,
- 'ov-acct-ou': false,
- 'ov-acct-vpc': false,
- 'ov-acct-subnet': false,
- 'ov-tgw': false,
- 'ov-mad': false,
- 'ov-ou-vpc': false,
- 'ov-ou-subnet': false,
- 'ov-share-to-ou': false,
- 'ov-share-to-accounts': false,
- 'ov-nacl': false
- we need to add any new blocks required since we did z002.
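The intended relationship between these flags and the comparison blocks can be sketched as follows — a hypothetical model of how a scoped override ought to gate a blocked config path (flag names mirror the list above; this is not the Accelerator's actual code):

```python
# Hypothetical sketch: scoped overrides should unblock only the config
# paths they cover, while overrideComparison is the global escape hatch.
OVERRIDE_SCOPE = {
    "nacls": "ov-nacl",
    "vpc": "ov-acct-vpc",
    "subnets": "ov-acct-subnet",
}

def blocked(changed_path: str, overrides: dict) -> bool:
    """Return True if a changed config path should still be blocked."""
    if overrides.get("overrideComparison"):
        return False  # global override disables all comparison blocks
    for segment, flag in OVERRIDE_SCOPE.items():
        if f"/{segment}/" in changed_path and overrides.get(flag):
            return False  # a scoped override matches this path
    return True

path = "organizational-units/Dev/vpc/0/subnets/3/nacls/1/rule"
print(blocked(path, {"ov-nacl": True}))  # False — the behavior the reporter expected
```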
### Example
FYI - For the ADDITIONALLY/OVERRIDES ISSUE - ProServe was doing a deployment today. They turned on az-d, which fails due to NACL updates (expected); they then incremented the NACL IDs by one, i.e. 100 becomes 101, and the comparison blocks blocked the change. They then set {"ov-nacl": true} and it still failed, but {"overrideComparison": true} succeeded. Just more evidence of the "SCOPED" overrides not functioning properly. Error was: ```{\"errorType\":\"Error\",\"errorMessage\":\"There were errors while comparing the configuration changes:\nConfigCheck: blocked changing config path \\"organizational-units/Dev/vpc/0/subnets/3/nacls/1/rule\\"\nConfigCheck: blocked changing config path \\"organizational-units/Dev/vpc/0/subnets/3/nacls/0/rule\\"\",\"trace\":[\"Error: There were errors while comparing the configuration changes:\",\"ConfigCheck: blocked changing config path \\"organizational-units/Dev/vpc/0/subnets/3/nacls/1/rule\\"\",\"ConfigCheck: blocked changing config path \\"organizational-units/Dev/vpc/0/subnets/3/nacls/0/rule\\"\",\" at Runtime.Xn [as handler] (/var/task/index.js:329:271518)\",\" at processTicksAndRejections (internal/process/task_queues.js:97:5)\"]}```
## Expected Behavior
- Accelerator fails with "ConfigCheck: blocked changing..." error (and should NOT continue)
- As z002 was a long time ago, can we also take a quick review to make sure other blocks remain in place? Are there any additional new blocks we need to add?
## Actual Behavior
- Accelerator does NOT block customer from making unsupported VPC removal change
| 1.0 | [BUG][Config Validation] z141-PBMM Accelerator Issue - VPC removal protections dropped - ## Short Problem Description
- while changes to a VPC configuration appear to still be blocked, one of my collegues removed (xx'd) out a VPC and the state machine allowed the change, executing and then failing when it could not remove the VPC
- ticket z002 (https://issues.amazon.com/issues/PBMM-83) was supposed to prevent this from happening along with other unsupported changes
- sometimes nacl changes are allowed sometimes blocked.
- we need to go back through and re-assess ALL blocks and make sure they are preventing all changes as originally intended
- we need to go back through and re-assess ALL "scoped overrides" and make sure they overide the check as intended
- 'ov-global-options': false,
- 'ov-del-accts': false,
- 'ov-ren-accts': false,
- 'ov-acct-email': false,
- 'ov-acct-ou': false,
- 'ov-acct-vpc': false,
- 'ov-acct-subnet': false,
- 'ov-tgw': false,
- 'ov-mad': false,
- 'ov-ou-vpc': false,
- 'ov-ou-subnet': false,
- 'ov-share-to-ou': false,
- 'ov-share-to-accounts': false,
- 'ov-nacl': false
- we need to add any new blocks required since we did z002.
### Example
FYI - For the ADDITIONALLY/OVERIDES ISSUE - Proserve was doing a deployment today. They turned on az-d which fails due to nacl updates (expected), they then incremented the nacl id's by one i.e. 100 becomes 101, and the comparison blocks, blocked the change. They then set {"ov-nacl": true} and it still failed. but {"overrideComparison": true} succeeded. Just more evidence on the "SCOPED" overides not functioning properly. Error was: ```{\"errorType\":\"Error\",\"errorMessage\":\"There were errors while comparing the configuration changes:\nConfigCheck: blocked changing config path \\"organizational-units/Dev/vpc/0/subnets/3/nacls/1/rule\\"\nConfigCheck: blocked changing config path \\"organizational-units/Dev/vpc/0/subnets/3/nacls/0/rule\\"\",\"trace\":[\"Error: There were errors while comparing the configuration changes:\",\"ConfigCheck: blocked changing config path \\"organizational-units/Dev/vpc/0/subnets/3/nacls/1/rule\\"\",\"ConfigCheck: blocked changing config path \\"organizational-units/Dev/vpc/0/subnets/3/nacls/0/rule\\"\",\" at Runtime.Xn [as handler] (/var/task/index.js:329:271518)\",\" at processTicksAndRejections (internal/process/task_queues.js:97:5)\"]}```
## Expected Behavior
- Accelerator fails with "ConfigCheck: blocked changing..." error (and should NOT continue)
- As z002 was a long time ago, can we also take a quick review to make sure other blocks remain in place. Are there any additional new blocks we need to add?
## Actual Behavior
- Accelerator does NOT block customer from making unsupported VPC removal change
| non_priority | pbmm accelerator issue vpc removal protections dropped short problem description while changes to a vpc configuration appear to still be blocked one of my collegues removed xx d out a vpc and the state machine allowed the change executing and then failing when it could not remove the vpc ticket was supposed to prevent this from happening along with other unsupported changes sometimes nacl changes are allowed sometimes blocked we need to go back through and re assess all blocks and make sure they are preventing all changes as originally intended we need to go back through and re assess all scoped overrides and make sure they overide the check as intended ov global options false ov del accts false ov ren accts false ov acct email false ov acct ou false ov acct vpc false ov acct subnet false ov tgw false ov mad false ov ou vpc false ov ou subnet false ov share to ou false ov share to accounts false ov nacl false we need to add any new blocks required since we did example fyi for the additionally overides issue proserve was doing a deployment today they turned on az d which fails due to nacl updates expected they then incremented the nacl id s by one i e becomes and the comparison blocks blocked the change they then set ov nacl true and it still failed but overridecomparison true succeeded just more evidence on the scoped overides not functioning properly error was errortype error errormessage there were errors while comparing the configuration changes nconfigcheck blocked changing config path organizational units dev vpc subnets nacls rule nconfigcheck blocked changing config path organizational units dev vpc subnets nacls rule trace var task index js at processticksandrejections internal process task queues js expected behavior accelerator fails with configcheck blocked changing error and should not continue as was a long time ago can we also take a quick review to make sure other blocks remain in place are their any additional new blocks we need to add actual behavior accelerator does not block customer from making unsupported vpc removal change | 0 |
30,988 | 5,892,424,859 | IssuesEvent | 2017-05-17 19:26:22 | ESMCI/cime | https://api.github.com/repos/ESMCI/cime | closed | Create clear and complete porting documentation | documentation ready | There is a new section on CIME internals that has been pushed to gh-pages that will help the porting documentation. The porting documentation needs to be significantly enhanced with examples to enable users to easily determine what needs to be done to port to their platformas. | 1.0 | Create clear and complete porting documentation - There is a new section on CIME internals that has been pushed to gh-pages that will help the porting documentation. The porting documentation needs to be significantly enhanced with examples to enable users to easily determine what needs to be done to port to their platformas. | non_priority | create clear and complete porting documentation there is a new section on cime internals that has been pushed to gh pages that will help the porting documentation the porting documentation needs to be significantly enhanced with examples to enable users to easily determine what needs to be done to port to their platformas | 0 |
580,847 | 17,268,444,277 | IssuesEvent | 2021-07-22 16:25:13 | mentortechabc/play | https://api.github.com/repos/mentortechabc/play | closed | Fix the tests: | priority:high | * Tests must be green
* Make sure the tests work from a different time zone
* After that, always run the tests before committing, and commit only changes with green tests | 1.0 | Fix the tests: - * Tests must be green
* Make sure the tests work from a different time zone
* After that, always run the tests before committing, and commit only changes with green tests | priority | fix the tests tests must be green make sure the tests work from a different time zone after that always run the tests before committing and commit only changes with green tests | 1 |
356,561 | 25,176,213,221 | IssuesEvent | 2022-11-11 09:29:19 | sheshenk/pe | https://api.github.com/repos/sheshenk/pe | opened | Invalid markdown syntax under design considerations | severity.VeryLow type.DocumentationBug | In the developer guide, there is some markdown that failed to render. You could cut the whitespaces after the bolding tag to resolve this!

<!--session: 1668157724467-d46740b4-59fe-4087-b248-d4a34ba06500-->
<!--Version: Web v3.4.4--> | 1.0 | Invalid markdown syntax under design considerations - In the developer guide, there is some markdown that failed to render. You could cut the whitespaces after the bolding tag to resolve this!

<!--session: 1668157724467-d46740b4-59fe-4087-b248-d4a34ba06500-->
<!--Version: Web v3.4.4--> | non_priority | invalid markdown syntax under design considerations in the developer guide there is some markdown that failed to render you could cut the whitespaces after the bolding tag to resolve this | 0 |
465,116 | 13,356,773,543 | IssuesEvent | 2020-08-31 08:43:14 | aau-network-security/haaukins | https://api.github.com/repos/aau-network-security/haaukins | closed | Integrate github actions into Haaukins project | enhancement low priority | **Is your feature request related to a problem? Please describe.**
Not exactly, will add ability to test and deploy all binaries with Github's CI.
- [x] Run test cases
- [x] Build binaries
- [ ] Release
- [ ] Deploy [could be] | 1.0 | Integrate github actions into Haaukins project - **Is your feature request related to a problem? Please describe.**
Not exactly, will add ability to test and deploy all binaries with Github's CI.
- [x] Run test cases
- [x] Build binaries
- [ ] Release
- [ ] Deploy [could be] | priority | integrate github actions into haaukins project is your feature request related to a problem please describe not exactly will add ability to test and deploy all binaries with github s ci run test cases build binaries release deploy | 1 |
798,175 | 28,238,724,619 | IssuesEvent | 2023-04-06 04:36:08 | Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2 | https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2 | opened | Titanforged Reproduction | new feature :star: priority low :grey_exclamation: balance :balance_scale: cultural :mortar_board: racial:fish: | <!--
**DO NOT REMOVE PRE-EXISTING LINES**
------------------------------------------------------------------------------------------------------------
-->
## Titanforged should not reproduce biologically but rather be created.
- Assign infertile trait to all races with `flag = titanforged_class`
- Create constructible common wc building for titanforged cultures located in Mechagon, Uldum, Mogu'shan Vault and Ulduar.
- Enabled decision to create more titanforged characters as courtiers on a cooldown.
- Building level decreases cooldown, increases chance of creating more courtiers (e.g. random number between x and y) and courtiers with better stats or traits.
| 1.0 | Titanforged Reproduction - <!--
**DO NOT REMOVE PRE-EXISTING LINES**
------------------------------------------------------------------------------------------------------------
-->
## Titanforged should not reproduce biologically but rather be created.
- Assign infertile trait to all races with `flag = titanforged_class`
- Create constructible common wc building for titanforged cultures located in Mechagon, Uldum, Mogu'shan Vault and Ulduar.
- Enabled decision to create more titanforged characters as courtiers on a cooldown.
- Building level decreases cooldown, increases chance of creating more courtiers (e.g. random number between x and y) and courtiers with better stats or traits.
| priority | titanforged reproduction do not remove pre existing lines titanforged should not reproduce biologically but rather be created assign infertile trait to all races with flag titanforged class create constructible common wc building for titanforged cultures located in mechagon uldum mogu shan vault and ulduar enabled decision to create more titanforged characters as courtiers on a cooldown building level decreases cooldown increases chance of creating more courtiers e g random number between x and y and courtiers with better stats or traits | 1 |
196,062 | 22,409,808,498 | IssuesEvent | 2022-06-18 14:51:24 | EmpoHQ/empo.im | https://api.github.com/repos/EmpoHQ/empo.im | opened | CVE-2022-0155 (Medium) detected in follow-redirects-1.14.1.tgz | security vulnerability | ## CVE-2022-0155 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>follow-redirects-1.14.1.tgz</b></p></summary>
<p>HTTP and HTTPS modules that follow redirects.</p>
<p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.1.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p>
<p>
Dependency Hierarchy:
- axios-5.13.1.tgz (Root Library)
- proxy-2.1.0.tgz
- http-proxy-middleware-1.3.1.tgz
- http-proxy-1.18.1.tgz
- :x: **follow-redirects-1.14.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/EmpoHQ/empo.im/commit/488d7a9b0ad7df016dd006a95c3b06f4883d8561">488d7a9b0ad7df016dd006a95c3b06f4883d8561</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor
<p>Publish Date: 2022-01-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0155>CVE-2022-0155</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/">https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/</a></p>
<p>Release Date: 2022-01-10</p>
<p>Fix Resolution (follow-redirects): 1.14.7</p>
<p>Direct dependency fix Resolution (@nuxtjs/axios): 5.13.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-0155 (Medium) detected in follow-redirects-1.14.1.tgz - ## CVE-2022-0155 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>follow-redirects-1.14.1.tgz</b></p></summary>
<p>HTTP and HTTPS modules that follow redirects.</p>
<p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.1.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p>
<p>
Dependency Hierarchy:
- axios-5.13.1.tgz (Root Library)
- proxy-2.1.0.tgz
- http-proxy-middleware-1.3.1.tgz
- http-proxy-1.18.1.tgz
- :x: **follow-redirects-1.14.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/EmpoHQ/empo.im/commit/488d7a9b0ad7df016dd006a95c3b06f4883d8561">488d7a9b0ad7df016dd006a95c3b06f4883d8561</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor
<p>Publish Date: 2022-01-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0155>CVE-2022-0155</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/">https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/</a></p>
<p>Release Date: 2022-01-10</p>
<p>Fix Resolution (follow-redirects): 1.14.7</p>
<p>Direct dependency fix Resolution (@nuxtjs/axios): 5.13.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in follow redirects tgz cve medium severity vulnerability vulnerable library follow redirects tgz http and https modules that follow redirects library home page a href path to dependency file package json path to vulnerable library node modules follow redirects package json dependency hierarchy axios tgz root library proxy tgz http proxy middleware tgz http proxy tgz x follow redirects tgz vulnerable library found in head commit a href found in base branch main vulnerability details follow redirects is vulnerable to exposure of private personal information to an unauthorized actor publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution follow redirects direct dependency fix resolution nuxtjs axios step up your open source security game with mend | 0 |
528,834 | 15,375,386,463 | IssuesEvent | 2021-03-02 14:52:47 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | closed | Unify Instrument calibration - across tubes and other det types | Diffraction High Priority Stale | This issue was originally [TRAC 9303](http://trac.mantidproject.org/mantid/ticket/9303)
Hard to estimate without a review of what we have so far and what we are missing
---
Keywords: SSC,2014,All,Calibration
| 1.0 | Unify Instrument calibration - across tubes and other det types - This issue was originally [TRAC 9303](http://trac.mantidproject.org/mantid/ticket/9303)
Hard to estimate without a review of what we have so far and what we are missing
---
Keywords: SSC,2014,All,Calibration
| priority | unify instrument calibration across tubes and other det types this issue was originally hard to estimate without a review of what we have so far and what we are missing keywords ssc all calibration | 1 |
44,896 | 5,894,106,271 | IssuesEvent | 2017-05-18 00:22:43 | phetsims/circuit-construction-kit-dc | https://api.github.com/repos/phetsims/circuit-construction-kit-dc | closed | Controls for internal resistance of battery | design:general | In Java, the battery can have an internal resistance of up to 9 ohms. We have not yet discussed this feature, though I imagine it will be desirable for 1.0. Self-assigning to mock up a few options.
<img width="219" alt="screen shot 2017-03-23 at 4 29 14 pm" src="https://cloud.githubusercontent.com/assets/8419308/24273051/e111e7a6-0fe5-11e7-88f9-85818a6abf91.png"> <img width="219" alt="screen shot 2017-03-23 at 4 28 49 pm" src="https://cloud.githubusercontent.com/assets/8419308/24273052/e115862c-0fe5-11e7-9b18-514a7d7130ec.png">
| 1.0 | Controls for internal resistance of battery - In Java, the battery can have an internal resistance of up to 9 ohms. We have not yet discussed this feature, though I imagine it will be desirable for 1.0. Self-assigning to mock up a few options.
<img width="219" alt="screen shot 2017-03-23 at 4 29 14 pm" src="https://cloud.githubusercontent.com/assets/8419308/24273051/e111e7a6-0fe5-11e7-88f9-85818a6abf91.png"> <img width="219" alt="screen shot 2017-03-23 at 4 28 49 pm" src="https://cloud.githubusercontent.com/assets/8419308/24273052/e115862c-0fe5-11e7-9b18-514a7d7130ec.png">
| non_priority | controls for internal resistance of battery in java the battery can have an internal resistance of up to ohms we have not yet discussed this feature though i imagine it will be desirable for self assigning to mock up a few options img width alt screen shot at pm src img width alt screen shot at pm src | 0 |
405,623 | 11,879,878,584 | IssuesEvent | 2020-03-27 09:38:39 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | closed | Issues with market info on markets in reporting | Needed for V2 launch Priority: Medium | Images from build.
The Event expiration should be shown here with the arrow to expand to show more market details

Now when clicking the arrow you see two arrow. This shouldn't happen - only need the one arrow here

Clicking the arrow shows the spacing on the event expiration is broken:

Design:
https://www.figma.com/file/fLWVwmanAwetVZbujQquEi/Market-Page?node-id=449%3A953
If it's easier, show the event expiration and total volume as default but just make sure everything lines up
 | 1.0 | Issues with market info on markets in reporting - Images from build.
The Event expiration should be shown here with the arrow to expand to show more market details

Now when clicking the arrow you see two arrow. This shouldn't happen - only need the one arrow here

Clicking the arrow shows the spacing on the event expiration is broken:

Design:
https://www.figma.com/file/fLWVwmanAwetVZbujQquEi/Market-Page?node-id=449%3A953
If it's easier, show the event expiration and total volume as default but just make sure everything lines up
 | priority | issues with market info on markets in reporting images from build the event expiration should be shown here with the arrow to expand to show more market details now when clicking the arrow you see two arrow this shouldn t happen only need the one arrow here clicking the arrow shows the spacing on the event expiration is broken design if it s easier show the event expiration and total volume as default but just make sure everything lines up | 1 |
283,076 | 21,316,032,534 | IssuesEvent | 2022-04-16 09:38:04 | FTang21/pe | https://api.github.com/repos/FTang21/pe | opened | Missing the construction of Order object | severity.Low type.DocumentationBug | 
Order object should be shown to be created. This current visualization implies that the Order object had previously already been created.
<!--session: 1650096032233-d65dfecb-8fa3-4774-9bdf-a138f1813d03-->
<!--Version: Web v3.4.2--> | 1.0 | Missing the construction of Order object - 
Order object should be shown to be created. This current visualization implies that the Order object had previously already been created.
<!--session: 1650096032233-d65dfecb-8fa3-4774-9bdf-a138f1813d03-->
<!--Version: Web v3.4.2--> | non_priority | missing the construction of order object order object should be shown to be created this current visualization implies that the order object had previously already been created | 0 |
59,093 | 8,328,697,744 | IssuesEvent | 2018-09-27 02:17:01 | roboll/helmfile | https://api.github.com/repos/roboll/helmfile | closed | Implement absent for deletion of charts | documentation | I would like to be able to define `absent` as part of a release in the helmfile. Upon `helmfile sync` all releases with `absent: true` will be deleted.
```
releases:
- name: super-magician
namespace: default
chart: stable/prometheus-cloudwatch-exporter
version: 0.2.0
values:
- prometheus-cloudwatch-exporter/values.yaml
absent: true
``` | 1.0 | Implement absent for deletion of charts - I would like to be able to define `absent` as part of a release in the helmfile. Upon `helmfile sync` all releases with `absent: true` will be deleted.
```
releases:
- name: super-magician
namespace: default
chart: stable/prometheus-cloudwatch-exporter
version: 0.2.0
values:
- prometheus-cloudwatch-exporter/values.yaml
absent: true
``` | non_priority | implement absent for deletion of charts i would like to be able to define absent as part of a release in the helmfile upon helmfile sync all releases with absent true will be deleted releases name super magician namespace default chart stable prometheus cloudwatch exporter version values prometheus cloudwatch exporter values yaml absent true | 0 |
44,227 | 9,553,528,698 | IssuesEvent | 2019-05-02 19:30:12 | redhat-developer/vscode-java | https://api.github.com/repos/redhat-developer/vscode-java | closed | The quick fix label for generating getter and setter has an unnecessary ellipsis | bug code action | So if you have 2 fields without accessors, the quickfix to generate them has an ellipsis, yet no wizard is shown as you'd expect from the ellipsis.
<img width="378" alt="Screen Shot 2019-04-29 at 7 05 30 PM" src="https://user-images.githubusercontent.com/148698/56932428-c9dd7400-6ab1-11e9-86c2-5c2e5714ea88.png">
| 1.0 | The quick fix label for generating getter and setter has an unnecessary ellipsis - So if you have 2 fields without accessors, the quickfix to generate them has an ellipsis, yet no wizard is shown as you'd expect from the ellipsis.
<img width="378" alt="Screen Shot 2019-04-29 at 7 05 30 PM" src="https://user-images.githubusercontent.com/148698/56932428-c9dd7400-6ab1-11e9-86c2-5c2e5714ea88.png">
| non_priority | the quick fix label for generating getter and setter has an unnecessary ellipsis so if you have fields without accessors the quickfix to generate them has an ellipsis yet no wizard is shown as you d expect from the ellipsis img width alt screen shot at pm src | 0 |
14,457 | 4,933,052,198 | IssuesEvent | 2016-11-28 15:24:39 | serde-rs/serde | https://api.github.com/repos/serde-rs/serde | opened | Support big arrays | codegen enhancement | Servo does this to support big arrays in one of their proc macros:
https://github.com/servo/heapsize/blob/44e86d6d48a09c9cbc30a122bc8725b188d017b2/derive/lib.rs#L36-L41
Let's do the same but only if the size of the array exceeds our biggest builtin impl.
Thanks @nox. | 1.0 | Support big arrays - Servo does this to support big arrays in one of their proc macros:
https://github.com/servo/heapsize/blob/44e86d6d48a09c9cbc30a122bc8725b188d017b2/derive/lib.rs#L36-L41
Let's do the same but only if the size of the array exceeds our biggest builtin impl.
Thanks @nox. | non_priority | support big arrays servo does this to support big arrays in one of their proc macros let s do the same but only if the size of the array exceeds our biggest builtin impl thanks nox | 0 |
6,473 | 23,220,380,941 | IssuesEvent | 2022-08-02 17:36:27 | angular/dev-infra | https://api.github.com/repos/angular/dev-infra | opened | Defer GitHub releases until after NPM packages are published | domain: release automation | Action item from [requiem/doc/postmortem475066](http://requiem/doc/postmortem475066).
In the `14.1.0-rc.0` release, the NPM publish operation failed several times. However the GitHub release was published before NPM failed, so each attempt published a GitHub release which isn't actually usable and confuses people watching the repository. These phantom releases also need to be manually cleaned up afterwards.
We should consider deferring the GitHub release until after the NPM publish is successful. If the NPM publish does fail, we might want to print an error message that the GitHub release was _not_ posted, but that's probably less of a concern. | 1.0 | Defer GitHub releases until after NPM packages are published - Action item from [requiem/doc/postmortem475066](http://requiem/doc/postmortem475066).
In the `14.1.0-rc.0` release, the NPM publish operation failed several times. However the GitHub release was published before NPM failed, so each attempt published a GitHub release which isn't actually usable and confuses people watching the repository. These phantom releases also need to be manually cleaned up afterwards.
We should consider deferring the GitHub release until after the NPM publish is successful. If the NPM publish does fail, we might want to print an error message that the GitHub release was _not_ posted, but that's probably less of a concern. | non_priority | defer github releases until after npm packages are published action item from in the rc release the npm publish operation failed several times however the github release was published before npm failed so each attempt published a github release which isn t actually usable and confuses people watching the repository these phantom releases also need to be manually cleaned up afterwards we should consider deferring the github release until after the npm publish is successful if the npm publish does fail we might want to print an error message that the github release was not posted but that s probably less of a concern | 0 |
270,494 | 8,461,141,308 | IssuesEvent | 2018-10-22 20:53:59 | prometheus/prombench | https://api.github.com/repos/prometheus/prombench | opened | expose master and pr prometheus instances so we can run custom queries | help wanted priority: high | in most cases so far it would be usefull to run custom quiries
for example in https://github.com/prometheus/prometheus/pull/4768 and https://github.com/prometheus/prometheus/pull/4763 | 1.0 | expose master and pr prometheus instances so we can run custom queries - in most cases so far it would be usefull to run custom quiries
for example in https://github.com/prometheus/prometheus/pull/4768 and https://github.com/prometheus/prometheus/pull/4763 | priority | expose master and pr prometheus instances so we can run custom queries in most cases so far it would be usefull to run custom quiries for example in and | 1 |
437,601 | 30,604,993,301 | IssuesEvent | 2023-07-22 22:19:50 | Kitware/trame | https://api.github.com/repos/Kitware/trame | closed | Used by ... | documentation | The goal of this issue is to capture companies using trame that are willing to have their logo featured on our website.
Please provide:
- Short sentence on what you are using trame for.
- Which logo to use from your company (links).
- Sentence granting Kitware to feature your logo on our website as a user of trame.
Thanks for posting here, your approval and your will to link your brand with trame means the world to us. | 1.0 | Used by ... - The goal of this issue is to capture companies using trame that are willing to have their logo featured on our website.
Please provide:
- Short sentence on what you are using trame for.
- Which logo to use from your company (links).
- Sentence granting Kitware to feature your logo on our website as a user of trame.
Thanks for posting here, your approval and your will to link your brand with trame means the world to us. | non_priority | used by the goal of this issue is to capture companies using trame that are willing to have their logo featured on our website please provide short sentence on what you are using trame for which logo to use from your company links sentence granting kitware to feature your logo on our website as a user of trame thanks for posting here your approval and your will to link your brand with trame means the world to us | 0 |
411,343 | 12,017,132,890 | IssuesEvent | 2020-04-10 17:40:59 | Sarrasor/INNO-S20-SP | https://api.github.com/repos/Sarrasor/INNO-S20-SP | closed | US-TM-3. Client part | display feature high priority | As a technical manager, I want to utilize different file formats in my content, such as video, audio, images or rich-text, so that the knowledge can be conveyed in the most suitable form.
Add textual display feature on the client-side in the app | 1.0 | US-TM-3. Client part - As a technical manager, I want to utilize different file formats in my content, such as video, audio, images or rich-text, so that the knowledge can be conveyed in the most suitable form.
Add textual display feature on the client-side in the app | priority | us tm client part as a technical manager i want to utilize different file formats in my content such as video audio images or rich text so that the knowledge can be conveyed in the most suitable form add textual display feature on the client side in the app | 1 |
37,898 | 12,505,853,120 | IssuesEvent | 2020-06-02 11:30:49 | fieldenms/tg | https://api.github.com/repos/fieldenms/tg | closed | Entity Master: user-driven filtering to be applied to masters by default | Entity master P1 Pull request Security | ### Description
TG explicitly supports the concept of user-driven filtering. And by providing the implementation and binding for contract `IFilter`, any EQL query can be automatically augmented to filter out the data that should not be accessible/visible to the current user.
Entity Centres apply user-driven filtering by default. Entity Masters do not, and that was not viewed as necessary because the only way to load an Entity Master with a specific entity instance is by using Entity Centres.
However, more recent developments such as the reference hierarchy will provide a way to navigate between entities of different type in a more exploratory fashion, within the use of Entity Centres. This, in turn, could potentially open up a possibility for users that are permitted to use the reference hierarchy tool, to open masters for instances that should not be filtered out for those users based on the user-driven conditions.
This issue should cover changes required for Entity Centres to apply user-driven filtering by default when retrieving an entity instance to be displayed.
#### Implementation details
Instead of using reader function `findById*(id, fetchModel)`, entity producers should use `getEntity(qem)`, where `qem` is passed the filtering parameter as `true`.
### Expected outcome
User-driven filtering applied by default in Entity Masters. | True | Entity Master: user-driven filtering to be applied to masters by default - ### Description
TG explicitly supports the concept of user-driven filtering. By providing an implementation and binding for contract `IFilter`, any EQL query can be automatically augmented to filter out data that should not be accessible/visible to the current user.
Entity Centres apply user-driven filtering by default. Entity Masters do not, and that was not viewed as necessary because the only way to load an Entity Master with a specific entity instance is by using Entity Centres.
However, more recent developments such as the reference hierarchy will provide a way to navigate between entities of different types in a more exploratory fashion, within the use of Entity Centres. This, in turn, could potentially make it possible for users that are permitted to use the reference hierarchy tool to open masters for instances that should have been filtered out for those users based on the user-driven conditions.
This issue should cover the changes required for Entity Masters to apply user-driven filtering by default when retrieving an entity instance to be displayed.
#### Implementation details
Instead of using reader function `findById*(id, fetchModel)`, entity producers should use `getEntity(qem)`, where `qem` is passed the filtering parameter as `true`.
### Expected outcome
User-driven filtering applied by default in Entity Masters. | non_priority | entity master user driven filtering to be applied to masters by default description tg explicitly supports the concept of user drivent filtering and by providing the implementation and binding for contract ifilter any eql query can be automatically augmented to filter out the data that should not be accesible visible to the current user entity centres apply user driven filtering by default entity masters do not and that was not viewed as necessary because the only way to get to load an entity master with a specific entity instance is by using entity centers however more recent developments such as the reference hierarchy will provide a way to navigate between entities of different type in a more exploratory fashion within the used of entity centres this in turn could potentially open up a possibility for users that are permitted to use the reference hierarchy tool to open masters for instances that should not be filtered out for those user based on the user driven conditions this issue should cover changes required for entity centres to apply user driven filtering by default when retrieving an entity instance to be displayed implementation details instead of using reader function findbyid id fetchmodel entity producers should use getentity qem where qem is passed the filtering parameter as true expected outcome user driven filtering applied by default in entity masters | 0 |
24,130 | 12,221,737,356 | IssuesEvent | 2020-05-02 09:36:51 | base33/Mimic | https://api.github.com/repos/base33/Mimic | opened | Implement Factory Activator and benchmark comparing to current activator | Performance-enhancement | Implement the factory activator to create new instances, as detailed here: https://github.com/Dotnet-Boxed/Framework/blob/master/Source/Boxed.Mapping/Factory.cs
Create benchmark tests and compare the current and new implementations. | True | Implement Factory Activator and benchmark comparing to current activator - Implement the factory activator to create new instances, as detailed here: https://github.com/Dotnet-Boxed/Framework/blob/master/Source/Boxed.Mapping/Factory.cs
Create benchmark tests and compare the current and new implementations. | non_priority | implement factory activator and benchmark comparing to current activator implement the factory activator to create new instances detailed here create benchmark tests and create comparisons between current and new implementation | 0 |
166,043 | 14,018,524,281 | IssuesEvent | 2020-10-29 16:56:44 | HenrikBengtsson/parallelly | https://api.github.com/repos/HenrikBengtsson/parallelly | closed | DOCS: example with availableCores() - 1 | documentation | Using `availableCores() - 1` or `0.4 * availableCores()` does not always work because we might end up with zero cores.
Instead we need to use `max(1, availableCores() - 1)`.
Add this to the help or one of the examples. | 1.0 | DOCS: example with availableCores() - 1 - Using `availableCores() - 1` or `0.4 * availableCores()` does not always work because we might end up with zero cores.
Instead we need to use `max(1, availableCores() - 1)`.
Add this to the help or one of the examples. | non_priority | docs example with availablecores using availablecores or availablecores does not always work because we might end up with zero cores instead we need to use max availablecores add this to the help or one of the examples | 0 |
265,263 | 28,262,388,503 | IssuesEvent | 2023-04-07 01:16:56 | hshivhare67/platform_device_renesas_kernel_v4.19.72 | https://api.github.com/repos/hshivhare67/platform_device_renesas_kernel_v4.19.72 | closed | CVE-2019-19054 (Medium) detected in linuxlinux-4.19.279 - autoclosed | Mend: dependency security vulnerability | ## CVE-2019-19054 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.279</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/hshivhare67/platform_device_renesas_kernel_v4.19.72/commit/3f00c931cf51848ec37b8817097db058fcc2f3f7">3f00c931cf51848ec37b8817097db058fcc2f3f7</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/pci/cx23885/cx23888-ir.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/pci/cx23885/cx23888-ir.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A memory leak in the cx23888_ir_probe() function in drivers/media/pci/cx23885/cx23888-ir.c in the Linux kernel through 5.3.11 allows attackers to cause a denial of service (memory consumption) by triggering kfifo_alloc() failures, aka CID-a7b2df76b42b.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19054>CVE-2019-19054</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-19054">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-19054</a></p>
<p>Release Date: 2019-11-18</p>
<p>Fix Resolution: v5.5-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-19054 (Medium) detected in linuxlinux-4.19.279 - autoclosed - ## CVE-2019-19054 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.279</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/hshivhare67/platform_device_renesas_kernel_v4.19.72/commit/3f00c931cf51848ec37b8817097db058fcc2f3f7">3f00c931cf51848ec37b8817097db058fcc2f3f7</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/pci/cx23885/cx23888-ir.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/pci/cx23885/cx23888-ir.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A memory leak in the cx23888_ir_probe() function in drivers/media/pci/cx23885/cx23888-ir.c in the Linux kernel through 5.3.11 allows attackers to cause a denial of service (memory consumption) by triggering kfifo_alloc() failures, aka CID-a7b2df76b42b.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19054>CVE-2019-19054</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-19054">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-19054</a></p>
<p>Release Date: 2019-11-18</p>
<p>Fix Resolution: v5.5-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files drivers media pci ir c drivers media pci ir c vulnerability details a memory leak in the ir probe function in drivers media pci ir c in the linux kernel through allows attackers to cause a denial of service memory consumption by triggering kfifo alloc failures aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
555,202 | 16,449,477,351 | IssuesEvent | 2021-05-21 02:04:02 | mobigen/IRIS-BigData-Platform | https://api.github.com/repos/mobigen/IRIS-BigData-Platform | closed | Report / Map - display minimum/maximum values in the legend | #IBP Priority: P2 Status: Backlog Type: Suggestion | ## Feature Request ##
{Please give a clear and concise description of the problem.}
Report / Map - display the minimum/maximum values in the legend
## Describe the solution you'd like ##
{ Please give a clear and concise description of the desired feature }
When the legend is displayed on a map,
it would be good if the actual minimum and maximum values of the heatmap were shown alongside it.
<Current legend>: the actual min/max values are not displayed
<img width="405" alt="스크린샷 2020-03-11 오전 11 41 38" src="https://user-images.githubusercontent.com/36151180/76488938-572b7c00-646a-11ea-972a-63c051357afe.png">
<Tableau legend example>: the actual min/max values are displayed

## Alternatives considered ##
{ Please give a clear and concise description of any alternative solutions or features you considered. }
It would be good if displaying the min/max values could be toggled as an option.
## Additional context ##
Please add any other comments or screenshots about this feature request here.
| 1.0 | Report / Map - display minimum/maximum values in the legend - ## Feature Request ##
{Please give a clear and concise description of the problem.}
Report / Map - display the minimum/maximum values in the legend
## Describe the solution you'd like ##
{ Please give a clear and concise description of the desired feature }
When the legend is displayed on a map,
it would be good if the actual minimum and maximum values of the heatmap were shown alongside it.
<Current legend>: the actual min/max values are not displayed
<img width="405" alt="스크린샷 2020-03-11 오전 11 41 38" src="https://user-images.githubusercontent.com/36151180/76488938-572b7c00-646a-11ea-972a-63c051357afe.png">
<Tableau legend example>: the actual min/max values are displayed

## Alternatives considered ##
{ Please give a clear and concise description of any alternative solutions or features you considered. }
It would be good if displaying the min/max values could be toggled as an option.
## Additional context ##
Please add any other comments or screenshots about this feature request here.
| priority | report map display minimum maximum values in the legend feature request please give a clear and concise description of the problem report map display the minimum maximum values in the legend describe the solution you d like please give a clear and concise description of the desired feature when the legend is displayed on a map it would be good if the actual minimum and maximum values of the heatmap were shown alongside it the actual min max values are not displayed img width alt screenshot am src the actual min max values are displayed alternatives considered please give a clear and concise description of any alternative solutions or features you considered it would be good if displaying the min max values could be toggled as an option additional context please add any other comments or screenshots about this feature request here | 1 |
186,062 | 15,045,334,378 | IssuesEvent | 2021-02-03 05:12:18 | jsakaluk/dySEM | https://api.github.com/repos/jsakaluk/dySEM | opened | Add News | documentation | Can just briefly summarize what's good and the few exceptions to "stable" in the stable release | 1.0 | Add News - Can just briefly summarize what's good and the few exceptions to "stable" in the stable release | non_priority | add news can just briefly summarize what s good and the few exceptions to stable in the stable release | 0 |
221,406 | 24,622,653,666 | IssuesEvent | 2022-10-16 05:09:52 | paley777/spiunib | https://api.github.com/repos/paley777/spiunib | closed | jquery-1.11.0.min.js: 4 vulnerabilities (highest severity is: 6.1) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.11.0.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js</a></p>
<p>Path to dependency file: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>Path to vulnerable library: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2020-11023](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | jquery-1.11.0.min.js | Direct | jquery - 3.5.0;jquery-rails - 4.4.0 | ❌ |
| [CVE-2020-11022](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | jquery-1.11.0.min.js | Direct | jQuery - 3.5.0 | ❌ |
| [CVE-2019-11358](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | jquery-1.11.0.min.js | Direct | jquery - 3.4.0 | ❌ |
| [CVE-2015-9251](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 3.7 | jquery-1.11.0.min.js | Direct | jQuery - 3.0.0 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-11023</summary>
### Vulnerable Library - <b>jquery-1.11.0.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js</a></p>
<p>Path to dependency file: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>Path to vulnerable library: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.0.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-11022</summary>
### Vulnerable Library - <b>jquery-1.11.0.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js</a></p>
<p>Path to dependency file: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>Path to vulnerable library: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.0.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
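The root cause here is jQuery's `htmlPrefilter`, which before 3.5.0 rewrote XHTML-style self-closing tags before inserting markup into the DOM. The sketch below reproduces that regex step in isolation (the regex is a close approximation of the pre-3.5.0 `rxhtmlTag`; the surrounding code is illustrative only) to show how an apparently sanitized string can be mutated into executable markup:

```javascript
// Approximation of the pre-3.5.0 jQuery htmlPrefilter step: expand
// XHTML-style self-closing tags <tag/> into <tag></tag> pairs.
const rxhtmlTag = /<(?!area|br|col|embed|hr|img|input|link|meta|param)(([a-z][^\/\0>\x20\t\r\n\f]*)[^>]*)\/>/gi;
const htmlPrefilter = (html) => html.replace(rxhtmlTag, "<$1></$2>");

// To a sanitizer, everything between <style> and </style> is inert CSS
// text, so this string passes many HTML filters unchanged:
const sanitized = '<style><style/><img src=x onerror=alert(1)>';

// The prefilter expands <style/> into <style></style>; the injected
// </style> now closes the outer <style>, and in a browser the <img>
// becomes a live element whose onerror handler runs.
console.log(htmlPrefilter(sanitized));
// → <style><style></style><img src=x onerror=alert(1)>
```

jQuery 3.5.0 removed this expansion entirely, which is why sanitize-then-`.html()` is only safe from this class of mutation on 3.5.0 and later.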
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-11358</summary>
### Vulnerable Library - <b>jquery-1.11.0.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js</a></p>
<p>Path to dependency file: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>Path to vulnerable library: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.0.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
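To see concretely why an enumerable `__proto__` key is dangerous during a deep extend, here is a minimal self-contained deep-merge sketch (an illustration of the general pattern, not jQuery's actual `extend` source) that exhibits the same `Object.prototype` pollution:

```javascript
// A naive recursive deep merge, similar in spirit to jQuery.extend(true, ...)
// before 3.4.0 (illustrative sketch only).
function naiveDeepMerge(target, source) {
  for (const key in source) {
    if (source[key] && typeof source[key] === "object") {
      if (!target[key] || typeof target[key] !== "object") {
        target[key] = {};
      }
      naiveDeepMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates an *own* enumerable "__proto__" property, so the merge
// walks into target["__proto__"] — which is Object.prototype — and writes to it.
const malicious = JSON.parse('{"__proto__": {"polluted": true}}');
naiveDeepMerge({}, malicious);

console.log({}.polluted); // → true: every plain object now inherits "polluted"
```

The 3.4.0 fix skips `__proto__` keys during deep extend; defensive merges more generally skip `__proto__`, `constructor`, and `prototype`, or merge into `Object.create(null)` targets.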
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: jquery - 3.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2015-9251</summary>
### Vulnerable Library - <b>jquery-1.11.0.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js</a></p>
<p>Path to dependency file: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>Path to vulnerable library: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.0.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>3.7</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - 3.0.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details> | True | jquery-1.11.0.min.js: 4 vulnerabilities (highest severity is: 6.1) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.11.0.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js</a></p>
<p>Path to dependency file: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>Path to vulnerable library: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2020-11023](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | jquery-1.11.0.min.js | Direct | jquery - 3.5.0;jquery-rails - 4.4.0 | ❌ |
| [CVE-2020-11022](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | jquery-1.11.0.min.js | Direct | jQuery - 3.5.0 | ❌ |
| [CVE-2019-11358](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | jquery-1.11.0.min.js | Direct | jquery - 3.4.0 | ❌ |
| [CVE-2015-9251](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 3.7 | jquery-1.11.0.min.js | Direct | jQuery - 3.0.0 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-11023</summary>
### Vulnerable Library - <b>jquery-1.11.0.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js</a></p>
<p>Path to dependency file: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>Path to vulnerable library: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.0.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-11022</summary>
### Vulnerable Library - <b>jquery-1.11.0.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js</a></p>
<p>Path to dependency file: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>Path to vulnerable library: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.0.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-11358</summary>
### Vulnerable Library - <b>jquery-1.11.0.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js</a></p>
<p>Path to dependency file: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>Path to vulnerable library: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.0.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: jquery - 3.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2015-9251</summary>
### Vulnerable Library - <b>jquery-1.11.0.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js</a></p>
<p>Path to dependency file: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>Path to vulnerable library: /vendor/ckeditor/ckeditor/samples/old/jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.0.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>3.7</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - 3.0.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details> | non_priority | jquery min js vulnerabilities highest severity is vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file vendor ckeditor ckeditor samples old jquery html path to vulnerable library vendor ckeditor ckeditor samples old jquery html vulnerabilities cve severity cvss dependency type fixed in remediation available medium jquery min js direct jquery jquery rails medium jquery min js direct jquery medium jquery min js direct jquery low jquery min js direct jquery details cve vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file vendor ckeditor ckeditor samples old jquery html path to vulnerable library vendor ckeditor ckeditor samples old jquery html dependency hierarchy x jquery min js vulnerable library found in base branch main vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery jquery rails step up your open source security game with mend cve vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file vendor ckeditor ckeditor samples old jquery html path to vulnerable library vendor ckeditor ckeditor samples old jquery html dependency hierarchy x jquery min js vulnerable library found in base branch main vulnerability 
details in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with mend cve vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file vendor ckeditor ckeditor samples old jquery html path to vulnerable library vendor ckeditor ckeditor samples old jquery html dependency hierarchy x jquery min js vulnerable library found in base branch main vulnerability details jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with mend cve vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file vendor ckeditor ckeditor samples old jquery html path to vulnerable library vendor ckeditor ckeditor 
samples old jquery html dependency hierarchy x jquery min js vulnerable library found in base branch main vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with mend | 0 |
30,213 | 14,477,464,454 | IssuesEvent | 2020-12-10 06:37:06 | seung-lab/cloud-volume | https://api.github.com/repos/seung-lab/cloud-volume | closed | pulling thousands of 2D brain slices | performance question | I’m trying pull thousands of two-dimensional brain image slices using CloudVolume, but it seems to be a prohibitively slow process. After eight hours, only 100 slices have been completed. Is there anyway of speeding them up?
_Note: The Bbox is [0, 0, 0],[33792, 25600, 13312]._
**Code:**
```
dir = "s3://private/directory"
vol = CloudVolume(dir, parallel=True, fill_missing=True)
for i in range(13312):
image = vol[:, :, i]
pkl.dump(image, open("pickles/image"+str(i)+".pkl", "wb"))
``` | True | pulling thousands of 2D brain slices - I’m trying pull thousands of two-dimensional brain image slices using CloudVolume, but it seems to be a prohibitively slow process. After eight hours, only 100 slices have been completed. Is there anyway of speeding them up?
_Note: The Bbox is [0, 0, 0],[33792, 25600, 13312]._
**Code:**
```
dir = "s3://private/directory"
vol = CloudVolume(dir, parallel=True, fill_missing=True)
for i in range(13312):
image = vol[:, :, i]
pkl.dump(image, open("pickles/image"+str(i)+".pkl", "wb"))
``` | non_priority | pulling thousands of brain slices i’m trying pull thousands of two dimensional brain image slices using cloudvolume but it seems to be a prohibitively slow process after eight hours only slices have been completed is there anyway of speeding them up note the bbox is code dir private directory vol cloudvolume dir parallel true fill missing true for i in range image vol pkl dump image open pickles image str i pkl wb | 0 |
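The cloud-volume record above fetches one 2-D slice per request (`vol[:, :, i]`, 13,312 times), so per-request overhead dominates. Grouping requests into z-blocks amortizes that overhead. Below is a minimal sketch of just the batching arithmetic; the CloudVolume calls are only indicated in comments, and the block size of 64 (plus the `save_slice` helper name) is an illustrative assumption — the right block size is a memory trade-off, since for a volume this wide one would likely also crop in x/y.

```python
def z_blocks(depth, block=64):
    """Yield (start, stop) z-ranges covering [0, depth) in chunks of `block`."""
    for z0 in range(0, depth, block):
        yield z0, min(z0 + block, depth)

# Hypothetical usage against the issue's volume (not executed here):
# vol = CloudVolume(dir, parallel=True, fill_missing=True)
# for z0, z1 in z_blocks(13312, 64):
#     stack = vol[:, :, z0:z1]            # one request returns 64 slices
#     for i in range(z0, z1):
#         save_slice(stack[:, :, i - z0], i)   # hypothetical per-slice writer
```

The same generator also makes it easy to restart a partially finished download by skipping blocks whose files already exist.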
770,564 | 27,045,256,058 | IssuesEvent | 2023-02-13 09:18:39 | Avaiga/taipy-studio-gui | https://api.github.com/repos/Avaiga/taipy-studio-gui | opened | Preview of Taipy GUI pages | ✨New feature 🟧 Priority: High | **What would that feature address**
Visual Studio Code comes with a preview facility when edition Markdown files.
It would be great to allow for previewing Taipy visual elements embedded into Markdown code.
***Description of the ideal solution***
While working on a Markdown file, pressing `Ctrl-Shift-V` bring a new view in VSCode, showing a preview of Taipy visual elements that were defined.
***Caveats***
The quality of the rendering may greatly vary depending on the technologies used.
- Do we want real, live elements?
- Do we want image placeholders?
**Acceptance Criteria**
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%
- [ ] Propagate any change on the demos and run all of them to ensure there is no breaking change
- [ ] Ensure any change is well documented
| 1.0 | Preview of Taipy GUI pages - **What would that feature address**
Visual Studio Code comes with a preview facility when edition Markdown files.
It would be great to allow for previewing Taipy visual elements embedded into Markdown code.
***Description of the ideal solution***
While working on a Markdown file, pressing `Ctrl-Shift-V` bring a new view in VSCode, showing a preview of Taipy visual elements that were defined.
***Caveats***
The quality of the rendering may greatly vary depending on the technologies used.
- Do we want real, live elements?
- Do we want image placeholders?
**Acceptance Criteria**
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%
- [ ] Propagate any change on the demos and run all of them to ensure there is no breaking change
- [ ] Ensure any change is well documented
| priority | preview of taipy gui pages what would that feature address visual studio code comes with a preview facility when edition markdown files it would be great to allow for previewing taipy visual elements embedded into markdown code description of the ideal solution while working on a markdown file pressing ctrl shift v bring a new view in vscode showing a preview of taipy visual elements that were defined caveats the quality of the rendering may greatly vary depending on the technologies used do we want real live elements do we want image placeholders acceptance criteria ensure new code is unit tested and check code coverage is at least propagate any change on the demos and run all of them to ensure there is no breaking change ensure any change is well documented | 1 |
511,933 | 14,885,186,574 | IssuesEvent | 2021-01-20 15:26:36 | comic/grand-challenge.org | https://api.github.com/repos/comic/grand-challenge.org | closed | Use Paginated DataTables for the Image List view in a Reader Study | area/reader-studies estimate/hours priority/p2 | **Problem**
In order to view the cases in a reader study, I need to sequentially scroll through pages one by one. When a reader study contains hundreds of images this is not doable anymore.
**Proposed solution**
Archives allow the option of searching through the list of cases/ images and also to jump ahead several pages when browsing through the images. It would be very helpful if this feature would also be available for reader studies.
**Additional context**
This improved case navigation would be most useful when there is also an option to select the cases and perform an action e.g. delete them all at once. There is an existing feature request for selecting and deleting multiple cases #1600 that could perhaps be combined with this one. | 1.0 | Use Paginated DataTables for the Image List view in a Reader Study - **Problem**
In order to view the cases in a reader study, I need to sequentially scroll through pages one by one. When a reader study contains hundreds of images this is not doable anymore.
**Proposed solution**
Archives allow the option of searching through the list of cases/ images and also to jump ahead several pages when browsing through the images. It would be very helpful if this feature would also be available for reader studies.
**Additional context**
This improved case navigation would be most useful when there is also an option to select the cases and perform an action e.g. delete them all at once. There is an existing feature request for selecting and deleting multiple cases #1600 that could perhaps be combined with this one. | priority | use paginated datatables for the image list view in a reader study problem in order to view the cases in a reader study i need to sequentially scroll through pages one by one when a reader study contains hundreds of images this is not doable anymore proposed solution archives allow the option of searching through the list of cases images and also to jump ahead several pages when browsing through the images it would be very helpful if this feature would also be available for reader studies additional context this improved case navigation would be most useful when there is also an option to select the cases and perform an action e g delete them all at once there is an existing feature request for selecting and deleting multiple cases that could perhaps be combined with this one | 1 |
4,072 | 10,552,476,500 | IssuesEvent | 2019-10-03 15:14:11 | dotnet/docs | https://api.github.com/repos/dotnet/docs | closed | Multuple IHostedService registration | :book: guide - .NET Microservices :books: Area - .NET Architecture Guide Source - Docs.ms | If i try to register two or more services, only one could work properly.
For example:
```
services.AddSingleton<IHostedService, ServiceA>();
services.AddSingleton<IHostedService, ServiceB>();
```
Implementations are simplest as possible:
```
public class ServiceA: IHostedService
{
public Task StartAsync(CancellationToken cancellationToken)
{
DoWork();
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken cancellationToken)
{
return Task.CompletedTask;
}
private void DoWork()
{
while (true)
{
Console.WriteLine("ServiceA");
Thread.Sleep(2000);
}
}
}
```
and
```
public class ServiceB: IHostedService
{
public Task StartAsync(CancellationToken cancellationToken)
{
DoWork();
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken cancellationToken)
{
return Task.CompletedTask;
}
private void DoWork()
{
while (true)
{
Console.WriteLine("ServiceB");
Thread.Sleep(1000);
}
}
}
```
In output getting messages only from ServiceA
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d49a03c0-a844-26eb-48a5-33a612dd3ead
* Version Independent ID: 0707865f-9db7-0d71-42a5-bc1a1e89680a
* Content: [Implement background tasks in microservices with IHostedService and the BackgroundService class](https://docs.microsoft.com/en-us/dotnet/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice#feedback)
* Content Source: [docs/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice.md](https://github.com/dotnet/docs/blob/master/docs/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice.md)
* Product: **dotnet**
* Technology: **dotnet-ebooks**
* GitHub Login: @nishanil
* Microsoft Alias: **nanil** | 1.0 | Multuple IHostedService registration - If i try to register two or more services, only one could work properly.
For example:
```
services.AddSingleton<IHostedService, ServiceA>();
services.AddSingleton<IHostedService, ServiceB>();
```
Implementations are simplest as possible:
```
public class ServiceA: IHostedService
{
public Task StartAsync(CancellationToken cancellationToken)
{
DoWork();
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken cancellationToken)
{
return Task.CompletedTask;
}
private void DoWork()
{
while (true)
{
Console.WriteLine("ServiceA");
Thread.Sleep(2000);
}
}
}
```
and
```
public class ServiceB: IHostedService
{
public Task StartAsync(CancellationToken cancellationToken)
{
DoWork();
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken cancellationToken)
{
return Task.CompletedTask;
}
private void DoWork()
{
while (true)
{
Console.WriteLine("ServiceB");
Thread.Sleep(1000);
}
}
}
```
In output getting messages only from ServiceA
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d49a03c0-a844-26eb-48a5-33a612dd3ead
* Version Independent ID: 0707865f-9db7-0d71-42a5-bc1a1e89680a
* Content: [Implement background tasks in microservices with IHostedService and the BackgroundService class](https://docs.microsoft.com/en-us/dotnet/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice#feedback)
* Content Source: [docs/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice.md](https://github.com/dotnet/docs/blob/master/docs/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice.md)
* Product: **dotnet**
* Technology: **dotnet-ebooks**
* GitHub Login: @nishanil
* Microsoft Alias: **nanil** | non_priority | multuple ihostedservice registration if i try to register two or more services only one could work properly for example services addsingleton services addsingleton implementations are simplest as possible public class servicea ihostedservice public task startasync cancellationtoken cancellationtoken dowork return task completedtask public task stopasync cancellationtoken cancellationtoken return task completedtask private void dowork while true console writeline servicea thread sleep and public class serviceb ihostedservice public task startasync cancellationtoken cancellationtoken dowork return task completedtask public task stopasync cancellationtoken cancellationtoken return task completedtask private void dowork while true console writeline serviceb thread sleep in output getting messages only from servicea document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product dotnet technology dotnet ebooks github login nishanil microsoft alias nanil | 0 |
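The root cause in the record above is that `DoWork()`'s infinite loop runs synchronously inside `StartAsync`, so the host never reaches the second service. That failure mode is not specific to .NET. Here is a hedged Python `asyncio` analogue (the service names and `beats` counter are invented for illustration): scheduling each work loop as a background task lets start-up return immediately, so every registered service gets to run.

```python
import asyncio

started = []

async def service_loop(name, beats):
    """Background work loop for one 'hosted service' (illustrative only)."""
    started.append(name)          # start-up for this service completed
    for _ in range(beats):        # stand-in for the issue's while(true) loop
        await asyncio.sleep(0)    # yield control instead of blocking

async def host():
    # Analogue of the fix: create_task schedules each loop concurrently,
    # instead of awaiting the first loop inline and starving the rest.
    tasks = [asyncio.create_task(service_loop(n, 3))
             for n in ("ServiceA", "ServiceB")]
    await asyncio.gather(*tasks)

asyncio.run(host())
assert set(started) == {"ServiceA", "ServiceB"}   # both services ran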
1,356 | 2,511,929,010 | IssuesEvent | 2015-01-14 12:41:14 | transientskp/tkp | https://api.github.com/repos/transientskp/tkp | opened | Quality Control: is the full FoV imaged? | priority normal | Is the full field of view imaged?
This has been raised as an interesting test, as if the full field of view (FOV) has not been imaged we may want to image the full dataset. The imaged FOV information can be estimated using the number of pixels and the size of the pixels. (nx and ny are the number of pixels in the x and y directions and are in the image header). The full FOV can be obtained from a look-up table here: http://www.astron.nl/radio-observatory/astronomers/lofar-imaging-capabilities-sensitivity/lofar-imaging-capabilities/lofa (Table 1) or more accurately calculated using the two equations at the top of that webpage. However this requires the array configuration - these are also inputs to the theoretical noise calculation. So keep this information as user entries for now (in same way as for the theoretical noise calculation).
If nx*ny*(cellsize/3600)*(cellsize/3600) < FOV in square degrees then image is flagged with something like "not full FOV"
For this test the image is not rejected, just flagged. So here give "Full field of view not imaged. Imaged FoV=XXdegrees, Observed FoV=XXdegrees" in the quality rejection column.
However, there are some imaging complexities which need to be taken into account here (namely the AWImager circular images). See full discussion under issue #3802
@jbroderick1: One thing to watch out for on the webpage linked above: the first equation is wrong. For the HBA, the constant should be 1.02 (uniformly illuminated circular aperture) instead of 1.3. For the LBA, the value is 1.1 according to George. Apparently the original constant was found to be too large for the case of a LOFAR station, and hence the values in the table had to be updated.
moved from LOFAR issue tracker
https://support.astron.nl/lofar_issuetracker/issues/3969 | 1.0 | Quality Control: is the full FoV imaged? - Is the full field of view imaged?
This has been raised as an interesting test, as if the full field of view (FOV) has not been imaged we may want to image the full dataset. The imaged FOV information can be estimated using the number of pixels and the size of the pixels. (nx and ny are the number of pixels in the x and y directions and are in the image header). The full FOV can be obtained from a look-up table here: http://www.astron.nl/radio-observatory/astronomers/lofar-imaging-capabilities-sensitivity/lofar-imaging-capabilities/lofa (Table 1) or more accurately calculated using the two equations at the top of that webpage. However this requires the array configuration - these are also inputs to the theoretical noise calculation. So keep this information as user entries for now (in same way as for the theoretical noise calculation).
If nx*ny*(cellsize/3600)*(cellsize/3600) < FOV in square degrees then image is flagged with something like "not full FOV"
For this test the image is not rejected, just flagged. So here give "Full field of view not imaged. Imaged FoV=XXdegrees, Observed FoV=XXdegrees" in the quality rejection column.
However, there are some imaging complexities which need to be taken into account here (namely the AWImager circular images). See full discussion under issue #3802
@jbroderick1: One thing to watch out for on the webpage linked above: the first equation is wrong. For the HBA, the constant should be 1.02 (uniformly illuminated circular aperture) instead of 1.3. For the LBA, the value is 1.1 according to George. Apparently the original constant was found to be too large for the case of a LOFAR station, and hence the values in the table had to be updated.
moved from LOFAR issue tracker
https://support.astron.nl/lofar_issuetracker/issues/3969 | priority | quality control is the full fov imaged is the full field of view imaged this has been raised as an interesting test as if the full field of view fov has not been imaged we may want to image the full dataset the imaged fov information can be estimated using the number of pixels and the size of the pixels nx and ny are the number of pixels in the x and y directions and are in the image header the full fov can be obtained from a look up table here table or more accurately calculated using the two equations at the top of that webpage however this requires the array configuration these are also inputs to the theoretical noise calculation so keep this information as user entries for now in same way as for the theoretical noise calculation if nx ny cellsize cellsize fov in square degrees then image is flagged with something like not full fov for this test the image is not rejected just flagged so here give full field of view not imaged imaged fov xxdegrees observed fov xxdegrees in the quality rejection column however there are some imaging complexities which need to be taken into account here namely the awimager circular images see full discussion under issue one thing to watch out for on the webpage linked above the first equation is wrong for the hba the constant should be uniformly illuminated circular aperture instead of for the lba the value is according to george apparently the original constant was found to be too large for the case of a lofar station and hence the values in the table had to be updated moved from lofar issue tracker | 1 |
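The flagging rule quoted in the record above (imaged area = nx·ny·(cellsize/3600)², compared against the expected field of view) is simple to mechanize. A sketch, assuming `cellsize` is in arcseconds per pixel and the expected FoV in square degrees is supplied by the user, as the issue proposes; the function names are mine, not from the TKP codebase:

```python
def imaged_fov_deg2(nx, ny, cellsize_arcsec):
    """Area actually imaged, in square degrees (cellsize in arcsec/pixel)."""
    return nx * ny * (cellsize_arcsec / 3600.0) ** 2

def fov_flag(nx, ny, cellsize_arcsec, expected_fov_deg2):
    """Return a rejection-column message if the image under-covers the FoV,
    or None when the full field of view was imaged (flag only, no rejection)."""
    imaged = imaged_fov_deg2(nx, ny, cellsize_arcsec)
    if imaged < expected_fov_deg2:
        return ("Full field of view not imaged. "
                f"Imaged FoV={imaged:.2f} degrees, "
                f"Observed FoV={expected_fov_deg2:.2f} degrees")
    return None
```

For example, a 3600×3600-pixel image at 1 arcsec/pixel covers exactly 1.0 square degree, so it would be flagged against an expected 2-square-degree FoV but pass against 0.5.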
34,085 | 6,289,061,757 | IssuesEvent | 2017-07-19 18:22:25 | Microsoft/pxt | https://api.github.com/repos/Microsoft/pxt | closed | The reference of block "setAll" lack of "See Also" | bug cti team documentation | **Repro step:**
- Navigate to [https://makecode.adafruit.com/ ](url)
- Click "_?_" and choose "_Reference_"
- Choose "_light_" in "_See also_"
- Then click "_setAll_"
**Expected Result:**

**Actual Result:**
[additional information: click "_rgb_" and it have the same issue ]

| 1.0 | The reference of block "setAll" lack of "See Also" - **Repro step:**
- Navigate to [https://makecode.adafruit.com/](https://makecode.adafruit.com/)
- Click "_?_" and choose "_Reference_"
- Choose "_light_" in "_See also_"
- Then click "_setAll_"
**Expected Result:**

**Actual Result:**
[additional information: click "_rgb_" and it have the same issue ]

| non_priority | the reference of block setall lack of see also repro step navigate to url click and choose reference choose light in see also then click setall expected result actual result | 0 |
38,689 | 12,566,554,759 | IssuesEvent | 2020-06-08 11:25:59 | rammatzkvosky/cwa-website | https://api.github.com/repos/rammatzkvosky/cwa-website | opened | WS-2019-0027 (Medium) detected in marked-0.3.19.js | security vulnerability | ## WS-2019-0027 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.3.19.js</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js">https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/cwa-website/node_modules/marked/www/demo.html</p>
<p>Path to vulnerable library: /cwa-website/node_modules/marked/www/../lib/marked.js</p>
<p>
Dependency Hierarchy:
- :x: **marked-0.3.19.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/cwa-website/commit/734e88edc370d7c84801cacf91820bf00ae2e2fa">734e88edc370d7c84801cacf91820bf00ae2e2fa</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions 0.3.17 and earlier of marked has Four regexes were vulnerable to catastrophic backtracking. This leaves markdown servers open to a potential REDOS attack.
<p>Publish Date: 2018-02-26
<p>URL: <a href=https://github.com/markedjs/marked/commit/b15e42b67cec9ded8505e9d68bb8741ad7a9590d>WS-2019-0027</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markedjs/marked/commit/b15e42b67cec9ded8505e9d68bb8741ad7a9590d">https://github.com/markedjs/marked/commit/b15e42b67cec9ded8505e9d68bb8741ad7a9590d</a></p>
<p>Release Date: 2019-03-17</p>
<p>Fix Resolution: 0.3.18</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"marked","packageVersion":"0.3.19","isTransitiveDependency":false,"dependencyTree":"marked:0.3.19","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.3.18"}],"vulnerabilityIdentifier":"WS-2019-0027","vulnerabilityDetails":"Versions 0.3.17 and earlier of marked has Four regexes were vulnerable to catastrophic backtracking. This leaves markdown servers open to a potential REDOS attack.","vulnerabilityUrl":"https://github.com/markedjs/marked/commit/b15e42b67cec9ded8505e9d68bb8741ad7a9590d","cvss2Severity":"medium","cvss2Score":"5.0","extraData":{}}</REMEDIATE> --> | True | WS-2019-0027 (Medium) detected in marked-0.3.19.js - ## WS-2019-0027 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.3.19.js</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js">https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/cwa-website/node_modules/marked/www/demo.html</p>
<p>Path to vulnerable library: /cwa-website/node_modules/marked/www/../lib/marked.js</p>
<p>
Dependency Hierarchy:
- :x: **marked-0.3.19.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/cwa-website/commit/734e88edc370d7c84801cacf91820bf00ae2e2fa">734e88edc370d7c84801cacf91820bf00ae2e2fa</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions 0.3.17 and earlier of marked has Four regexes were vulnerable to catastrophic backtracking. This leaves markdown servers open to a potential REDOS attack.
<p>Publish Date: 2018-02-26
<p>URL: <a href=https://github.com/markedjs/marked/commit/b15e42b67cec9ded8505e9d68bb8741ad7a9590d>WS-2019-0027</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markedjs/marked/commit/b15e42b67cec9ded8505e9d68bb8741ad7a9590d">https://github.com/markedjs/marked/commit/b15e42b67cec9ded8505e9d68bb8741ad7a9590d</a></p>
<p>Release Date: 2019-03-17</p>
<p>Fix Resolution: 0.3.18</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"marked","packageVersion":"0.3.19","isTransitiveDependency":false,"dependencyTree":"marked:0.3.19","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.3.18"}],"vulnerabilityIdentifier":"WS-2019-0027","vulnerabilityDetails":"Versions 0.3.17 and earlier of marked has Four regexes were vulnerable to catastrophic backtracking. This leaves markdown servers open to a potential REDOS attack.","vulnerabilityUrl":"https://github.com/markedjs/marked/commit/b15e42b67cec9ded8505e9d68bb8741ad7a9590d","cvss2Severity":"medium","cvss2Score":"5.0","extraData":{}}</REMEDIATE> --> | non_priority | ws medium detected in marked js ws medium severity vulnerability vulnerable library marked js a markdown parser built for speed library home page a href path to dependency file tmp ws scm cwa website node modules marked www demo html path to vulnerable library cwa website node modules marked www lib marked js dependency hierarchy x marked js vulnerable library found in head commit a href vulnerability details versions and earlier of marked has four regexes were vulnerable to catastrophic backtracking this leaves markdown servers open to a potential redos attack publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails versions and earlier of marked has four regexes were vulnerable to catastrophic backtracking this leaves markdown servers open to a potential redos attack vulnerabilityurl | 0 |
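WS-2019-0027 in the record above blames "catastrophic backtracking" in four of marked's regexes. The failure mode is easy to reproduce with a deliberately bad pattern — this is a generic illustration, not marked's actual regex: nested quantifiers such as `(a+)+` force a backtracking engine to try exponentially many partitions of the input before it can report a non-match, which is what makes a ReDoS payload effective.

```python
import re

# Classic ReDoS shape: nested quantifier followed by input that can never match.
EVIL = re.compile(r"^(a+)+$")

subject = "a" * 18 + "b"   # kept short so the demo finishes quickly;
                           # each extra 'a' roughly doubles the work
assert EVIL.match(subject) is None

# A linear-time rewrite accepts the same strings without the blow-up:
SAFE = re.compile(r"^a+$")
assert SAFE.match(subject) is None
assert SAFE.match("aaaa")
```

Upgrading past the fixed release (0.3.18 per the report) replaces the vulnerable patterns; the defensive lesson is the same in any engine: avoid overlapping nested quantifiers on attacker-controlled text.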
321,257 | 23,848,313,033 | IssuesEvent | 2022-09-06 15:37:17 | timescale/docs | https://api.github.com/repos/timescale/docs | closed | [Feedback] Page: /timescaledb/latest/how-to-guides/upgrades/upgrade-docker - documentation is out of date | bug documentation community | `docker inspect timescaledb --format='{{range .Mounts }}{{.Type}}{{end}}' `
doesn't work, as "Mounts": [] is an empty array when running
`docker inspect timescaledb`
I have run the command on both:
Docker version 20.10.12, build e91ed57 on Debian linux
Docker version 20.10.17, build 100c701 on windows 11
Having this existing version: timescale/timescaledb-ha:pg14.2-ts2.5.2-latest
So I believe documentation is out of date. | 1.0 | [Feedback] Page: /timescaledb/latest/how-to-guides/upgrades/upgrade-docker - documentation is out of date - `docker inspect timescaledb --format='{{range .Mounts }}{{.Type}}{{end}}' `
doesn't work, as "Mounts": [] is an empty array when running
`docker inspect timescaledb`
I have run the command on both:
Docker version 20.10.12, build e91ed57 on Debian linux
Docker version 20.10.17, build 100c701 on windows 11
Having this existing version: timescale/timescaledb-ha:pg14.2-ts2.5.2-latest
So I believe documentation is out of date. | non_priority | page timescaledb latest how to guides upgrades upgrade docker documentation is out of date docker inspect timescaledb format range mounts type end doesnt work as mounts is empty array when running docker inspect timescaledb i have run the command on both docker version build on debian linux docker version build on windows having this existing version timescale timescaledb ha latest so i believe documentation is out of date | 0 |
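The empty output reported in the row above follows directly from `Mounts` being an empty JSON array in the `docker inspect` output: the Go template's `range` iterates zero times, so nothing is printed. A minimal Python sketch of the same check (the sample JSON below is illustrative, not taken from the actual container):

```python
import json

def mount_types(inspect_json: str) -> list:
    """Collect the Type of every mount from `docker inspect` output.

    `docker inspect` prints a JSON array with one object per container.
    When "Mounts" is [] -- as in the report -- the result is empty, which
    is why the --format template prints nothing at all.
    """
    types = []
    for container in json.loads(inspect_json):
        for mount in container.get("Mounts", []):
            types.append(mount.get("Type", ""))
    return types

# Illustrative output for a container without mounts:
print(mount_types('[{"Name": "/timescaledb", "Mounts": []}]'))  # -> []
```

A container created with `-v` would instead yield entries such as `["volume"]`, so checking for an empty list is a quick way to confirm the situation the reporter describes.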
629,475 | 20,034,264,180 | IssuesEvent | 2022-02-02 10:10:58 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | coap: blockwise: context current does not match total size after transfer is completed | bug priority: low area: Networking area: Networking Clients has-pr | **Describe the bug**
After downloading a file using CoAP block transfer, I noticed that after receiving the last block `ctx->current` does not match `ctx->total_size`.
Example:
File size: 300
Block size: 256
1. Request block 0.
block current = 0.
2. Receive block 0, 256 bytes
block-> current = 256
3. Request block 1.
block current = 256.
4. Receive block 1, 44 bytes
block-> current = 256
5. Transfer completed
**To Reproduce**
Use a CoAP client to receive a file and print out the value of `current` after each block is received.
**Expected behavior**
Once the whole file is received, the context shall be updated and `current` shall match `total_size`.
**Impact**
Medium
**Logs and console output**
Not available
**Environment (please complete the following information):**
zephyr 1.14
The issue is in the latest version, but it needs to be confirmed
| 1.0 | coap: blockwise: context current does not match total size after transfer is completed - **Describe the bug**
After downloading a file using CoAP block transfer, I noticed that after receiving the last block `ctx->current` does not match `ctx->total_size`.
Example:
File size: 300
Block size: 256
1. Request block 0.
block current = 0.
2. Receive block 0, 256 bytes
block-> current = 256
3. Request block 1.
block current = 256.
4. Receive block 1, 44 bytes
block-> current = 256
5. Transfer completed
**To Reproduce**
Use a CoAP client to receive a file and print out the value of `current` after each block is received.
**Expected behavior**
Once the whole file is received, the context shall be updated and `current` shall match `total_size`.
**Impact**
Medium
**Logs and console output**
Not available
**Environment (please complete the following information):**
zephyr 1.14
The issue is in the latest version, but it needs to be confirmed
| priority | coap blockwise context current does not match total size after transfer is completed describe the bug after downloading a filed using coap block transfer i noticed that after receiving the last block ctx current does not match ctx total size example file size block size request block block current receive block bytes block current request block block current receive block receive block current transfer completed to reproduce use coap client to receive a file and printout the values of current after the block is received expected behavior once the whole file is received the context shall be updated and current shall match total size impact medium logs and console output not available environment please complete the following information zephyr the issue is in the latest version but it need to be confirmed | 1 |
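The numbers in the report above (a 300-byte file fetched in 256-byte blocks) can be checked with a short, language-agnostic model. This is a Python sketch of the bookkeeping the issue expects — not the Zephyr C implementation — showing that if `current` is advanced by the length of every received block, including the final short one, it ends up equal to the total size:

```python
def blockwise_download(total_size: int, block_size: int) -> int:
    """Simulate a CoAP blockwise download and return the final offset.

    `current` advances by the payload length of each received block, so
    after the last (possibly partial) block it equals `total_size`.
    """
    current = 0
    while current < total_size:
        payload = min(block_size, total_size - current)
        current += payload  # also advance for the final 44-byte block
    return current

print(blockwise_download(300, 256))  # -> 300, i.e. 256 + 44
```

In the reported behaviour the last `current += payload` step is effectively skipped, leaving `current` at 256 instead of 300.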
56,886 | 7,010,258,606 | IssuesEvent | 2017-12-19 22:25:24 | EvictionLab/eviction-maps | https://api.github.com/repos/EvictionLab/eviction-maps | opened | Update presentation content and styling | design needed enhancement | Content to come towards the end of December, style mockups to come in beginning of January. | 1.0 | Update presentation content and styling - Content to come towards the end of December, style mockups to come in beginning of January. | non_priority | update presentation content and styling content to come towards the end of december style mockups to come in beginning of january | 0 |
60,567 | 17,023,460,223 | IssuesEvent | 2021-07-03 02:08:43 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | show barrier=citywall in zoom 14 + 15 | Component: mapnik Priority: major Resolution: worksforme Type: defect | **[Submitted to the original trac issue database at 8.03pm, Wednesday, 12th August 2009]**
as citywalls are usually very big and prominent (good for orientation) and there is usually just 1 (well sometimes more, in Roma there are at least 2), I suggest rendering them more prominently in mapnik.
Currently the name is rendered from zoom 15 onwards, but the wall itself only from z16 (IMHO a bug, as the name appears without the feature) (please see here for an example)
http://www.openstreetmap.org/?mlat=41.87481&mlon=12.47738&zoom=17&layers=B000FTF | 1.0 | show barrier=citywall in zoom 14 + 15 - **[Submitted to the original trac issue database at 8.03pm, Wednesday, 12th August 2009]**
as citywalls are usually very big and prominent (good for orientation) and there is usually just 1 (well sometimes more, in Roma there are at least 2), I suggest rendering them more prominently in mapnik.
Currently the name is rendered from zoom 15 onwards, but the wall itself only from z16 (IMHO a bug, as the name appears without the feature) (please see here for an example)
http://www.openstreetmap.org/?mlat=41.87481&mlon=12.47738&zoom=17&layers=B000FTF | non_priority | show barrier citywall in zoom as citywalls are usually very big and prominent good for orientation and there is usually just well sometimes more in roma there are at least i suggest to render them more prominently in mapnik currently the name is rendered until zoom and the wall itself just until imho a bug that there is the name without the feature please see here for example | 0 |
806,047 | 29,798,613,488 | IssuesEvent | 2023-06-16 06:05:49 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | Speed up Wallet Panel Load Time | polish priority/P3 perf feature/web3/wallet OS/Desktop front-end-change | <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
Currently the `Wallet Panel` can take a long time to `render`; the panel should be snappy and load instantly. We should investigate possible performance issues and ways to speed up the panel's load time, perhaps by lazy-loading data so the panel can load instantly.
https://user-images.githubusercontent.com/40611140/167055340-a9b042df-1478-4252-a5eb-e3aa0b5263f1.mov
| 1.0 | Speed up Wallet Panel Load Time - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
Currently the `Wallet Panel` can take a long time to `render`; the panel should be snappy and load instantly. We should investigate possible performance issues and ways to speed up the panel's load time, perhaps by lazy-loading data so the panel can load instantly.
https://user-images.githubusercontent.com/40611140/167055340-a9b042df-1478-4252-a5eb-e3aa0b5263f1.mov
| priority | speed up wallet panel load time have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description currently the wallet panel can takes a long time to render the panel should be snappy and load instantly we should investigate possible performance issues and ways to speed up the panel s load time by maybe lazy loading data so the panel can load instantly | 1 |
280,014 | 8,677,001,717 | IssuesEvent | 2018-11-30 15:38:15 | DemocraciaEnRed/leyesabiertas-web | https://api.github.com/repos/DemocraciaEnRed/leyesabiertas-web | closed | Prevent styles from being modified when a logged-out user likes a comment. | priority: high | - [x] General comments
- [x] Contextual comments | 1.0 | Prevent styles from being modified when a logged-out user likes a comment. - - [x] General comments
- [x] Comentarios contextuales | priority | evitar que se modifiquen estilos cuando un usuario no logueado da like a un comentario comentarios generales comentarios contextuales | 1 |
232,986 | 18,941,056,557 | IssuesEvent | 2021-11-18 02:56:51 | reach4help/covidhelphub-frontend | https://api.github.com/repos/reach4help/covidhelphub-frontend | opened | Test project for accessibility and usability | refactor test | **What**: make sure all existing parts of the project are accessible and usable, and for any problems encountered, fix them or put up an issue
**Why**: we want to start establishing accessibility and usability standards for the project
**Implementation details**:
- Test that users can navigate by tabbing
- Test screen reader accessibility
- Follow accessibility/usability Wiki (#26)
- Fix any problems or put up new issues | 1.0 | Test project for accessibility and usability - **What**: make sure all existing parts of the project are accessible and usable, and for any problems encountered, fix them or put up an issue
**Why**: we want to start establishing accessibility and usability standards for the project
**Implementation details**:
- Test that users can navigate by tabbing
- Test screen reader accessibility
- Follow accessibility/usability Wiki (#26)
- Fix any problems or put up new issues | non_priority | test project for accessibility and usability what make sure all existing parts of the project are accessible and usable and for any problems encountered fix them or put up an issue why we want to start establishing accessibility and usability standards for the project implementation details test that users can navigate by tabbing test screen reader accessibility follow accessibility usability wiki fix any problems or put up new issues | 0 |
6,536 | 14,813,552,725 | IssuesEvent | 2021-01-14 02:21:52 | pmelchior/scarlet | https://api.github.com/repos/pmelchior/scarlet | closed | Variable box sizes and/or centers for scarlet | architecture feature | Currently box sizes and positions are determined at initialization. We know that this can be optimized by adjusting box sizes and centers on the fly with the current best estimate of the source morphologies. We used to have this capability and needed to switch it off with the new Adaprox optimizer.
Since then we have gain most of the capability back:
* The groundwork is laid by #158. The reason for getting rid of the box adjustments is that the optimizer stores multiple versions of the gradients internally (called `m`, `v`, and `vhat`). The warm-start PR allow us to control them as they are now attached to `Parameter`.
* Our improved code for `Box` from #165 allows us to easily extend all the image-like arrays.
* #168 fixed a massive memory issue. In the process, `MonotonicityConstraint` now makes a cache lookup based on the shape of the `morph` argument and computes the proximal operation for any shape it hasn't cached yet.
Missing still are
1. method to reshape the `_data`, `m`, `v`, `vhat` members of the `morph` parameter in `FactorizedComponent`
2. a decision criterion when to do so
3. control logic to run the decision and reshaping methods
Items 1 and 3 are straightforward. As we keep finding, item 2 is not. We used to compute the average flux along the edges of the box and compare it to the noise level in the image. This gets complicated with multiple observations. We don't store the effective noise level `bg_cutoff` from the initialization, and we normally only compute it for one `Observation`, which may not be the one that would benefit the most from growing the box.
Suggestions welcome!
| 1.0 | Variable box sizes and/or centers for scarlet - Currently box sizes and positions are determined at initialization. We know that this can be optimized by adjusting box sizes and centers on the fly with the current best estimate of the source morphologies. We used to have this capability and needed to switch it off with the new Adaprox optimizer.
Since then we have gain most of the capability back:
* The groundwork is laid by #158. The reason for getting rid of the box adjustments is that the optimizer stores multiple versions of the gradients internally (called `m`, `v`, and `vhat`). The warm-start PR allow us to control them as they are now attached to `Parameter`.
* Our improved code for `Box` from #165 allows us to easily extend all the image-like arrays.
* #168 fixed a massive memory issue. In the process, `MonotonicityConstraint` now makes a cache lookup based on the shape of the `morph` argument and computes the proximal operation for any shape it hasn't cached yet.
Missing still are
1. method to reshape the `_data`, `m`, `v`, `vhat` members of the `morph` parameter in `FactorizedComponent`
2. a decision criterion when to do so
3. control logic to run the decision and reshaping methods
Items 1 and 3 are straightforward. As we keep finding, item 2 is not. We used to compute the average flux along the edges of the box and compare it to the noise level in the image. This gets complicated with multiple observations. We don't store the effective noise level `bg_cutoff` from the initialization, and we normally only compute it for one `Observation`, which may not be the one that would benefit the most from growing the box.
Suggestions welcome!
| non_priority | variable box sizes and or centers for scarlet currently box sizes and positions are determined at initialization we know that this can be optimized by adjusting box sizes and centers on the fly with the current best estimate of the source morphologies we used to have this capability and needed to switch it off with the new adaprox optimizer since then we have gain most of the capability back the groundwork is laid by the reason for getting rid of the box adjustments is that the optimizer stores multiple versions of the gradients internally called m v and vhat the warm start pr allow us to control them as they are now attached to parameter our improved code for box from allows us to easily extend all the image like arrays fixed a massive memory issue in the process monotonicityconstraint now makes a cache lookup based on the shape of the morph argument and computes the proximal operation for any shape it hasn t cached yet missing still are method to reshape the data m v vhat members of the morph parameter in factorizedcomponent a decision criterion when to do so control logic to run the decision and reshaping methods items and are straightforward as we keep finding is not we used to compute the average flux along the edges of the box and compare it to the noise level in the image this get complicated with multiple observations we don t store the effective noise level bg cutoff from the initialization and we normally only compute it for one observation which may not be the one that would benefit the most from growing the box suggestions welcome | 0 |
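For item 2 in the scarlet issue above — the decision criterion — the old edge-flux test the issue mentions can be sketched in plain Python. This is a hypothetical illustration using nested lists rather than scarlet's NumPy arrays; `bg_cutoff` stands for the per-observation noise level that the issue notes is not currently stored:

```python
def should_grow_box(morph, bg_cutoff):
    """Return True when the mean flux along the box edges exceeds the
    background cutoff, i.e. the source likely extends past the box.

    `morph` is a 2-D morphology image given as a list of rows.
    """
    edges = list(morph[0]) + list(morph[-1])       # top and bottom rows
    edges += [row[0] for row in morph[1:-1]]       # left column
    edges += [row[-1] for row in morph[1:-1]]      # right column
    return sum(edges) / len(edges) > bg_cutoff

# A compact source well inside its box should not trigger a resize.
compact = [[0.0] * 5 for _ in range(5)]
compact[2][2] = 1.0
print(should_grow_box(compact, bg_cutoff=0.1))  # -> False
```

With multiple observations, as the issue points out, one would need a cutoff per observation and a rule for combining the per-observation decisions, which is where the real difficulty lies.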
788,632 | 27,759,313,195 | IssuesEvent | 2023-03-16 06:46:59 | devvsakib/power-the-web | https://api.github.com/repos/devvsakib/power-the-web | opened | [FEAT]: set width for blog page | enhancement help wanted [priority: high] | **Is your feature request related to a problem? Please describe.**
The blog page has 100vw width. Please set its width to match the other pages.

**Describe the solution you'd like**
Check a page that already has a fixed width, and just copy and paste its class names to the blog page.
<!-- Do not delete this -->
**IMPORTANT**
If it's your first Contribution to this project, please add your details to the Contributors file
File Path: `src/json/Contributors.json` and you have to **MUST** read the contributing guideline before you start. [Here](https://github.com/devvsakib/power-the-web/blob/main/CONTRIBUTING.md)
> If you do not add your details, your PR will be Close
| 1.0 | [FEAT]: set width for blog page - **Is your feature request related to a problem? Please describe.**
The blog page has 100vw width. Please set its width to match the other pages.

**Describe the solution you'd like**
Check a page that already has a fixed width, and just copy and paste its class names to the blog page.
<!-- Do not delete this -->
**IMPORTANT**
If it's your first Contribution to this project, please add your details to the Contributors file
File Path: `src/json/Contributors.json` and you have to **MUST** read the contributing guideline before you start. [Here](https://github.com/devvsakib/power-the-web/blob/main/CONTRIBUTING.md)
> If you do not add your details, your PR will be Close
| priority | set width for blog page is your feature request related to a problem please describe blog has width please width according to other s page describe the solution you d like check other page has fixed width just copy and paste the class names to the blog page important if it s your first contribution to this project please add your details to the contributors file file path src json contributors json and you have to must read the contributing guideline before you start if you do not add your details your pr will be close | 1 |
457,046 | 13,151,065,726 | IssuesEvent | 2020-08-09 14:55:15 | chrisjsewell/docutils | https://api.github.com/repos/chrisjsewell/docutils | opened | issue a warning when an image file does not exist [SF:patches:159] | open patches priority-5 |
author: bariumbitmap
created: 2019-07-24 20:45:25.381000
assigned: None
SF_url: https://sourceforge.net/p/docutils/patches/159
attachments:
- https://sourceforge.net/p/docutils/patches/159/attachment/0001-Issue-warning-if-image-file-does-not-exist.patch
rst2html does not issue a warning when an image file does not exist. For example, this will not produce a warning if the file "this-does-not-exist.png" does not exist:
~~~
.. image:: this-does-not-exist.png
~~~
I have attached a patch that issues a warning when file insertion is enabled and the image URI is a file URL or a path.
---
commenter: milde
posted: 2019-08-19 08:50:14.201000
title: #159 issue a warning when an image file does not exist
Ticket moved from /p/docutils/bugs/368/
---
commenter: milde
posted: 2019-08-19 08:59:57.956000
title: #159 issue a warning when an image file does not exist
Thank you for the report and patch.
There are, however two issues:
a) The warning might interfere with the workflow for some users as the referenced file may exist in the target directory or as relative URL of the target document.
b) the "file-insertion-enabled" setting is a safety setting for cases where external files actually become part of the generated document and should not be mixed with other uses. http://docutils.sourceforge.net/docs/user/config.html#file-insertion-enabled
As the use cases differ widely, I recommend checking the links in generated documents manually or in a document generation wrapper.
| 1.0 | issue a warning when an image file does not exist [SF:patches:159] -
author: bariumbitmap
created: 2019-07-24 20:45:25.381000
assigned: None
SF_url: https://sourceforge.net/p/docutils/patches/159
attachments:
- https://sourceforge.net/p/docutils/patches/159/attachment/0001-Issue-warning-if-image-file-does-not-exist.patch
rst2html does not issue a warning when an image file does not exist. For example, this will not produce a warning if the file "this-does-not-exist.png" does not exist:
~~~
.. image:: this-does-not-exist.png
~~~
I have attached a patch that issues a warning when file insertion is enabled and the image URI is a file URL or a path.
---
commenter: milde
posted: 2019-08-19 08:50:14.201000
title: #159 issue a warning when an image file does not exist
Ticket moved from /p/docutils/bugs/368/
---
commenter: milde
posted: 2019-08-19 08:59:57.956000
title: #159 issue a warning when an image file does not exist
Thank you for the report and patch.
There are, however two issues:
a) The warning might interfere with the workflow for some users as the referenced file may exist in the target directory or as relative URL of the target document.
b) the "file-insertion-enabled" setting is a safety setting for cases where external files actually become part of the generated document and should not be mixed with other uses. http://docutils.sourceforge.net/docs/user/config.html#file-insertion-enabled
As the use cases differ widely, I recommend checking the links in generated documents manually or in a document generation wrapper.
| priority | issue a warning when an image file does not exist author bariumbitmap created assigned none sf url attachments does not issue a warning when an image file does not exist for example this will not produce a warning if the file this does no exist png does not exist image this does not exist png i have attached a patch that issues a warning when file insertion is enabled and the image uri is a file url or a path commenter milde posted title issue a warning when an image file does not exist ticket moved from p docutils bugs commenter milde posted title issue a warning when an image file does not exist thank you for the report and patch there are however two issues a the warning might interfere with the workflow for some users as the referenced file may exist in the target directory or as relative url of the target document b the file insertion enabled setting is a safety setting for cases where external files become actually part of the generated document and should not be mixed with other uses as the use cases differ widely i recommend checking the links in generated documents manually or in a document generation wrapper | 1 |
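The idea behind the patch attached to the docutils ticket above can be sketched outside docutils. This is a hypothetical, simplified check — it flags only local paths and `file:` URLs, leaves remote URLs alone, and ignores the `file-insertion-enabled` subtlety raised in the comments:

```python
import os
from urllib.parse import urlparse

def missing_local_image(uri: str, base_dir: str = ".") -> bool:
    """Return True when `uri` points at a local image file that does not
    exist on disk -- the case where a docutils-style warning would be
    issued. Remote URLs (http, https, ...) are never flagged, since the
    referenced file may only exist relative to the target document.
    """
    parsed = urlparse(uri)
    if parsed.scheme not in ("", "file"):
        return False  # remote resource: not checked, no warning
    path = parsed.path if parsed.scheme == "file" else uri
    return not os.path.exists(os.path.join(base_dir, path))

print(missing_local_image("this-does-not-exist.png"))    # -> True (file absent)
print(missing_local_image("https://example.org/x.png"))  # -> False
```

This also makes the maintainer's objection concrete: the check is only meaningful relative to some `base_dir`, which need not be where the generated document will ultimately live.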
72,369 | 24,081,401,760 | IssuesEvent | 2022-09-19 06:59:33 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | multisetAgg breaks on postgres with DSL.row() + aliasing multisetAgg field. | T: Defect C: Functionality P: Medium E: All Editions | ### Expected behavior
I wrote the following query in JOOQ:
```
ctx
.select(row(
USER_GROUP.ID,
USER_GROUP.DESCRIPTION,
multisetAgg(USER_ROLE.ROLE_NAME).convertFrom(r -> r.intoSet(Record1::value1)).as("roles")
)
)
.from(USER_GROUP)
.leftJoin(USER_ROLE).onKey()
.groupBy(USER_GROUP.ID, USER_GROUP.DESCRIPTION)
.fetch()
.forEach(System.out::println);
```
This query works fine using an H2 and an Oracle database, but crashes in Postgres with the following exception:
```
Exception in thread "main" org.jooq.exception.DataAccessException: SQL [select row (JOOQTEST.USER_GROUP.ID, JOOQTEST.USER_GROUP.DESCRIPTION, roles) as nested from JOOQTEST.USER_GROUP left outer join JOOQTEST.USER_ROLE on JOOQTEST.USER_ROLE.USER_GROUP_ID = JOOQTEST.USER_GROUP.ID group by JOOQTEST.USER_GROUP.ID, JOOQTEST.USER_GROUP.DESCRIPTION]; ERROR: column "roles" does not exist
Hinweis: Perhaps you meant to reference the column "user_role.role_name".
Position: 86
at org.jooq_3.17.4.POSTGRES.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:3307)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:678)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:355)
at org.jooq.impl.AbstractResultQuery.fetch(AbstractResultQuery.java:290)
at org.jooq.impl.SelectImpl.fetch(SelectImpl.java:2837)
at JooqTestQuery.main(scratch_142.java:46)
Caused by: org.postgresql.util.PSQLException: ERROR: column "roles" does not exist
Hinweis: Perhaps you meant to reference the column "user_role.role_name".
Position: 86
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2676)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2366)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:496)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:413)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:190)
at org.postgresql.jdbc.PgPreparedStatement.execute(PgPreparedStatement.java:177)
at org.jooq.tools.jdbc.DefaultPreparedStatement.execute(DefaultPreparedStatement.java:219)
at org.jooq.impl.Tools.executeStatementAndGetFirstResultSet(Tools.java:4562)
at org.jooq.impl.AbstractResultQuery.execute(AbstractResultQuery.java:236)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:341)
... 3 more
Disconnected from the target VM, address: '127.0.0.1:51766', transport: 'socket'
Process finished with exit code 1
```
As soon as I either remove the `as("roles")` from my multisetAgg or the surrounding `row(....)` (which in my real application I use for a `.mapping(MyPOJO::new)` call), the query works in Postgres _and_ H2.
There seems to be a problem here: the rendered SQL from jOOQ simply renders the alias instead of the actual field/expression.
### Actual behavior
An exception is thrown, a proper result is expected.
### Steps to reproduce the problem
See above.
### jOOQ Version
JOOQ Professional 3.17.4
### Database product and version
PostgreSQL 10.6 with SqlDialect.POSTGRES, H2 1.4.200
### Java Version
OpenJDK 11
### OS Version
Windows 10 x64
### JDBC driver name and version (include name if unofficial driver)
org.postgresql:postgresql:42.5.0 | 1.0 | multisetAgg breaks on postgres with DSL.row() + aliasing multisetAgg field. - ### Expected behavior
I wrote the following query in JOOQ:
```
ctx
.select(row(
USER_GROUP.ID,
USER_GROUP.DESCRIPTION,
multisetAgg(USER_ROLE.ROLE_NAME).convertFrom(r -> r.intoSet(Record1::value1)).as("roles")
)
)
.from(USER_GROUP)
.leftJoin(USER_ROLE).onKey()
.groupBy(USER_GROUP.ID, USER_GROUP.DESCRIPTION)
.fetch()
.forEach(System.out::println);
```
This query works fine using an H2 and an Oracle database, but crashes in Postgres with the following exception:
```
Exception in thread "main" org.jooq.exception.DataAccessException: SQL [select row (JOOQTEST.USER_GROUP.ID, JOOQTEST.USER_GROUP.DESCRIPTION, roles) as nested from JOOQTEST.USER_GROUP left outer join JOOQTEST.USER_ROLE on JOOQTEST.USER_ROLE.USER_GROUP_ID = JOOQTEST.USER_GROUP.ID group by JOOQTEST.USER_GROUP.ID, JOOQTEST.USER_GROUP.DESCRIPTION]; ERROR: column "roles" does not exist
Hinweis: Perhaps you meant to reference the column "user_role.role_name".
Position: 86
at org.jooq_3.17.4.POSTGRES.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:3307)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:678)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:355)
at org.jooq.impl.AbstractResultQuery.fetch(AbstractResultQuery.java:290)
at org.jooq.impl.SelectImpl.fetch(SelectImpl.java:2837)
at JooqTestQuery.main(scratch_142.java:46)
Caused by: org.postgresql.util.PSQLException: ERROR: column "roles" does not exist
Hinweis: Perhaps you meant to reference the column "user_role.role_name".
Position: 86
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2676)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2366)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:496)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:413)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:190)
at org.postgresql.jdbc.PgPreparedStatement.execute(PgPreparedStatement.java:177)
at org.jooq.tools.jdbc.DefaultPreparedStatement.execute(DefaultPreparedStatement.java:219)
at org.jooq.impl.Tools.executeStatementAndGetFirstResultSet(Tools.java:4562)
at org.jooq.impl.AbstractResultQuery.execute(AbstractResultQuery.java:236)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:341)
... 3 more
Disconnected from the target VM, address: '127.0.0.1:51766', transport: 'socket'
Process finished with exit code 1
```
As soon as I either remove the `as("roles")` from my multisetAgg or the surrounding `row(....)` (which in my real application I use for a `.mapping(MyPOJO::new)` call), the query works in Postgres _and_ H2.
There seems to be a problem here: the rendered SQL from jOOQ simply renders the alias instead of the actual field/expression.
### Actual behavior
An exception is thrown, a proper result is expected.
### Steps to reproduce the problem
See above.
### jOOQ Version
JOOQ Professional 3.17.4
### Database product and version
PostgreSQL 10.6 with SqlDialect.POSTGRES, H2 1.4.200
### Java Version
OpenJDK 11
### OS Version
Windows 10 x64
### JDBC driver name and version (include name if unofficial driver)
org.postgresql:postgresql:42.5.0 | non_priority | multisetagg breaks on postgres with dsl row aliasing multisetagg field expected behavior i wrote the following query in jooq ctx select row user group id user group description multisetagg user role role name convertfrom r r intoset as roles from user group leftjoin user role onkey groupby user group id user group description fetch foreach system out println this query works fine using a and a oracle database but crashes in postgres with the following exception exception in thread main org jooq exception dataaccessexception sql error column roles does not exist hinweis perhaps you meant to reference the column user role role name position at org jooq postgres debug unknown source at org jooq impl tools translate tools java at org jooq impl defaultexecutecontext sqlexception defaultexecutecontext java at org jooq impl abstractquery execute abstractquery java at org jooq impl abstractresultquery fetch abstractresultquery java at org jooq impl selectimpl fetch selectimpl java at jooqtestquery main scratch java caused by org postgresql util psqlexception error column roles does not exist hinweis perhaps you meant to reference the column user role role name position at org postgresql core queryexecutorimpl receiveerrorresponse queryexecutorimpl java at org postgresql core queryexecutorimpl processresults queryexecutorimpl java at org postgresql core queryexecutorimpl execute queryexecutorimpl java at org postgresql jdbc pgstatement executeinternal pgstatement java at org postgresql jdbc pgstatement execute pgstatement java at org postgresql jdbc pgpreparedstatement executewithflags pgpreparedstatement java at org postgresql jdbc pgpreparedstatement execute pgpreparedstatement java at org jooq tools jdbc defaultpreparedstatement execute defaultpreparedstatement java at org jooq impl tools executestatementandgetfirstresultset tools java at org jooq impl abstractresultquery execute abstractresultquery java at org jooq impl 
abstractquery execute abstractquery java more disconnected from the target vm address transport socket process finished with exit code as soon as i either remove the as roles from my multisetagg or the surrounding row which in my real application i use for a mapping mypojo new call the query works in postgres and there seems to be some problem here that the rendered sql from jooq simply renders the alias instead of the actual field expression actual behavior an exception is thrown a proper result is expected steps to reproduce the problem see above jooq version jooq professional database product and version postgresql with sqldialect postgres java version openjdk os version windows jdbc driver name and version include name if unofficial driver org postgresql postgresql | 0 |
52,679 | 6,650,272,816 | IssuesEvent | 2017-09-28 15:46:36 | unavia/MileageApp | https://api.github.com/repos/unavia/MileageApp | closed | Switch Login/Create Account activities to use a Tabbed Layout | authentication design enhancement | Using a Tabbed Layout for the Login/Create Account Activities may be more user-friendly than using two different pages. However, I am not sure what the final result will look like or how it will work. | 1.0 | Switch Login/Create Account activities to use a Tabbed Layout - Using a Tabbed Layout for the Login/Create Account Activities may be more user-friendly than using two different pages. However, I am not sure what the final result will look like or how it will work. | non_priority | switch login create account activities to use a tabbed layout using a tabbed layout for the login create account activities may be more user friendly than using two different pages however i am not sure what the final result will look like or how it will work | 0 |
146,941 | 19,476,087,527 | IssuesEvent | 2021-12-24 12:38:23 | sewace/PhaseShift | https://api.github.com/repos/sewace/PhaseShift | opened | CVE-2019-20149 (High) detected in kind-of-6.0.2.tgz | security vulnerability | ## CVE-2019-20149 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kind-of-6.0.2.tgz</b></p></summary>
<p>Get the native type of a value.</p>
<p>Library home page: <a href="https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz">https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz</a></p>
<p>Path to dependency file: PhaseShift/package.json</p>
<p>Path to vulnerable library: PhaseShift/node_modules/nanomatch/node_modules/kind-of/package.json,PhaseShift/node_modules/randomatic/node_modules/kind-of/package.json,PhaseShift/node_modules/anymatch/node_modules/kind-of/package.json,PhaseShift/node_modules/define-property/node_modules/kind-of/package.json,PhaseShift/node_modules/readdirp/node_modules/kind-of/package.json,PhaseShift/node_modules/snapdragon-node/node_modules/kind-of/package.json,PhaseShift/node_modules/base/node_modules/kind-of/package.json,PhaseShift/node_modules/sane/node_modules/kind-of/package.json</p>
<p>
Dependency Hierarchy:
- react-native-0.59.8.tgz (Root Library)
- cli-1.11.2.tgz
- metro-core-0.51.1.tgz
- jest-haste-map-24.0.0-alpha.6.tgz
- micromatch-2.3.11.tgz
- braces-1.8.5.tgz
- expand-range-1.8.2.tgz
- fill-range-2.2.4.tgz
- randomatic-3.1.1.tgz
- :x: **kind-of-6.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sewace/PhaseShift/commit/4fed7911a2622b8cefac707597cba1816054a701">4fed7911a2622b8cefac707597cba1816054a701</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ctorName in index.js in kind-of v6.0.2 allows external user input to overwrite certain internal attributes via a conflicting name, as demonstrated by 'constructor': {'name':'Symbol'}. Hence, a crafted payload can overwrite this builtin attribute to manipulate the type detection result.
<p>Publish Date: 2019-12-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20149>CVE-2019-20149</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149</a></p>
<p>Release Date: 2020-08-24</p>
<p>Fix Resolution: 6.0.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-20149 (High) detected in kind-of-6.0.2.tgz - ## CVE-2019-20149 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kind-of-6.0.2.tgz</b></p></summary>
<p>Get the native type of a value.</p>
<p>Library home page: <a href="https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz">https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz</a></p>
<p>Path to dependency file: PhaseShift/package.json</p>
<p>Path to vulnerable library: PhaseShift/node_modules/nanomatch/node_modules/kind-of/package.json,PhaseShift/node_modules/randomatic/node_modules/kind-of/package.json,PhaseShift/node_modules/anymatch/node_modules/kind-of/package.json,PhaseShift/node_modules/define-property/node_modules/kind-of/package.json,PhaseShift/node_modules/readdirp/node_modules/kind-of/package.json,PhaseShift/node_modules/snapdragon-node/node_modules/kind-of/package.json,PhaseShift/node_modules/base/node_modules/kind-of/package.json,PhaseShift/node_modules/sane/node_modules/kind-of/package.json</p>
<p>
Dependency Hierarchy:
- react-native-0.59.8.tgz (Root Library)
- cli-1.11.2.tgz
- metro-core-0.51.1.tgz
- jest-haste-map-24.0.0-alpha.6.tgz
- micromatch-2.3.11.tgz
- braces-1.8.5.tgz
- expand-range-1.8.2.tgz
- fill-range-2.2.4.tgz
- randomatic-3.1.1.tgz
- :x: **kind-of-6.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sewace/PhaseShift/commit/4fed7911a2622b8cefac707597cba1816054a701">4fed7911a2622b8cefac707597cba1816054a701</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ctorName in index.js in kind-of v6.0.2 allows external user input to overwrite certain internal attributes via a conflicting name, as demonstrated by 'constructor': {'name':'Symbol'}. Hence, a crafted payload can overwrite this builtin attribute to manipulate the type detection result.
<p>Publish Date: 2019-12-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20149>CVE-2019-20149</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149</a></p>
<p>Release Date: 2020-08-24</p>
<p>Fix Resolution: 6.0.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in kind of tgz cve high severity vulnerability vulnerable library kind of tgz get the native type of a value library home page a href path to dependency file phaseshift package json path to vulnerable library phaseshift node modules nanomatch node modules kind of package json phaseshift node modules randomatic node modules kind of package json phaseshift node modules anymatch node modules kind of package json phaseshift node modules define property node modules kind of package json phaseshift node modules readdirp node modules kind of package json phaseshift node modules snapdragon node node modules kind of package json phaseshift node modules base node modules kind of package json phaseshift node modules sane node modules kind of package json dependency hierarchy react native tgz root library cli tgz metro core tgz jest haste map alpha tgz micromatch tgz braces tgz expand range tgz fill range tgz randomatic tgz x kind of tgz vulnerable library found in head commit a href found in base branch master vulnerability details ctorname in index js in kind of allows external user input to overwrite certain internal attributes via a conflicting name as demonstrated by constructor name symbol hence a crafted payload can overwrite this builtin attribute to manipulate the type detection result publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
521,422 | 15,109,367,170 | IssuesEvent | 2021-02-08 17:45:24 | kubernetes-sigs/cluster-api | https://api.github.com/repos/kubernetes-sigs/cluster-api | closed | [cabpk] Show the actual kubeadm configuration applied to a machine | area/bootstrap area/ux help wanted kind/feature priority/backlog | /kind feature
**Describe the solution you'd like**
The typical kubeadm configuration generated by CABPK is minimal, and kubeadm applies many defaults, the values of which are not observable in any CAPI objects. Today, my options are to inspect `/tmp/kubeadm.yaml` on the machine, or fetch the kubeadm configuration from the cluster's control plane.
**Anything else you would like to add:**
Being able to observe the actual configuration applied to the machine will improve the debugging experience.
| 1.0 | [cabpk] Show the actual kubeadm configuration applied to a machine - /kind feature
**Describe the solution you'd like**
The typical kubeadm configuration generated by CABPK is minimal, and kubeadm applies many defaults, the values of which are not observable in any CAPI objects. Today, my options are to inspect `/tmp/kubeadm.yaml` on the machine, or fetch the kubeadm configuration from the cluster's control plane.
**Anything else you would like to add:**
Being able to observe the actual configuration applied to the machine will improve the debugging experience.
| priority | show the actual kubeadm configuration applied to a machine kind feature describe the solution you d like the typical kubeadm configuration generated by cabpk is minimal and kubeadm applies many defaults the values of which are not observable in any capi objects today my options are to inspect tmp kubeadm yaml on the machine or fetch the kubeadm configuration from the cluster s control plane anything else you would like to add being able to observe the actual configuration applied to the machine will improve the debugging experience | 1 |
298,717 | 22,553,299,259 | IssuesEvent | 2022-06-27 08:02:39 | kayakapagan/slack-gif-application | https://api.github.com/repos/kayakapagan/slack-gif-application | closed | create a search gifs shortcut as an other use case | documentation enhancement | - [x] Create the shortcut
- [x] Ask channel with the search query
- [x] Add buttons to send each gif directly under each gif
- [x] Refactor the code and clean the comments
- [x] Update `README.md` accordingly | 1.0 | create a search gifs shortcut as an other use case - - [x] Create the shortcut
- [x] Ask channel with the search query
- [x] Add buttons to send each gif directly under each gif
- [x] Refactor the code and clean the comments
- [x] Update `README.md` accordingly | non_priority | create a search gifs shortcut as an other use case create the shortcut ask channel with the search query add buttons to send each gif directly under each gif refactor the code and clean the comments update readme md accordingly | 0 |
780,345 | 27,390,795,403 | IssuesEvent | 2023-02-28 16:08:02 | LewisTrundle/L4-Individual-Project | https://api.github.com/repos/LewisTrundle/L4-Individual-Project | closed | Create Virtual Lead | Priority 1 New Feature Functionality | ## Summary
This feature should allow the robot to keep up with the mid-point of a users phone.
This utilises acuro marker detection
## Priority
1
## Time Estimate
5 hours
## Extra Notes
| 1.0 | Create Virtual Lead - ## Summary
This feature should allow the robot to keep up with the mid-point of a users phone.
This utilises acuro marker detection
## Priority
1
## Time Estimate
5 hours
## Extra Notes
| priority | create virtual lead summary this feature should allow the robot to keep up with the mid point of a users phone this utilises acuro marker detection priority time estimate hours extra notes | 1 |
201,616 | 15,803,663,330 | IssuesEvent | 2021-04-03 15:13:49 | auroral-ui/hexo-theme-aurora | https://api.github.com/repos/auroral-ui/hexo-theme-aurora | closed | 请问 i18n是怎么设置的=.= | documentation question | en,cn 两个json加进去没有转换 是 menu.mydownloads
<img width="292" alt="截屏2021-03-31 下午7 08 35" src="https://user-images.githubusercontent.com/72391178/113135453-ba927280-9254-11eb-8ef0-229ba81ab127.png">
<img width="334" alt="截屏2021-03-31 下午7 08 09" src="https://user-images.githubusercontent.com/72391178/113135424-b36b6480-9254-11eb-96b1-c9900cf2085e.png">
<img width="446" alt="截屏2021-03-31 下午7 09 05" src="https://user-images.githubusercontent.com/72391178/113135523-cda54280-9254-11eb-970c-47730f76907b.png">
| 1.0 | 请问 i18n是怎么设置的=.= - en,cn 两个json加进去没有转换 是 menu.mydownloads
<img width="292" alt="截屏2021-03-31 下午7 08 35" src="https://user-images.githubusercontent.com/72391178/113135453-ba927280-9254-11eb-8ef0-229ba81ab127.png">
<img width="334" alt="截屏2021-03-31 下午7 08 09" src="https://user-images.githubusercontent.com/72391178/113135424-b36b6480-9254-11eb-96b1-c9900cf2085e.png">
<img width="446" alt="截屏2021-03-31 下午7 09 05" src="https://user-images.githubusercontent.com/72391178/113135523-cda54280-9254-11eb-970c-47730f76907b.png">
| non_priority | 请问 en cn 两个json加进去没有转换 是 menu mydownloads img width alt src img width alt src img width alt src | 0 |
4,397 | 3,368,144,585 | IssuesEvent | 2015-11-22 19:14:20 | jgirald/ES2015C | https://api.github.com/repos/jgirald/ES2015C | closed | Modelar y aplicar texturas entrada muralla persa (PWG) | Building Persian Team B Textures | **Descripción:** Modelar la estructura de la entrada de la muralla de la civilización persa y aplicar texturas
**Estimación de esfuerzo por horas:** 2h
**Definición de DONE:** La apariencia estructural debe de ser de una entrada, formando parte de las murallas (edificio de defensa) teniendo algun vinculo persa en cuanto a su estilo. Tiene que tener las misma anchura que la murallada.
**Responsable:** Arnau Sánchez, TeamB | 1.0 | Modelar y aplicar texturas entrada muralla persa (PWG) - **Descripción:** Modelar la estructura de la entrada de la muralla de la civilización persa y aplicar texturas
**Estimación de esfuerzo por horas:** 2h
**Definición de DONE:** La apariencia estructural debe de ser de una entrada, formando parte de las murallas (edificio de defensa) teniendo algun vinculo persa en cuanto a su estilo. Tiene que tener las misma anchura que la murallada.
**Responsable:** Arnau Sánchez, TeamB | non_priority | modelar y aplicar texturas entrada muralla persa pwg descripción modelar la estructura de la entrada de la muralla de la civilización persa y aplicar texturas estimación de esfuerzo por horas definición de done la apariencia estructural debe de ser de una entrada formando parte de las murallas edificio de defensa teniendo algun vinculo persa en cuanto a su estilo tiene que tener las misma anchura que la murallada responsable arnau sánchez teamb | 0 |
23,965 | 5,007,901,479 | IssuesEvent | 2016-12-12 17:58:55 | minio/minio | https://api.github.com/repos/minio/minio | closed | unable to do cloudberry backup because of "pathing" error? | documentation faq windows | <!--- Provide a general summary of the issue in the Title above -->
Could very well be user but here is the information. Any assistance will be helpful.
error code:
time="2016-11-25T21:41:05-06:00" level=error msg="Unable to create an object." cause="mkdir \\?\c:\backups\backups\testing\MBS-eb2917ef-a3bc-4e8e-8652-3a1cb2fdb7e8\CBB_HUNTING_WABBIT\C:: The filename, directory name, or volume label syntax is incorrect." source="[object-handlers.go:447:objectAPIHandlers.PutObjectHandler()]" stack="/home/harsha/mygo/src/github.com/minio/minio/cmd/fs-v1.go:465:fsObjects.PutObject :195:(*fsObjects).PutObject /home/harsha/mygo/src/github.com/minio/minio/cmd/object-handlers.go:433:objectAPIHandlers.PutObjectHandler /home/harsha/mygo/src/github.com/minio/minio/cmd/api-router.go:58:(objectAPIHandlers).PutObjectHandler-fm"
## Expected Behavior
Works properly as documented.
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Current Behavior
Does not work with a particular set of files.
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
## Possible Solution
No solution yet.
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used: latest minio release
* Environment name and version CloudBerry Explorer
* Operating System and version: Windows 10.
| 1.0 | unable to do cloudberry backup because of "pathing" error? - <!--- Provide a general summary of the issue in the Title above -->
Could very well be user but here is the information. Any assistance will be helpful.
error code:
time="2016-11-25T21:41:05-06:00" level=error msg="Unable to create an object." cause="mkdir \\?\c:\backups\backups\testing\MBS-eb2917ef-a3bc-4e8e-8652-3a1cb2fdb7e8\CBB_HUNTING_WABBIT\C:: The filename, directory name, or volume label syntax is incorrect." source="[object-handlers.go:447:objectAPIHandlers.PutObjectHandler()]" stack="/home/harsha/mygo/src/github.com/minio/minio/cmd/fs-v1.go:465:fsObjects.PutObject :195:(*fsObjects).PutObject /home/harsha/mygo/src/github.com/minio/minio/cmd/object-handlers.go:433:objectAPIHandlers.PutObjectHandler /home/harsha/mygo/src/github.com/minio/minio/cmd/api-router.go:58:(objectAPIHandlers).PutObjectHandler-fm"
## Expected Behavior
Works properly as documented.
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Current Behavior
Does not work with a particular set of files.
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
## Possible Solution
No solution yet.
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used: latest minio release
* Environment name and version CloudBerry Explorer
* Operating System and version: Windows 10.
| non_priority | unable to do cloudberry backup because of pathing error could very well be user but here is the information any assistance will be helpful error code time level error msg unable to create an object cause mkdir c backups backups testing mbs cbb hunting wabbit c the filename directory name or volume label syntax is incorrect source stack home harsha mygo src github com minio minio cmd fs go fsobjects putobject fsobjects putobject home harsha mygo src github com minio minio cmd object handlers go objectapihandlers putobjecthandler home harsha mygo src github com minio minio cmd api router go objectapihandlers putobjecthandler fm expected behavior works properly as documented current behavior does not work with a particular set of files possible solution no solution yet your environment version used latest minio release environment name and version cloudberry explorer operating system and version windows | 0 |
12,114 | 14,278,920,046 | IssuesEvent | 2020-11-23 00:52:30 | hime-ime/hime | https://api.github.com/repos/hime-ime/hime | closed | 在Gnome-Shell的Tray顯示HIME的ICON | compatibility gtk | 其實我自己已經解決了,但是希望可以把方法提供給其他用Gnome-Shell的User。如不合規定請刪除
安裝TopIcon 這個Gnome-Shell Extention就OK了。
問題是TopIcon只Support 3.10以上的Gnome-Shell
只好請自行升級Gnome-Shell到3.10以上的版本...
| True | 在Gnome-Shell的Tray顯示HIME的ICON - 其實我自己已經解決了,但是希望可以把方法提供給其他用Gnome-Shell的User。如不合規定請刪除
安裝TopIcon 這個Gnome-Shell Extention就OK了。
問題是TopIcon只Support 3.10以上的Gnome-Shell
只好請自行升級Gnome-Shell到3.10以上的版本...
| non_priority | 在gnome shell的tray顯示hime的icon 其實我自己已經解決了,但是希望可以把方法提供給其他用gnome shell的user。如不合規定請刪除 安裝topicon 這個gnome shell extention就ok了。 問題是topicon只support shell 只好請自行升級gnome | 0 |
195,091 | 6,902,671,536 | IssuesEvent | 2017-11-25 23:50:27 | FRCteam4909/BionicFramework | https://api.github.com/repos/FRCteam4909/BionicFramework | closed | Neopixels via I2C | Priority: High Status: Completed Type: Enhancement | [SLAVE_I2C.ino](https://github.com/FRCteam4909/BionicFramework/blob/master/arduino/SLAVE_I2C/SLAVE_I2C.ino) needs to updated to work with the NeoPixels.
These are the NeoPixels we're going to use on our bots: https://www.adafruit.com/product/1460 | 1.0 | Neopixels via I2C - [SLAVE_I2C.ino](https://github.com/FRCteam4909/BionicFramework/blob/master/arduino/SLAVE_I2C/SLAVE_I2C.ino) needs to updated to work with the NeoPixels.
These are the NeoPixels we're going to use on our bots: https://www.adafruit.com/product/1460 | priority | neopixels via needs to updated to work with the neopixels these are the neopixels we re going to use on our bots | 1 |
742,127 | 25,839,016,106 | IssuesEvent | 2022-12-12 22:12:24 | PikeNote/taskbar-groups-pike-beta | https://api.github.com/repos/PikeNote/taskbar-groups-pike-beta | closed | Re-evaluate Error Catching | bug high priority | Provide better error catching when something is not found.
Make sure no major exceptions are thrown and the program crashing or bugging out in ways they are not supposed to. | 1.0 | Re-evaluate Error Catching - Provide better error catching when something is not found.
Make sure no major exceptions are thrown and the program crashing or bugging out in ways they are not supposed to. | priority | re evaluate error catching provide better error catching when something is not found make sure no major exceptions are thrown and the program crashing or bugging out in ways they are not supposed to | 1 |
58,517 | 8,273,773,633 | IssuesEvent | 2018-09-17 07:31:42 | polkadot-js/api | https://api.github.com/repos/polkadot-js/api | opened | API documentation does not render correctly | documentation | https://github.com/polkadot-js/api/blob/master/docs/api/classes/api.md
Seems like there is a missing "```" somewhere. Example doesn't render as example, text renders as, well, text. | 1.0 | API documentation does not render correctly - https://github.com/polkadot-js/api/blob/master/docs/api/classes/api.md
Seems like there is a missing "```" somewhere. Example doesn't render as example, text renders as, well, text. | non_priority | api documentation does not render correctly seems like there is a missing somewhere example doesn t render as example text renders as well text | 0 |
178,807 | 14,679,901,272 | IssuesEvent | 2020-12-31 08:25:48 | ThatsNoMoon/highlights | https://api.github.com/repos/ThatsNoMoon/highlights | closed | Write documentation | documentation enhancement internal | Internal functions, structs, modules ect. should have inline documentation. | 1.0 | Write documentation - Internal functions, structs, modules ect. should have inline documentation. | non_priority | write documentation internal functions structs modules ect should have inline documentation | 0 |
33,313 | 12,214,369,862 | IssuesEvent | 2020-05-01 09:45:24 | olivialancaster/amplify-cli | https://api.github.com/repos/olivialancaster/amplify-cli | opened | CVE-2020-11022 (Medium) detected in jquery-1.7.2.min.js | security vulnerability | ## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.2.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/amplify-cli/packages/graphql-versioned-transformer/node_modules/jmespath/index.html</p>
<p>Path to vulnerable library: /amplify-cli/packages/graphql-versioned-transformer/node_modules/jmespath/index.html,/amplify-cli/packages/amplify-cli/node_modules/jmespath/index.html,/amplify-cli/packages/graphql-http-transformer/node_modules/jmespath/index.html,/amplify-cli/packages/graphql-transformers-e2e-tests/node_modules/jmespath/index.html,/amplify-cli/packages/graphql-dynamodb-transformer/node_modules/jmespath/index.html,/amplify-cli/packages/graphql-transformer-common/node_modules/jmespath/index.html,/amplify-cli/packages/graphql-connection-transformer/node_modules/jmespath/index.html,/amplify-cli/packages/amplify-provider-awscloudformation/node_modules/jmespath/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/olivialancaster/amplify-cli/commit/9f2ce9dfb5e03e6067b98a3da44bb4f8d80dd01d">9f2ce9dfb5e03e6067b98a3da44bb4f8d80dd01d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
| True | CVE-2020-11022 (Medium) detected in jquery-1.7.2.min.js - ## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.2.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/amplify-cli/packages/graphql-versioned-transformer/node_modules/jmespath/index.html</p>
<p>Path to vulnerable library: /amplify-cli/packages/graphql-versioned-transformer/node_modules/jmespath/index.html,/amplify-cli/packages/amplify-cli/node_modules/jmespath/index.html,/amplify-cli/packages/graphql-http-transformer/node_modules/jmespath/index.html,/amplify-cli/packages/graphql-transformers-e2e-tests/node_modules/jmespath/index.html,/amplify-cli/packages/graphql-dynamodb-transformer/node_modules/jmespath/index.html,/amplify-cli/packages/graphql-transformer-common/node_modules/jmespath/index.html,/amplify-cli/packages/graphql-connection-transformer/node_modules/jmespath/index.html,/amplify-cli/packages/amplify-provider-awscloudformation/node_modules/jmespath/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/olivialancaster/amplify-cli/commit/9f2ce9dfb5e03e6067b98a3da44bb4f8d80dd01d">9f2ce9dfb5e03e6067b98a3da44bb4f8d80dd01d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
| non_priority | cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm amplify cli packages graphql versioned transformer node modules jmespath index html path to vulnerable library amplify cli packages graphql versioned transformer node modules jmespath index html amplify cli packages amplify cli node modules jmespath index html amplify cli packages graphql http transformer node modules jmespath index html amplify cli packages graphql transformers tests node modules jmespath index html amplify cli packages graphql dynamodb transformer node modules jmespath index html amplify cli packages graphql transformer common node modules jmespath index html amplify cli packages graphql connection transformer node modules jmespath index html amplify cli packages amplify provider awscloudformation node modules jmespath index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details in jquery before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery | 0 |
29,462 | 14,135,575,239 | IssuesEvent | 2020-11-10 02:03:27 | cybersemics/em | https://api.github.com/repos/cybersemics/em | closed | Staged local loading | performance | Large amounts of data (5,000+ thoughts) causes the app to load very slowly from IndexedDB. Instead of loading the data all at once (`loadLocalState`), load ~the data and render in stages~ only the thoughts that are within a few steps from the cursor. The `thoughtIndex` and `contextIndex` will then serve as a cache of thoughts rather than a complete index.
Thoughts should be loaded in two stages. The first stage is the most critical as it is the "time-till-first-load" that most affects the user's perception of the loading time.
1. Load `expanded` thoughts and their children.
2. Load children, grandchildren, and great grandchildren.
After the initial load, thoughts should be swapped out as follows:
- [ ] When the `cursor` moves, recalculate `expanded` and load thoughts as described above for the new `cursor`.
- [ ] Show “Loading...” in place of children that are still loading (also on startup).
- [ ] Delete least recently accessed thoughts when they exceed a configurable cache size. | True | Staged local loading - Large amounts of data (5,000+ thoughts) causes the app to load very slowly from IndexedDB. Instead of loading the data all at once (`loadLocalState`), load ~the data and render in stages~ only the thoughts that are within a few steps from the cursor. The `thoughtIndex` and `contextIndex` will then serve as a cache of thoughts rather than a complete index.
Thoughts should be loaded in two stages. The first stage is the most critical as it is the "time-till-first-load" that most affects the user's perception of the loading time.
1. Load `expanded` thoughts and their children.
2. Load children, grandchildren, and great grandchildren.
After the initial load, thoughts should be swapped out as follows:
- [ ] When the `cursor` moves, recalculate `expanded` and load thoughts as described above for the new `cursor`.
- [ ] Show “Loading...” in place of children that are still loading (also on startup).
- [ ] Delete least recently accessed thoughts when they exceed a configurable cache size. | non_priority | staged local loading large amounts of data thoughts causes the app to load very slowly from indexeddb instead of loading the data all at once loadlocalstate load the data and render in stages only the thoughts that are within a few steps from the cursor the thoughtindex and contextindex will then serve as a cache of thoughts rather than a complete index thoughts should be loaded in two stages the first stage is the most critical as it is the time till first load that most affects the user s perception of the loading time load expanded thoughts and their children load children grandchildren and great grandchildren after the initial load thoughts should be swapped out as follows when the cursor moves recalculate expanded and load thoughts as described above for the new cursor show “loading ” in place of children that are still loading also on startup delete least recently accessed thoughts when they exceed a configurable cache size | 0 |