| Unnamed: 0 (int64, 0–832k) | id (float64, 2.49B–32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 7–112) | repo_url (string, length 36–141) | action (string, 3 classes) | title (string, length 1–744) | labels (string, length 4–574) | body (string, length 9–211k) | index (string, 10 classes) | text_combine (string, length 96–211k) | label (string, 2 classes) | text (string, length 96–188k) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
407,008
| 11,905,170,184
|
IssuesEvent
|
2020-03-30 18:05:52
|
INN/Google-Analytics-Popular-Posts
|
https://api.github.com/repos/INN/Google-Analytics-Popular-Posts
|
closed
|
Make sure that the posts returned by AnayticBridgePopularPosts are cached
|
priority: low type: improvement
|
If they aren't, then the cache should be in the class and the cache should be reset by the cron job.
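The caching scheme this issue asks for can be sketched as follows (class and method names are illustrative assumptions, not the plugin's actual API): the cached posts live on the class, and a scheduled (cron) job calls a reset hook to invalidate them.

```typescript
// Minimal sketch of class-held caching with cron-style invalidation.
class PopularPostsCache {
  private cached: string[] | null = null;

  getPosts(fetchPosts: () => string[]): string[] {
    if (this.cached === null) {
      this.cached = fetchPosts(); // populate once, reuse until reset
    }
    return this.cached;
  }

  reset(): void {
    // Called by the scheduled (cron) job to force a refresh.
    this.cached = null;
  }
}

const cache = new PopularPostsCache();
let calls = 0;
const fetchPosts = () => { calls++; return ["post-a", "post-b"]; };

cache.getPosts(fetchPosts);
cache.getPosts(fetchPosts); // served from cache; fetchPosts not called again
cache.reset();              // cron-style invalidation
cache.getPosts(fetchPosts);
console.log(calls);         // 2
```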
|
1.0
|
Make sure that the posts returned by AnayticBridgePopularPosts are cached - If they aren't, then the cache should be in the class and the cache should be reset by the cron job.
|
non_process
|
make sure that the posts returned by anayticbridgepopularposts are cached if they aren t then the cache should be in the class and the cache should be reset by the cron job
| 0
|
67,988
| 14,894,532,746
|
IssuesEvent
|
2021-01-21 07:42:51
|
alexcloudstar/hacker-news-search
|
https://api.github.com/repos/alexcloudstar/hacker-news-search
|
opened
|
CVE-2019-11358 (Medium) detected in jquery-2.1.4.min.js
|
security vulnerability
|
## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: hacker-news-search/node_modules/js-base64/.attic/test-moment/index.html</p>
<p>Path to vulnerable library: hacker-news-search/node_modules/js-base64/.attic/test-moment/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alexcloudstar/hacker-news-search/commit/cdf4bb0cc21366568063832dbdb07badbcb825ab">cdf4bb0cc21366568063832dbdb07badbcb825ab</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
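The Object.prototype pollution this CVE describes can be reproduced with a standalone sketch (a minimal recreation of the vulnerable deep-merge pattern, not jQuery's actual source):

```typescript
// A naive recursive "extend" that walks every enumerable key, including
// "__proto__". Merging an unsanitized source object therefore writes into
// Object.prototype, affecting every object in the runtime.
function naiveDeepExtend(target: any, source: any): any {
  for (const key in source) {
    const value = source[key];
    if (value && typeof value === "object") {
      if (!target[key]) target[key] = {};
      // When key is "__proto__", target[key] resolves to Object.prototype,
      // so the recursion merges the payload into the global prototype.
      naiveDeepExtend(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an own enumerable property.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
naiveDeepExtend({}, payload);

const victim: any = {};       // a completely unrelated object
console.log(victim.polluted); // true: the shared prototype was polluted
```

jQuery 3.4.0 addresses this by skipping the `__proto__` key during the merge, which is why the suggested fix is simply upgrading.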
|
True
|
CVE-2019-11358 (Medium) detected in jquery-2.1.4.min.js - ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: hacker-news-search/node_modules/js-base64/.attic/test-moment/index.html</p>
<p>Path to vulnerable library: hacker-news-search/node_modules/js-base64/.attic/test-moment/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alexcloudstar/hacker-news-search/commit/cdf4bb0cc21366568063832dbdb07badbcb825ab">cdf4bb0cc21366568063832dbdb07badbcb825ab</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file hacker news search node modules js attic test moment index html path to vulnerable library hacker news search node modules js attic test moment index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
4,160
| 7,105,590,995
|
IssuesEvent
|
2018-01-16 14:11:22
|
zotero/zotero
|
https://api.github.com/repos/zotero/zotero
|
closed
|
Option to delay updating citations in documents
|
Word Processor Integration
|
Moved from zotero/zotero-word-for-windows-integration#35
See discussion https://forums.zotero.org/discussion/64960/feature-suggestion-delay-updating-citations-in-documents
|
1.0
|
Option to delay updating citations in documents - Moved from zotero/zotero-word-for-windows-integration#35
See discussion https://forums.zotero.org/discussion/64960/feature-suggestion-delay-updating-citations-in-documents
|
process
|
option to delay updating citations in documents moved from zotero zotero word for windows integration see discussion
| 1
|
12,727
| 15,096,185,226
|
IssuesEvent
|
2021-02-07 14:08:59
|
Ghost-chu/QuickShop-Reremake
|
https://api.github.com/repos/Ghost-chu/QuickShop-Reremake
|
closed
|
[BUG] Holograms won't go away after removing shop
|
Bug Cannot Reproduce Help Wanted In Process Priority:Major
|
**Describe the bug**
When you remove a shop, the item/hologram comes back after a second or two
**To Reproduce**
Steps to reproduce the behavior:
1. Create a random shop
2. Remove the shop
3. The hologram comes back after a second or two
**Expected behavior**
The hologram shouldn't come back because the shop doesn't exist anymore.
**Screenshots**

**Paste link:**
It failed to create a link (api exceeded file size) and there is no paste file in the QuickShop folder.

**Additional context**
The version of QuickShop I'm using is 4.0.6.0

|
1.0
|
[BUG] Holograms won't go away after removing shop - **Describe the bug**
When you remove a shop, the item/hologram comes back after a second or two
**To Reproduce**
Steps to reproduce the behavior:
1. Create a random shop
2. Remove the shop
3. The hologram comes back after a second or two
**Expected behavior**
The hologram shouldn't come back because the shop doesn't exist anymore.
**Screenshots**

**Paste link:**
It failed to create a link (api exceeded file size) and there is no paste file in the QuickShop folder.

**Additional context**
The version of QuickShop I'm using is 4.0.6.0

|
process
|
holograms won t go away after removing shop describe the bug when you remove a shop the item hologram comes back after a second or two to reproduce steps to reproduce the behavior create a random shop remove the shop the hologram comes back after a second or two expected behavior the hologram shouldn t come back because the shop doesn t exist anymore screenshots paste link it failed to create a link api exceeded file size and there is no paste file in the quickshop folder additional context the version of quickshop i m using is
| 1
|
21,300
| 28,496,630,939
|
IssuesEvent
|
2023-04-18 14:39:06
|
inmanta/inmanta-core
|
https://api.github.com/repos/inmanta/inmanta-core
|
opened
|
install pyadr through Makefile
|
process
|
`pyadr` is currently part of `requirements.dev.txt` but it is never used by the CI and it doesn't seem to be actively maintained. To prevent it holding back other dependencies it was decided to drop it from the requirement file and install it from the Makefile directly instead. Other repos should be checked for `pyadr` in their requirements files.
Created as follow-up for inmanta/irt#1592
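A minimal sketch of the proposed change (target name and install command are assumptions, not inmanta-core's actual Makefile):

```makefile
# Install pyadr directly from the Makefile instead of pinning it in
# requirements.dev.txt, so it cannot hold back other dependencies.
.PHONY: install-pyadr
install-pyadr:
	pip install pyadr
```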
|
1.0
|
install pyadr through Makefile - `pyadr` is currently part of `requirements.dev.txt` but it is never used by the CI and it doesn't seem to be actively maintained. To prevent it holding back other dependencies it was decided to drop it from the requirement file and install it from the Makefile directly instead. Other repos should be checked for `pyadr` in their requirements files.
Created as follow-up for inmanta/irt#1592
|
process
|
install pyadr through makefile pyadr is currently part of requirements dev txt but it is never used by the ci and it doesn t seem to be actively maintained to prevent it holding back other dependencies it was decided to drop it from the requirement file and install it from the makefile directly instead other repos should be checked for pyadr in their requirements files created as follow up for inmanta irt
| 1
|
10,844
| 13,624,221,958
|
IssuesEvent
|
2020-09-24 07:43:38
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
r.rgb layers are not correctly generated
|
Bug Feedback Processing
|
I am trying to use r.rgb to split my raster into three separate rasters (R, G, B bands).
Each time I try r.rgb I get the red error message:
> The following layers were not correctly generated.
> • C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/a3a562fa8e1a423ca4c8698e923ca01a/red.tif
> • C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/c396aa3b10a84565a52ada1c53cb9751/green.tif
> • C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/eb24f00c75eb4813bfe7f10af7238d85/blue.tif
> You can check the 'Log Messages Panel' in QGIS main window to find more information about the execution of the algorithm.
There's no difference whether I choose to save as a temp file or not. I tried on two computers with different RGB rasters, both on Win 10. I think the problem is that, after importing the raster maps, it is not able to save them in the temp folder?
> Importing raster map <rast_5f6b63079109d2.red>...
> 0..3..6..9..12..15..18..21..24..27..30..33..36..39..42..45..48..51..54..57..60..63..66..69..72..75..78..81..84..87..90..93..96..99..100
> ERROR: raster map <rast_5f6b63079109d2red> could not be found.
Help appreciated!
This is the complete log:
> Win 10
> QGIS version: 3.14.16-Pi
> QGIS code revision: df27394552
> Qt version: 5.11.2
> GDAL version: 3.0.4
> GEOS version: 3.8.1-CAPI-1.13.3
> PROJ version: Rel. 6.3.2, May 1st, 2020
> Processing algorithm…
> Algorithm 'r.rgb' starting…
> Input parameters:
> { 'GRASS_RASTER_FORMAT_META' : '', 'GRASS_RASTER_FORMAT_OPT' : '', 'GRASS_REGION_CELLSIZE_PARAMETER' : 0, 'GRASS_REGION_PARAMETER' : None, 'blue' : 'TEMPORARY_OUTPUT', 'green' : 'TEMPORARY_OUTPUT', 'input' : 'C:/Users/Felix/Desktop/RGB Test.tif', 'red' : 'TEMPORARY_OUTPUT' }
>
> g.proj -c proj4="+proj=utm +zone=32 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs"
> r.in.gdal input="C:\Users\Felix\Desktop\RGB Test.tif" output="rast_5f6b63079109d2" --overwrite -o
> g.region n=5666567.6913 s=5666536.8635 e=510504.1778 w=510443.5507 res=0.012541808026475658
> g.region raster=rast_5f6b63079109d2red
> r.out.gdal -t -m input="rast_5f6b63079109d2red" output="C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/a3a562fa8e1a423ca4c8698e923ca01a/red.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
> g.region raster=rast_5f6b63079109d2green
> r.out.gdal -t -m input="rast_5f6b63079109d2green" output="C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/c396aa3b10a84565a52ada1c53cb9751/green.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
> g.region raster=rast_5f6b63079109d2blue
> r.out.gdal -t -m input="rast_5f6b63079109d2blue" output="C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/eb24f00c75eb4813bfe7f10af7238d85/blue.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
> Starting GRASS GIS...
> WARNING: Locking concurrent access to a mapset is not possible on Windows.
> Cleaning up temporary files...
> Executing <C:\Users\Felix\AppData\Local\Temp\processing_hQOjSy\grassdata\grass_batch_job.cmd> ...
> C:\Users\Felix\Documents>chcp 1252 1>NUL
> C:\Users\Felix\Documents>g.proj -c proj4="+proj=utm +zone=32 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs"
> The default region was updated to the new projection, but if you have multiple mapsets, `g.region -d` should be run in each to update the region from the default.
> Projection information updated
> C:\Users\Felix\Documents>r.in.gdal input="C:\Users\Felix\Desktop\RGB Test.tif" output="rast_5f6b63079109d2" --overwrite -o
> Over-riding projection check.
> Importing 4 raster bands...
> Importing raster map <rast_5f6b63079109d2.red>...
> 0..3..6..9..12..15..18..21..24..27..30..33..36..39..42..45..48..51..54..57..60..63..66..69..72..75..78..81..84..87..90..93..96..99..100
> Importing raster map <rast_5f6b63079109d2.green>...
> 0..3..6..9..12..15..18..21..24..27..30..33..36..39..42..45..48..51..54..57..60..63..66..69..72..75..78..81..84..87..90..93..96..99..100
> Importing raster map <rast_5f6b63079109d2.blue>...
> 0..3..6..9..12..15..18..21..24..27..30..33..36..39..42..45..48..51..54..57..60..63..66..69..72..75..78..81..84..87..90..93..96..99..100
> Importing raster map <rast_5f6b63079109d2.alpha>...
> 0..3..6..9..12..15..18..21..24..27..30..33..36..39..42..45..48..51..54..57..60..63..66..69..72..75..78..81..84..87..90..93..96..99..100
> C:\Users\Felix\Documents>g.region n=5666567.6913 s=5666536.8635 e=510504.1778 w=510443.5507 res=0.012541808026475658
> C:\Users\Felix\Documents>g.region raster=rast_5f6b63079109d2red
> ERROR: Raster map <rast_5f6b63079109d2red> could not be found.
> C:\Users\Felix\Documents>r.out.gdal -t -m input="rast_5f6b63079109d2red" output="C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/a3a562fa8e1a423ca4c8698e923ca01a/red.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
> ERROR: Raster map or group <rast_5f6b63079109d2red> not found.
> C:\Users\Felix\Documents>g.region raster=rast_5f6b63079109d2green
> ERROR: Raster map <rast_5f6b63079109d2green> could not be found.
> C:\Users\Felix\Documents>r.out.gdal -t -m input="rast_5f6b63079109d2green" output="C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/c396aa3b10a84565a52ada1c53cb9751/green.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
> ERROR: Raster map or group <rast_5f6b63079109d2green> not found.
> C:\Users\Felix\Documents>g.region raster=rast_5f6b63079109d2blue
> ERROR: Raster map <rast_5f6b63079109d2blue> could not be found.
> C:\Users\Felix\Documents>r.out.gdal -t -m input="rast_5f6b63079109d2blue" output="C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/eb24f00c75eb4813bfe7f10af7238d85/blue.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
> ERROR: Raster map or group <rast_5f6b63079109d2blue> not found.
> C:\Users\Felix\Documents>exit
> Execution of <C:\Users\Felix\AppData\Local\Temp\processing_hQOjSy\grassdata\grass_batch_job.cmd> finished.
> Cleaning up temporary files...
> Press any key . . .
> Execution completed in 2.49 seconds
> Results:
> {'blue': <QgsProcessingOutputLayerDefinition {'sink':TEMPORARY_OUTPUT, 'createOptions': {'fileEncoding': 'System'}}>,
> 'green': <QgsProcessingOutputLayerDefinition {'sink':TEMPORARY_OUTPUT, 'createOptions': {'fileEncoding': 'System'}}>,
> 'red': <QgsProcessingOutputLayerDefinition {'sink':TEMPORARY_OUTPUT, 'createOptions': {'fileEncoding': 'System'}}>}
>
> Loading resulting layers
> The following layers were not correctly generated.
> • C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/a3a562fa8e1a423ca4c8698e923ca01a/red.tif
> • C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/c396aa3b10a84565a52ada1c53cb9751/green.tif
> • C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/eb24f00c75eb4813bfe7f10af7238d85/blue.tif
> You can check the 'Log Messages Panel' in QGIS main window to find more information about the execution of the algorithm.
|
1.0
|
r.rgb layers are not correctly generated - I am trying to use r.rgb to split my raster into three separate rasters (R, G, B bands).
Each time I try r.rgb I get the red error message:
> The following layers were not correctly generated.
> • C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/a3a562fa8e1a423ca4c8698e923ca01a/red.tif
> • C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/c396aa3b10a84565a52ada1c53cb9751/green.tif
> • C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/eb24f00c75eb4813bfe7f10af7238d85/blue.tif
> You can check the 'Log Messages Panel' in QGIS main window to find more information about the execution of the algorithm.
There's no difference whether I choose to save as a temp file or not. I tried on two computers with different RGB rasters, both on Win 10. I think the problem is that, after importing the raster maps, it is not able to save them in the temp folder?
> Importing raster map <rast_5f6b63079109d2.red>...
> 0..3..6..9..12..15..18..21..24..27..30..33..36..39..42..45..48..51..54..57..60..63..66..69..72..75..78..81..84..87..90..93..96..99..100
> ERROR: raster map <rast_5f6b63079109d2red> could not be found.
Help appreciated!
This is the complete log:
> Win 10
> QGIS version: 3.14.16-Pi
> QGIS code revision: df27394552
> Qt version: 5.11.2
> GDAL version: 3.0.4
> GEOS version: 3.8.1-CAPI-1.13.3
> PROJ version: Rel. 6.3.2, May 1st, 2020
> Processing algorithm…
> Algorithm 'r.rgb' starting…
> Input parameters:
> { 'GRASS_RASTER_FORMAT_META' : '', 'GRASS_RASTER_FORMAT_OPT' : '', 'GRASS_REGION_CELLSIZE_PARAMETER' : 0, 'GRASS_REGION_PARAMETER' : None, 'blue' : 'TEMPORARY_OUTPUT', 'green' : 'TEMPORARY_OUTPUT', 'input' : 'C:/Users/Felix/Desktop/RGB Test.tif', 'red' : 'TEMPORARY_OUTPUT' }
>
> g.proj -c proj4="+proj=utm +zone=32 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs"
> r.in.gdal input="C:\Users\Felix\Desktop\RGB Test.tif" output="rast_5f6b63079109d2" --overwrite -o
> g.region n=5666567.6913 s=5666536.8635 e=510504.1778 w=510443.5507 res=0.012541808026475658
> g.region raster=rast_5f6b63079109d2red
> r.out.gdal -t -m input="rast_5f6b63079109d2red" output="C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/a3a562fa8e1a423ca4c8698e923ca01a/red.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
> g.region raster=rast_5f6b63079109d2green
> r.out.gdal -t -m input="rast_5f6b63079109d2green" output="C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/c396aa3b10a84565a52ada1c53cb9751/green.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
> g.region raster=rast_5f6b63079109d2blue
> r.out.gdal -t -m input="rast_5f6b63079109d2blue" output="C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/eb24f00c75eb4813bfe7f10af7238d85/blue.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
> Starting GRASS GIS...
> WARNING: Locking concurrent access to a mapset is not possible on Windows.
> Cleaning up temporary files...
> Executing <C:\Users\Felix\AppData\Local\Temp\processing_hQOjSy\grassdata\grass_batch_job.cmd> ...
> C:\Users\Felix\Documents>chcp 1252 1>NUL
> C:\Users\Felix\Documents>g.proj -c proj4="+proj=utm +zone=32 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs"
> The default region was updated to the new projection, but if you have multiple mapsets, `g.region -d` should be run in each to update the region from the default.
> Projection information updated
> C:\Users\Felix\Documents>r.in.gdal input="C:\Users\Felix\Desktop\RGB Test.tif" output="rast_5f6b63079109d2" --overwrite -o
> Over-riding projection check.
> Importing 4 raster bands...
> Importing raster map <rast_5f6b63079109d2.red>...
> 0..3..6..9..12..15..18..21..24..27..30..33..36..39..42..45..48..51..54..57..60..63..66..69..72..75..78..81..84..87..90..93..96..99..100
> Importing raster map <rast_5f6b63079109d2.green>...
> 0..3..6..9..12..15..18..21..24..27..30..33..36..39..42..45..48..51..54..57..60..63..66..69..72..75..78..81..84..87..90..93..96..99..100
> Importing raster map <rast_5f6b63079109d2.blue>...
> 0..3..6..9..12..15..18..21..24..27..30..33..36..39..42..45..48..51..54..57..60..63..66..69..72..75..78..81..84..87..90..93..96..99..100
> Importing raster map <rast_5f6b63079109d2.alpha>...
> 0..3..6..9..12..15..18..21..24..27..30..33..36..39..42..45..48..51..54..57..60..63..66..69..72..75..78..81..84..87..90..93..96..99..100
> C:\Users\Felix\Documents>g.region n=5666567.6913 s=5666536.8635 e=510504.1778 w=510443.5507 res=0.012541808026475658
> C:\Users\Felix\Documents>g.region raster=rast_5f6b63079109d2red
> ERROR: Raster map <rast_5f6b63079109d2red> could not be found.
> C:\Users\Felix\Documents>r.out.gdal -t -m input="rast_5f6b63079109d2red" output="C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/a3a562fa8e1a423ca4c8698e923ca01a/red.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
> ERROR: Raster map or group <rast_5f6b63079109d2red> not found.
> C:\Users\Felix\Documents>g.region raster=rast_5f6b63079109d2green
> ERROR: Raster map <rast_5f6b63079109d2green> could not be found.
> C:\Users\Felix\Documents>r.out.gdal -t -m input="rast_5f6b63079109d2green" output="C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/c396aa3b10a84565a52ada1c53cb9751/green.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
> ERROR: Raster map or group <rast_5f6b63079109d2green> not found.
> C:\Users\Felix\Documents>g.region raster=rast_5f6b63079109d2blue
> ERROR: Raster map <rast_5f6b63079109d2blue> could not be found.
> C:\Users\Felix\Documents>r.out.gdal -t -m input="rast_5f6b63079109d2blue" output="C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/eb24f00c75eb4813bfe7f10af7238d85/blue.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
> ERROR: Raster map or group <rast_5f6b63079109d2blue> not found.
> C:\Users\Felix\Documents>exit
> Execution of <C:\Users\Felix\AppData\Local\Temp\processing_hQOjSy\grassdata\grass_batch_job.cmd> finished.
> Cleaning up temporary files...
> Press any key . . .
> Execution completed in 2.49 seconds
> Results:
> {'blue': <QgsProcessingOutputLayerDefinition {'sink':TEMPORARY_OUTPUT, 'createOptions': {'fileEncoding': 'System'}}>,
> 'green': <QgsProcessingOutputLayerDefinition {'sink':TEMPORARY_OUTPUT, 'createOptions': {'fileEncoding': 'System'}}>,
> 'red': <QgsProcessingOutputLayerDefinition {'sink':TEMPORARY_OUTPUT, 'createOptions': {'fileEncoding': 'System'}}>}
>
> Loading resulting layers
> The following layers were not correctly generated.
> • C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/a3a562fa8e1a423ca4c8698e923ca01a/red.tif
> • C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/c396aa3b10a84565a52ada1c53cb9751/green.tif
> • C:/Users/Felix/AppData/Local/Temp/processing_hQOjSy/eb24f00c75eb4813bfe7f10af7238d85/blue.tif
> You can check the 'Log Messages Panel' in QGIS main window to find more information about the execution of the algorithm.
|
process
|
r rgb layers are not correctly generated i am trying to use r rgb to split my raster into thee separate rasters r g b bands each time i try r rgb i get the red error massage the following layers were not correctly generated • c users felix appdata local temp processing hqojsy red tif • c users felix appdata local temp processing hqojsy green tif • c users felix appdata local temp processing hqojsy blue tif you can check the log messages panel in qgis main window to find more information about the execution of the algorithm theres no difference whether i choose to save as temp file or not i tried on two computers with different rgb rasters both win i think the problem is that after importing the raster maps it is not able to save them in the temp folder importing raster map error raster map could not be found help appreciated this is the complete log win qgis version pi qgis code revision qt version gdal version geos version capi proj version rel may processing algorithm… algorithm r rgb starting… input parameters grass raster format meta grass raster format opt grass region cellsize parameter grass region parameter none blue temporary output green temporary output input c users felix desktop rgb test tif red temporary output g proj c proj utm zone ellps units m no defs r in gdal input c users felix desktop rgb test tif output rast overwrite o g region n s e w res g region raster rast r out gdal t m input rast output c users felix appdata local temp processing hqojsy red tif format gtiff createopt tfw yes compress lzw overwrite g region raster rast r out gdal t m input rast output c users felix appdata local temp processing hqojsy green tif format gtiff createopt tfw yes compress lzw overwrite g region raster rast r out gdal t m input rast output c users felix appdata local temp processing hqojsy blue tif format gtiff createopt tfw yes compress lzw overwrite starting grass gis warnung sperren gleichzeitiger zugriffe auf ein mapset ist unter windows nicht möglich 
cleaning up temporary files executing c users felix documents chcp nul c users felix documents g proj c proj utm zone ellps units m no defs die standard region wurde auf die neue projektion aktualisiert wenn sie aber mehrere mapsets haben sollten sie g region d in jedem ausführen um die einstellungen von der standardregion zu übernehmen projektionsinformationen aktualisiert c users felix documents r in gdal input c users felix desktop rgb test tif output rast overwrite o übersteuere die überprüfung der projektion importing raster bands importing raster map importing raster map importing raster map importing raster map c users felix documents g region n s e w res c users felix documents g region raster rast fehler rasterkarte konnte nicht gefunden werden c users felix documents r out gdal t m input rast output c users felix appdata local temp processing hqojsy red tif format gtiff createopt tfw yes compress lzw overwrite fehler rasterkarte oder gruppe nicht gefunden c users felix documents g region raster rast fehler rasterkarte konnte nicht gefunden werden c users felix documents r out gdal t m input rast output c users felix appdata local temp processing hqojsy green tif format gtiff createopt tfw yes compress lzw overwrite fehler rasterkarte oder gruppe nicht gefunden c users felix documents g region raster rast fehler rasterkarte konnte nicht gefunden werden c users felix documents r out gdal t m input rast output c users felix appdata local temp processing hqojsy blue tif format gtiff createopt tfw yes compress lzw overwrite fehler rasterkarte oder gruppe nicht gefunden c users felix documents exit execution of finished cleaning up temporary files drücken sie eine beliebige taste execution completed in seconds results blue green red loading resulting layers the following layers were not correctly generated • c users felix appdata local temp processing hqojsy red tif • c users felix appdata local temp processing hqojsy green tif • c users felix appdata local 
temp processing hqojsy blue tif you can check the log messages panel in qgis main window to find more information about the execution of the algorithm
| 1
|
401,681
| 11,795,865,512
|
IssuesEvent
|
2020-03-18 09:45:43
|
pravega/pravega
|
https://api.github.com/repos/pravega/pravega
|
opened
|
Failed transactions metric incorrectly reported
|
area/controller area/metrics area/transaction kind/bug priority/P2 version/0.7.1 version/0.8.0
|
**Problem description**
When the Controller is committing transactions against a segment that is scaling, the commit attempt may fail with an `OperationNotAllowedException`, and the Controller will therefore retry the commit. This is an expected situation: it does not mean that the transaction commit has permanently failed, it will just be retried. However, we are reporting this situation as a failure in the failed-transaction metrics. We need to report that a transaction commit has failed only in the case that it has really failed, not in this expected situation of concurrent segment scaling and transaction commit.
**Problem location**
`CommitRequestHandler`
**Suggestions for an improvement**
Only report failed transactions when they really fail.
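The intended behavior can be sketched like this (names are illustrative assumptions, not Pravega's actual `CommitRequestHandler`): transient scaling-related rejections are retried silently, and the failure metric moves only on a genuine failure.

```typescript
// Stand-in for the transient rejection raised while the segment is scaling.
class OperationNotAllowedError extends Error {}

function commitWithRetry(
  attempt: () => void,
  maxRetries: number,
  metrics: { failedTransactions: number }
): boolean {
  for (let i = 0; i < maxRetries; i++) {
    try {
      attempt();
      return true;                    // committed; no failure to report
    } catch (e) {
      if (!(e instanceof OperationNotAllowedError)) {
        metrics.failedTransactions++; // genuine failure: report it
        throw e;
      }
      // Expected while the segment scales: retry without touching the metric.
    }
  }
  metrics.failedTransactions++;       // retries exhausted: it really failed
  return false;
}

// A commit rejected twice while the segment scales, then succeeding:
const metrics = { failedTransactions: 0 };
let attempts = 0;
const ok = commitWithRetry(() => {
  attempts++;
  if (attempts <= 2) throw new OperationNotAllowedError();
}, 5, metrics);
console.log(ok, metrics.failedTransactions); // true 0
```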
|
1.0
|
Failed transactions metric incorrectly reported - **Problem description**
When the Controller is committing transactions against a segment that is scaling, the commit attempt may fail with an `OperationNotAllowedException`, and the Controller will therefore retry the commit. This is an expected situation: it does not mean that the transaction commit has permanently failed, it will just be retried. However, we are reporting this situation as a failure in the failed-transaction metrics. We need to report that a transaction commit has failed only in the case that it has really failed, not in this expected situation of concurrent segment scaling and transaction commit.
**Problem location**
`CommitRequestHandler`
**Suggestions for an improvement**
Only report failed transactions when they really fail.
|
non_process
|
failed transactions metric incorrectly reported problem description when the controller is committing transactions against a segment that is scaling the commit transaction attempt may fail with a operationnotallowedexception and therefore the controller will retry the commit again this is an expected situation and it does not mean that the transaction commit has permanently failed it will just be retried however we are reporting this situation as a failure in the failed transaction metrics we need to report that a transaction commit has failed only in the case that it has really failed but not in this expected situation of concurrent segment scaling and transaction commit problem location commitrequesthandler suggestions for an improvement only report failed transactions when they really fail
| 0
|
24,214
| 17,014,179,061
|
IssuesEvent
|
2021-07-02 09:36:18
|
kaitai-io/kaitai_struct
|
https://api.github.com/repos/kaitai-io/kaitai_struct
|
closed
|
Add link to xref.html in the format gallery
|
infrastructure
|
While digging in the generating code for the [format gallery](https://formats.kaitai.io/), I noticed that every time it generates page https://formats.kaitai.io/xref.html, which is quite a neat summary of all formats included with their licenses and cross-references. It's a pity that there isn't any link leading to it, so nobody can actually get there. So I think it makes sense to add one. Probably on the [format gallery homepage](https://formats.kaitai.io/).
|
1.0
|
Add link to xref.html in the format gallery - While digging in the generating code for the [format gallery](https://formats.kaitai.io/), I noticed that every time it generates page https://formats.kaitai.io/xref.html, which is quite a neat summary of all formats included with their licenses and cross-references. It's a pity that there isn't any link leading to it, so nobody can actually get there. So I think it makes sense to add one. Probably on the [format gallery homepage](https://formats.kaitai.io/).
|
non_process
|
add link to xref html in the format gallery while digging in the generating code for the i noticed that every time it generates page which is quite a neat summary of all formats included with their licenses and cross references it s a pity that there isn t any link leading to it so nobody can actually get there so i think it makes sense to add one probably on the
| 0
|
6,687
| 9,808,684,309
|
IssuesEvent
|
2019-06-12 16:07:44
|
EthVM/EthVM
|
https://api.github.com/repos/EthVM/EthVM
|
closed
|
Value too long on contract
|
bug project:processing
|
* **I'm submitting a ...**
- [ ] feature request
- [x] bug report
* **Bug Report**
Current trace:
```
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:560)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\nCaused by: org.apache.kafka.connect.errors.ConnectException: java.sql.SQLException: java.sql.BatchUpdateException: Batch entry 103 INSERT INTO \"erc20_metadata\" (\"address\",\"timestamp\",\"name\",\"symbol\",\"decimals\",\"total_supply\") VALUES ('0x493699e547bfff3668bd195ea528b33cdee522a1','2017-12-30 17:15:29+00'::timestamp,'BitProCoinV4','BPCV4 
iVBORw0KGgoAAAANSUhEUgAAADIAAAAyCAYAAAAeP4ixAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAfdSURBVGhD7VlrbBRVFF4fqFGjAm13Zne73e7OFm2iRjFCxGjUGP3h45ePxBhNIMZoYmKMUHxQMZHu9sFDC1pSTWi7j64PxAeJCCm0CBGJMbV0t6WlpVaoUF597G6X2vE7d89dh9r4A7oDP/olJ5n73XvP+c6de+/MnbHMYAYzuLTQWZ7r0XXLZVzMgLhYpXozFy9txMrVRVG/7QkuZtBTWnBNrFz5oHPdnBuYMg8iuN/2UFtp8VVMCeiR4qsmGrVcLmbQWuacHfWruybfjf2l6rXgt7b78hYyZT7aqhxzMMqfT54SybC2lC8zwIh/BivlooAesVwBbku0XF3PVAYjkQJlqgHJGjp8yi0xv3ok5lPvZUokMhbWnuKiaINRH++osC5gSoASQN+RrgprHlMC6PtcqlF7mYvTBz1QlJMIex7Qa+bPYuoc0EhD1LG2VQ6NyhBx/1jIe1AvtVxOZSSxKepXksZpGKuwPo27qYMvY8qiN91/ZTLs3TAW9kaYmn6MhT13JkPa7mTI+/ZE3W3XMS3Qs6bgJhpZCN5H02W0XnNAjJ4MeZ6kBYxERyE4ys0tHVUOe8yvnEDyf7f5VCdx+jfqtejzLRI5QwMnGmYLaYFaDwL2JkJFDzEtgES+oBGGwFcoGSR9Fm23R8utL/LI/8hN6Q42pDl1K5WpPXxuoeRh74tG2UYq6F1AIoXQkLaYaUpkSToR9QhNIQjqw92b6KxQt7Do76hdh896N64nmHuJOEzDNZQE2o+PBufZiDMFCLpWBpaLmhaySEQkozyPRFupTd+H+U3GRKLlymbZjqZVKqwtxHT6m9rCfqA2poFGDcET6WS04URjUSG24BxDIj/IRE5vKmyTiZBwXJ8VZWwO5At34xtOQk8Gva+JAGYCgSNSAGzb/hrLrH8TUVPxgOeAEIfphTUxQIkgwdcNbZqSjVox1Us/qYhnPrs3Dwj8rBQgRGCKZETChjYVdsq6rip1j0iEHp5cj8X/lVwbZEholLZfdm8e9M0FNxlHE9eNUiTZUL2bdjdR11ed34S10Yw7E8u08SsB1PVl+oe9bezafGBED2eEhLxxel5IocP1nj9l3cBG10/gfkH9mKzvrLR9JevTponN4KKAghvFYKv9TQqNN3iOSZ4WPKbSKVlHdmidY7uxL3xVs1vzgeBVRjGHP7LvzCQS9AxJfrje3W1MguyP6vydxr6wlezWfGBqvWcUM1Dj3CWFGnnsYAPGJMiOb3TtMbbBdl3Cbs0HFugbRjHHa117hVC/OmzkE0FvZu1IO/lZ4S/ntil6ld2aj5H6oiWjAe2UtOO1BT9HfdZTWA9dRp4s6rcOijq2wVrnPmP9SENR5nXHVLQsy723ZVnO2O6SHH06jHw1L89dxO7NAwXdXTI3OZWo87JlcxMXJZFUcN5diaB2xDg9LsQSAQ3PHc+d7N48IImHDW+tF2zkCwv+QXZvHpqXzynGlGiH9U2Tte9dOucWdm8eWpbNfRzBp57v52nNJTmPsXvzQN+nDpRZT7StUvAgdOG54NYPVtmbqUzWvdqun/i0UFyT0TVxsnxwtb2F+gzUFKS5MuvgVF8gsw481Bbxw20cL4xinvesdXwtH3gDnxTgIadlHoB0ffQTZ6Z8aJ19s1gb6NtRoY4TF/XZ7mH35gEPvRAF71+fn5QLtrNS3SqFDta6/pPIYG1hpoy238t+/dWORJpXGti9OaAviwgsjqxDdZ70zhPSYjg8dUmhI/We/yRCnCxDNB28uqjvUJ07zeFkGfXb5nGY7AOCxch3r7H/Kkc1GXTXgBdfRzBV4sRNToS4jgpFjD590xpt8Hwq+3evtskjwLccJruQ36nokHRmk3uXFNK/wfEGC9F719n7iZsqkR7USa5v
vXOp7A9fTfLg1e5TXuBw2UG0LO9WBBNvsh2V6kosVLk+erFm1kiBxza6xDE3EfSkJJcIeM4SR3WSi/rU1ZiSImnyhTv5vuD96hmKxWGnF50+uwPn7j/SgZQfx8LuzMcHPJXfQfADLGI8jlcO4nEO+V2Kjgfc4qsKptNJaiN4v/o7uJXSD9o8i7odwg9iUUwOPz2g3wcQH2VRrd2+2TcisPjESWf1vz523c51MNs2KQyig5LHdUjyGPntku/foN2BQ9WYqAtpm8k3+Faqo5gUm2VcOMRvMb8SgPMeGqV4xO1EcDFVYGsxsunPpbDeasebUjBGuETy8YbC5ZLv/dC5VPKxCnUxjswfEU8+yffBSls+fPbSdjztD0n6syRvNQJXiMBh75nhOneenA4IfjQeKnpGCk4EXI9IwSN1BY9KPt5Q9Aw4Pv4q28gHkpBnfPGbgWJRTLrOCvh7llgDCL4iWmZ10VbKgkuIS9d540c3WPOY1+maOO73LpJ/i3j5awG8WCsYnJN6xH0jh8se9IBzNu7Il7Bu+k+CKbeCBZ2m/yTJsBYmQajfwfNdJJJeV9qOtFgtLP6plKtD6b7K23qk+HqskUPkm2JwuOyDPmQ3lVquxNQ4zGJ8xGPUaScSO9nkROhOiLqQ1kptwZen65XD5MvUXwpG0Bd43Invsbucal9lm0u/5jDaKRKbCnvvm5wIcXxHUtSW+58mH3TNbi8eOj9QxN/X4WChFaO9G0KPT2zVrm4rzb0eG8B+MromDnfsGNq0UFtj30sS/7dlXpQzxwxmcL6wWP4BQzYeag6H8f4AAAAASUVORK5CYII=',4,'1000000000000') ON CONFLICT (\"address\") DO UPDATE SET \"timestamp\"=EXCLUDED.\"timestamp\",\"name\"=EXCLUDED.\"name\",\"symbol\"=EXCLUDED.\"symbol\",\"decimals\"=EXCLUDED.\"decimals\",\"total_supply\"=EXCLUDED.\"total_supply\" was aborted: ERROR: value too long for type character varying(512) Call getNextException to see other errors in the batch.\norg.postgresql.util.PSQLException: ERROR: value too long for type character varying(512)\norg.postgresql.util.PSQLException: ERROR: value too long for type character varying(512)\n\n\tat io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:87)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:538)\n\t... 
10 more\nCaused by: java.sql.SQLException: java.sql.BatchUpdateException: Batch entry 103 INSERT INTO \"erc20_metadata\" (\"address\",\"timestamp\",\"name\",\"symbol\",\"decimals\",\"total_supply\") VALUES ('0x493699e547bfff3668bd195ea528b33cdee522a1','2017-12-30 17:15:29+00'::timestamp,'BitProCoinV4','BPCV4 iVBORw0KGgoAAAANSUhEUgAAADIAAAAyCAYAAAAeP4ixAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAfdSURBVGhD7VlrbBRVFF4fqFGjAm13Zne73e7OFm2iRjFCxGjUGP3h45ePxBhNIMZoYmKMUHxQMZHu9sFDC1pSTWi7j64PxAeJCCm0CBGJMbV0t6WlpVaoUF597G6X2vE7d89dh9r4A7oDP/olJ5n73XvP+c6de+/MnbHMYAYzuLTQWZ7r0XXLZVzMgLhYpXozFy9txMrVRVG/7QkuZtBTWnBNrFz5oHPdnBuYMg8iuN/2UFtp8VVMCeiR4qsmGrVcLmbQWuacHfWruybfjf2l6rXgt7b78hYyZT7aqhxzMMqfT54SybC2lC8zwIh/BivlooAesVwBbku0XF3PVAYjkQJlqgHJGjp8yi0xv3ok5lPvZUokMhbWnuKiaINRH++osC5gSoASQN+RrgprHlMC6PtcqlF7mYvTBz1QlJMIex7Qa+bPYuoc0EhD1LG2VQ6NyhBx/1jIe1AvtVxOZSSxKepXksZpGKuwPo27qYMvY8qiN91/ZTLs3TAW9kaYmn6MhT13JkPa7mTI+/ZE3W3XMS3Qs6bgJhpZCN5H02W0XnNAjJ4MeZ6kBYxERyE4ys0tHVUOe8yvnEDyf7f5VCdx+jfqtejzLRI5QwMnGmYLaYFaDwL2JkJFDzEtgES+oBGGwFcoGSR9Fm23R8utL/LI/8hN6Q42pDl1K5WpPXxuoeRh74tG2UYq6F1AIoXQkLaYaUpkSToR9QhNIQjqw92b6KxQt7Do76hdh896N64nmHuJOEzDNZQE2o+PBufZiDMFCLpWBpaLmhaySEQkozyPRFupTd+H+U3GRKLlymbZjqZVKqwtxHT6m9rCfqA2poFGDcET6WS04URjUSG24BxDIj/IRE5vKmyTiZBwXJ8VZWwO5At34xtOQk8Gva+JAGYCgSNSAGzb/hrLrH8TUVPxgOeAEIfphTUxQIkgwdcNbZqSjVox1Us/qYhnPrs3Dwj8rBQgRGCKZETChjYVdsq6rip1j0iEHp5cj8X/lVwbZEholLZfdm8e9M0FNxlHE9eNUiTZUL2bdjdR11ed34S10Yw7E8u08SsB1PVl+oe9bezafGBED2eEhLxxel5IocP1nj9l3cBG10/gfkH9mKzvrLR9JevTponN4KKAghvFYKv9TQqNN3iOSZ4WPKbSKVlHdmidY7uxL3xVs1vzgeBVRjGHP7LvzCQS9AxJfrje3W1MguyP6vydxr6wlezWfGBqvWcUM1Dj3CWFGnnsYAPGJMiOb3TtMbbBdl3Cbs0HFugbRjHHa117hVC/OmzkE0FvZu1IO/lZ4S/ntil6ld2aj5H6oiWjAe2UtOO1BT9HfdZTWA9dRp4s6rcOijq2wVrnPmP9SENR5nXHVLQsy723ZVnO2O6SHH06jHw1L89dxO7NAwXdXTI3OZWo87JlcxMXJZFUcN5diaB2xDg9LsQSAQ3PHc+d7N48IImHDW+tF2zkCwv+QXZvHpqXzynGlGiH9U2Tte9dOucWdm8eWpbNfRzBp57v52nNJTmPsXvzQN+nDpRZT7StUvAgdOG54NYPVtmbqUzWvdqun/i0UFyT0TVxsnxwtb2F+gzUFK
S5MuvgVF8gsw481Bbxw20cL4xinvesdXwtH3gDnxTgIadlHoB0ffQTZ6Z8aJ19s1gb6NtRoY4TF/XZ7mH35gEPvRAF71+fn5QLtrNS3SqFDta6/pPIYG1hpoy238t+/dWORJpXGti9OaAviwgsjqxDdZ70zhPSYjg8dUmhI/We/yRCnCxDNB28uqjvUJ07zeFkGfXb5nGY7AOCxch3r7H/Kkc1GXTXgBdfRzBV4sRNToS4jgpFjD590xpt8Hwq+3evtskjwLccJruQ36nokHRmk3uXFNK/wfEGC9F719n7iZsqkR7USa5vvXOp7A9fTfLg1e5TXuBw2UG0LO9WBBNvsh2V6kosVLk+erFm1kiBxza6xDE3EfSkJJcIeM4SR3WSi/rU1ZiSImnyhTv5vuD96hmKxWGnF50+uwPn7j/SgZQfx8LuzMcHPJXfQfADLGI8jlcO4nEO+V2Kjgfc4qsKptNJaiN4v/o7uJXSD9o8i7odwg9iUUwOPz2g3wcQH2VRrd2+2TcisPjESWf1vz523c51MNs2KQyig5LHdUjyGPntku/foN2BQ9WYqAtpm8k3+Faqo5gUm2VcOMRvMb8SgPMeGqV4xO1EcDFVYGsxsunPpbDeasebUjBGuETy8YbC5ZLv/dC5VPKxCnUxjswfEU8+yffBSls+fPbSdjztD0n6syRvNQJXiMBh75nhOneenA4IfjQeKnpGCk4EXI9IwSN1BY9KPt5Q9Aw4Pv4q28gHkpBnfPGbgWJRTLrOCvh7llgDCL4iWmZ10VbKgkuIS9d540c3WPOY1+maOO73LpJ/i3j5awG8WCsYnJN6xH0jh8se9IBzNu7Il7Bu+k+CKbeCBZ2m/yTJsBYmQajfwfNdJJJeV9qOtFgtLP6plKtD6b7K23qk+HqskUPkm2JwuOyDPmQ3lVquxNQ4zGJ8xGPUaScSO9nkROhOiLqQ1kptwZen65XD5MvUXwpG0Bd43Invsbucal9lm0u/5jDaKRKbCnvvm5wIcXxHUtSW+58mH3TNbi8eOj9QxN/X4WChFaO9G0KPT2zVrm4rzb0eG8B+MromDnfsGNq0UFtj30sS/7dlXpQzxwxmcL6wWP4BQzYeag6H8f4AAAAASUVORK5CYII=',4,'1000000000000') ON CONFLICT (\"address\") DO UPDATE SET \"timestamp\"=EXCLUDED.\"timestamp\",\"name\"=EXCLUDED.\"name\",\"symbol\"=EXCLUDED.\"symbol\",\"decimals\"=EXCLUDED.\"decimals\",\"total_supply\"=EXCLUDED.\"total_supply\" was aborted: ERROR: value too long for type character varying(512) Call getNextException to see other errors in the batch.\norg.postgresql.util.PSQLException: ERROR: value too long for type character varying(512)\norg.postgresql.util.PSQLException: ERROR: value too long for type character varying(512)\n\n\t... 12 more\n
```
|
1.0
|
Value too long on contract - * **I'm submitting a ...**
- [ ] feature request
- [x] bug report
* **Bug Report**
Current trace:
```
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:560)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\nCaused by: org.apache.kafka.connect.errors.ConnectException: java.sql.SQLException: java.sql.BatchUpdateException: Batch entry 103 INSERT INTO \"erc20_metadata\" (\"address\",\"timestamp\",\"name\",\"symbol\",\"decimals\",\"total_supply\") VALUES ('0x493699e547bfff3668bd195ea528b33cdee522a1','2017-12-30 17:15:29+00'::timestamp,'BitProCoinV4','BPCV4 
iVBORw0KGgoAAAANSUhEUgAAADIAAAAyCAYAAAAeP4ixAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAfdSURBVGhD7VlrbBRVFF4fqFGjAm13Zne73e7OFm2iRjFCxGjUGP3h45ePxBhNIMZoYmKMUHxQMZHu9sFDC1pSTWi7j64PxAeJCCm0CBGJMbV0t6WlpVaoUF597G6X2vE7d89dh9r4A7oDP/olJ5n73XvP+c6de+/MnbHMYAYzuLTQWZ7r0XXLZVzMgLhYpXozFy9txMrVRVG/7QkuZtBTWnBNrFz5oHPdnBuYMg8iuN/2UFtp8VVMCeiR4qsmGrVcLmbQWuacHfWruybfjf2l6rXgt7b78hYyZT7aqhxzMMqfT54SybC2lC8zwIh/BivlooAesVwBbku0XF3PVAYjkQJlqgHJGjp8yi0xv3ok5lPvZUokMhbWnuKiaINRH++osC5gSoASQN+RrgprHlMC6PtcqlF7mYvTBz1QlJMIex7Qa+bPYuoc0EhD1LG2VQ6NyhBx/1jIe1AvtVxOZSSxKepXksZpGKuwPo27qYMvY8qiN91/ZTLs3TAW9kaYmn6MhT13JkPa7mTI+/ZE3W3XMS3Qs6bgJhpZCN5H02W0XnNAjJ4MeZ6kBYxERyE4ys0tHVUOe8yvnEDyf7f5VCdx+jfqtejzLRI5QwMnGmYLaYFaDwL2JkJFDzEtgES+oBGGwFcoGSR9Fm23R8utL/LI/8hN6Q42pDl1K5WpPXxuoeRh74tG2UYq6F1AIoXQkLaYaUpkSToR9QhNIQjqw92b6KxQt7Do76hdh896N64nmHuJOEzDNZQE2o+PBufZiDMFCLpWBpaLmhaySEQkozyPRFupTd+H+U3GRKLlymbZjqZVKqwtxHT6m9rCfqA2poFGDcET6WS04URjUSG24BxDIj/IRE5vKmyTiZBwXJ8VZWwO5At34xtOQk8Gva+JAGYCgSNSAGzb/hrLrH8TUVPxgOeAEIfphTUxQIkgwdcNbZqSjVox1Us/qYhnPrs3Dwj8rBQgRGCKZETChjYVdsq6rip1j0iEHp5cj8X/lVwbZEholLZfdm8e9M0FNxlHE9eNUiTZUL2bdjdR11ed34S10Yw7E8u08SsB1PVl+oe9bezafGBED2eEhLxxel5IocP1nj9l3cBG10/gfkH9mKzvrLR9JevTponN4KKAghvFYKv9TQqNN3iOSZ4WPKbSKVlHdmidY7uxL3xVs1vzgeBVRjGHP7LvzCQS9AxJfrje3W1MguyP6vydxr6wlezWfGBqvWcUM1Dj3CWFGnnsYAPGJMiOb3TtMbbBdl3Cbs0HFugbRjHHa117hVC/OmzkE0FvZu1IO/lZ4S/ntil6ld2aj5H6oiWjAe2UtOO1BT9HfdZTWA9dRp4s6rcOijq2wVrnPmP9SENR5nXHVLQsy723ZVnO2O6SHH06jHw1L89dxO7NAwXdXTI3OZWo87JlcxMXJZFUcN5diaB2xDg9LsQSAQ3PHc+d7N48IImHDW+tF2zkCwv+QXZvHpqXzynGlGiH9U2Tte9dOucWdm8eWpbNfRzBp57v52nNJTmPsXvzQN+nDpRZT7StUvAgdOG54NYPVtmbqUzWvdqun/i0UFyT0TVxsnxwtb2F+gzUFKS5MuvgVF8gsw481Bbxw20cL4xinvesdXwtH3gDnxTgIadlHoB0ffQTZ6Z8aJ19s1gb6NtRoY4TF/XZ7mH35gEPvRAF71+fn5QLtrNS3SqFDta6/pPIYG1hpoy238t+/dWORJpXGti9OaAviwgsjqxDdZ70zhPSYjg8dUmhI/We/yRCnCxDNB28uqjvUJ07zeFkGfXb5nGY7AOCxch3r7H/Kkc1GXTXgBdfRzBV4sRNToS4jgpFjD590xpt8Hwq+3evtskjwLccJruQ36nokHRmk3uXFNK/wfEGC9F719n7iZsqkR7USa5v
vXOp7A9fTfLg1e5TXuBw2UG0LO9WBBNvsh2V6kosVLk+erFm1kiBxza6xDE3EfSkJJcIeM4SR3WSi/rU1ZiSImnyhTv5vuD96hmKxWGnF50+uwPn7j/SgZQfx8LuzMcHPJXfQfADLGI8jlcO4nEO+V2Kjgfc4qsKptNJaiN4v/o7uJXSD9o8i7odwg9iUUwOPz2g3wcQH2VRrd2+2TcisPjESWf1vz523c51MNs2KQyig5LHdUjyGPntku/foN2BQ9WYqAtpm8k3+Faqo5gUm2VcOMRvMb8SgPMeGqV4xO1EcDFVYGsxsunPpbDeasebUjBGuETy8YbC5ZLv/dC5VPKxCnUxjswfEU8+yffBSls+fPbSdjztD0n6syRvNQJXiMBh75nhOneenA4IfjQeKnpGCk4EXI9IwSN1BY9KPt5Q9Aw4Pv4q28gHkpBnfPGbgWJRTLrOCvh7llgDCL4iWmZ10VbKgkuIS9d540c3WPOY1+maOO73LpJ/i3j5awG8WCsYnJN6xH0jh8se9IBzNu7Il7Bu+k+CKbeCBZ2m/yTJsBYmQajfwfNdJJJeV9qOtFgtLP6plKtD6b7K23qk+HqskUPkm2JwuOyDPmQ3lVquxNQ4zGJ8xGPUaScSO9nkROhOiLqQ1kptwZen65XD5MvUXwpG0Bd43Invsbucal9lm0u/5jDaKRKbCnvvm5wIcXxHUtSW+58mH3TNbi8eOj9QxN/X4WChFaO9G0KPT2zVrm4rzb0eG8B+MromDnfsGNq0UFtj30sS/7dlXpQzxwxmcL6wWP4BQzYeag6H8f4AAAAASUVORK5CYII=',4,'1000000000000') ON CONFLICT (\"address\") DO UPDATE SET \"timestamp\"=EXCLUDED.\"timestamp\",\"name\"=EXCLUDED.\"name\",\"symbol\"=EXCLUDED.\"symbol\",\"decimals\"=EXCLUDED.\"decimals\",\"total_supply\"=EXCLUDED.\"total_supply\" was aborted: ERROR: value too long for type character varying(512) Call getNextException to see other errors in the batch.\norg.postgresql.util.PSQLException: ERROR: value too long for type character varying(512)\norg.postgresql.util.PSQLException: ERROR: value too long for type character varying(512)\n\n\tat io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:87)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:538)\n\t... 
10 more\nCaused by: java.sql.SQLException: java.sql.BatchUpdateException: Batch entry 103 INSERT INTO \"erc20_metadata\" (\"address\",\"timestamp\",\"name\",\"symbol\",\"decimals\",\"total_supply\") VALUES ('0x493699e547bfff3668bd195ea528b33cdee522a1','2017-12-30 17:15:29+00'::timestamp,'BitProCoinV4','BPCV4 iVBORw0KGgoAAAANSUhEUgAAADIAAAAyCAYAAAAeP4ixAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAfdSURBVGhD7VlrbBRVFF4fqFGjAm13Zne73e7OFm2iRjFCxGjUGP3h45ePxBhNIMZoYmKMUHxQMZHu9sFDC1pSTWi7j64PxAeJCCm0CBGJMbV0t6WlpVaoUF597G6X2vE7d89dh9r4A7oDP/olJ5n73XvP+c6de+/MnbHMYAYzuLTQWZ7r0XXLZVzMgLhYpXozFy9txMrVRVG/7QkuZtBTWnBNrFz5oHPdnBuYMg8iuN/2UFtp8VVMCeiR4qsmGrVcLmbQWuacHfWruybfjf2l6rXgt7b78hYyZT7aqhxzMMqfT54SybC2lC8zwIh/BivlooAesVwBbku0XF3PVAYjkQJlqgHJGjp8yi0xv3ok5lPvZUokMhbWnuKiaINRH++osC5gSoASQN+RrgprHlMC6PtcqlF7mYvTBz1QlJMIex7Qa+bPYuoc0EhD1LG2VQ6NyhBx/1jIe1AvtVxOZSSxKepXksZpGKuwPo27qYMvY8qiN91/ZTLs3TAW9kaYmn6MhT13JkPa7mTI+/ZE3W3XMS3Qs6bgJhpZCN5H02W0XnNAjJ4MeZ6kBYxERyE4ys0tHVUOe8yvnEDyf7f5VCdx+jfqtejzLRI5QwMnGmYLaYFaDwL2JkJFDzEtgES+oBGGwFcoGSR9Fm23R8utL/LI/8hN6Q42pDl1K5WpPXxuoeRh74tG2UYq6F1AIoXQkLaYaUpkSToR9QhNIQjqw92b6KxQt7Do76hdh896N64nmHuJOEzDNZQE2o+PBufZiDMFCLpWBpaLmhaySEQkozyPRFupTd+H+U3GRKLlymbZjqZVKqwtxHT6m9rCfqA2poFGDcET6WS04URjUSG24BxDIj/IRE5vKmyTiZBwXJ8VZWwO5At34xtOQk8Gva+JAGYCgSNSAGzb/hrLrH8TUVPxgOeAEIfphTUxQIkgwdcNbZqSjVox1Us/qYhnPrs3Dwj8rBQgRGCKZETChjYVdsq6rip1j0iEHp5cj8X/lVwbZEholLZfdm8e9M0FNxlHE9eNUiTZUL2bdjdR11ed34S10Yw7E8u08SsB1PVl+oe9bezafGBED2eEhLxxel5IocP1nj9l3cBG10/gfkH9mKzvrLR9JevTponN4KKAghvFYKv9TQqNN3iOSZ4WPKbSKVlHdmidY7uxL3xVs1vzgeBVRjGHP7LvzCQS9AxJfrje3W1MguyP6vydxr6wlezWfGBqvWcUM1Dj3CWFGnnsYAPGJMiOb3TtMbbBdl3Cbs0HFugbRjHHa117hVC/OmzkE0FvZu1IO/lZ4S/ntil6ld2aj5H6oiWjAe2UtOO1BT9HfdZTWA9dRp4s6rcOijq2wVrnPmP9SENR5nXHVLQsy723ZVnO2O6SHH06jHw1L89dxO7NAwXdXTI3OZWo87JlcxMXJZFUcN5diaB2xDg9LsQSAQ3PHc+d7N48IImHDW+tF2zkCwv+QXZvHpqXzynGlGiH9U2Tte9dOucWdm8eWpbNfRzBp57v52nNJTmPsXvzQN+nDpRZT7StUvAgdOG54NYPVtmbqUzWvdqun/i0UFyT0TVxsnxwtb2F+gzUFK
S5MuvgVF8gsw481Bbxw20cL4xinvesdXwtH3gDnxTgIadlHoB0ffQTZ6Z8aJ19s1gb6NtRoY4TF/XZ7mH35gEPvRAF71+fn5QLtrNS3SqFDta6/pPIYG1hpoy238t+/dWORJpXGti9OaAviwgsjqxDdZ70zhPSYjg8dUmhI/We/yRCnCxDNB28uqjvUJ07zeFkGfXb5nGY7AOCxch3r7H/Kkc1GXTXgBdfRzBV4sRNToS4jgpFjD590xpt8Hwq+3evtskjwLccJruQ36nokHRmk3uXFNK/wfEGC9F719n7iZsqkR7USa5vvXOp7A9fTfLg1e5TXuBw2UG0LO9WBBNvsh2V6kosVLk+erFm1kiBxza6xDE3EfSkJJcIeM4SR3WSi/rU1ZiSImnyhTv5vuD96hmKxWGnF50+uwPn7j/SgZQfx8LuzMcHPJXfQfADLGI8jlcO4nEO+V2Kjgfc4qsKptNJaiN4v/o7uJXSD9o8i7odwg9iUUwOPz2g3wcQH2VRrd2+2TcisPjESWf1vz523c51MNs2KQyig5LHdUjyGPntku/foN2BQ9WYqAtpm8k3+Faqo5gUm2VcOMRvMb8SgPMeGqV4xO1EcDFVYGsxsunPpbDeasebUjBGuETy8YbC5ZLv/dC5VPKxCnUxjswfEU8+yffBSls+fPbSdjztD0n6syRvNQJXiMBh75nhOneenA4IfjQeKnpGCk4EXI9IwSN1BY9KPt5Q9Aw4Pv4q28gHkpBnfPGbgWJRTLrOCvh7llgDCL4iWmZ10VbKgkuIS9d540c3WPOY1+maOO73LpJ/i3j5awG8WCsYnJN6xH0jh8se9IBzNu7Il7Bu+k+CKbeCBZ2m/yTJsBYmQajfwfNdJJJeV9qOtFgtLP6plKtD6b7K23qk+HqskUPkm2JwuOyDPmQ3lVquxNQ4zGJ8xGPUaScSO9nkROhOiLqQ1kptwZen65XD5MvUXwpG0Bd43Invsbucal9lm0u/5jDaKRKbCnvvm5wIcXxHUtSW+58mH3TNbi8eOj9QxN/X4WChFaO9G0KPT2zVrm4rzb0eG8B+MromDnfsGNq0UFtj30sS/7dlXpQzxwxmcL6wWP4BQzYeag6H8f4AAAAASUVORK5CYII=',4,'1000000000000') ON CONFLICT (\"address\") DO UPDATE SET \"timestamp\"=EXCLUDED.\"timestamp\",\"name\"=EXCLUDED.\"name\",\"symbol\"=EXCLUDED.\"symbol\",\"decimals\"=EXCLUDED.\"decimals\",\"total_supply\"=EXCLUDED.\"total_supply\" was aborted: ERROR: value too long for type character varying(512) Call getNextException to see other errors in the batch.\norg.postgresql.util.PSQLException: ERROR: value too long for type character varying(512)\norg.postgresql.util.PSQLException: ERROR: value too long for type character varying(512)\n\n\t... 12 more\n
```
|
process
|
value too long on contract i m submitting a feature request bug report bug report current trace org apache kafka connect errors connectexception exiting workersinktask due to unrecoverable exception n tat org apache kafka connect runtime workersinktask delivermessages workersinktask java n tat org apache kafka connect runtime workersinktask poll workersinktask java n tat org apache kafka connect runtime workersinktask iteration workersinktask java n tat org apache kafka connect runtime workersinktask execute workersinktask java n tat org apache kafka connect runtime workertask dorun workertask java n tat org apache kafka connect runtime workertask run workertask java n tat java util concurrent executors runnableadapter call executors java n tat java util concurrent futuretask run futuretask java n tat java util concurrent threadpoolexecutor runworker threadpoolexecutor java n tat java util concurrent threadpoolexecutor worker run threadpoolexecutor java n tat java lang thread run thread java ncaused by org apache kafka connect errors connectexception java sql sqlexception java sql batchupdateexception batch entry insert into metadata address timestamp name symbol decimals total supply values timestamp li pbufzidmfclpwbpalmhayseqkozyprfuptd h jagycgsnsagzb we yffbsls k on conflict address do update set timestamp excluded timestamp name excluded name symbol excluded symbol decimals excluded decimals total supply excluded total supply was aborted error value too long for type character varying call getnextexception to see other errors in the batch norg postgresql util psqlexception error value too long for type character varying norg postgresql util psqlexception error value too long for type character varying n n tat io confluent connect jdbc sink jdbcsinktask put jdbcsinktask java n tat org apache kafka connect runtime workersinktask delivermessages workersinktask java n t more ncaused by java sql sqlexception java sql batchupdateexception batch entry insert into 
metadata address timestamp name symbol decimals total supply values timestamp li pbufzidmfclpwbpalmhayseqkozyprfuptd h jagycgsnsagzb we yffbsls k on conflict address do update set timestamp excluded timestamp name excluded name symbol excluded symbol decimals excluded decimals total supply excluded total supply was aborted error value too long for type character varying call getnextexception to see other errors in the batch norg postgresql util psqlexception error value too long for type character varying norg postgresql util psqlexception error value too long for type character varying n n t more n
| 1
|
98,634
| 11,091,661,919
|
IssuesEvent
|
2019-12-15 13:56:25
|
Kokan/toowtrsywen
|
https://api.github.com/repos/Kokan/toowtrsywen
|
closed
|
provide a five-star user experience for the reader of the requirement documentation
|
documentation
|
Attila Kovacs, teacher at lecture: a requirement documentation of good quality means using cross-references (clickable in-pdf links, e.g. "Related requirement(s): ...")
Someone to investigate Google Doc's built-in possibilities (in the Add-ons section maybe?).
Converting each (major) doc version into some word processor (MS Word), and re-creating the links again and again, is believed to work at a high cost of maintenance.
And, of course, there's LaTeX. It can work.
|
1.0
|
provide a five-star user experience for the reader of the requirement documentation - Attila Kovacs, teacher at lecture: a requirement documentation of good quality means using cross-references (clickable in-pdf links, e.g. "Related requirement(s): ...")
Someone to investigate Google Doc's built-in possibilities (in the Add-ons section maybe?).
Converting each (major) doc version into some word processor (MS Word), and re-creating the links again and again, is believed to work at a high cost of maintenance.
And, of course, there's LaTeX. It can work.
|
non_process
|
provide a five star user experience for the reader of the requirement documentation attila kovacs teacher at lecture a requirement documentation of good quality means using cross references clickable in pdf links e g related requirement s someone to investigate google doc s built in possibilities in the add ons section maybe converting each major doc version into some word processor ms word and re create the links again and again is believed to work at a high cost of maintenance and of course there s latex it can work
| 0
|
721,039
| 24,815,959,511
|
IssuesEvent
|
2022-10-25 13:11:13
|
zitadel/zitadel
|
https://api.github.com/repos/zitadel/zitadel
|
closed
|
Organization pagination does not work
|
type: bug category: frontend state: ready priority: low
|
**Describe the bug**
If I show all the organizations of an instance, the first page works fine. As soon as I navigate to the second page, all the organizations are shown.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to instance organizations
2. Click on next page
3. All organizations are shown
**Expected behavior**
Pagination
|
1.0
|
Organization pagination does not work - **Describe the bug**
If I show all the organizations of an instance, the first page works fine. As soon as I navigate to the second page, all the organizations are shown.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to instance organizations
2. Click on next page
3. All organizations are shown
**Expected behavior**
Pagination
|
non_process
|
organization pagination does not work describe the bug if i show all the organizations of an instance the first page works fine as soon as i navigate to the second page all the organizations are shown to reproduce steps to reproduce the behavior go to instance organizations click on next page all organizations are shown expected behavior pagination
| 0
|
76,124
| 21,182,579,344
|
IssuesEvent
|
2022-04-08 09:24:35
|
appsmithorg/appsmith
|
https://api.github.com/repos/appsmithorg/appsmith
|
opened
|
[Bug]: Widgets overlap when multiple widgets collide with the edge of canvas
|
Bug Production Needs Triaging UI Builders Pod Drag & Drop Reflow & Resize
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Description
Uploading Reflow Overlapping Bug.mp4…
### Steps To Reproduce
1. Create a layout similar to the one in the video
2. Move the bottom most widget to the top till the top most widget completely resizes
3. See that the widgets overlap by the edge of the canvas.
### Public Sample App
_No response_
### Version
Cloud / SelfHosted
|
1.0
|
[Bug]: Widgets overlap when multiple widgets collide with the edge of canvas - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Description
Uploading Reflow Overlapping Bug.mp4…
### Steps To Reproduce
1. Create a layout similar to the one in the video
2. Move the bottom most widget to the top till the top most widget completely resizes
3. See that the widgets overlap by the edge of the canvas.
### Public Sample App
_No response_
### Version
Cloud / SelfHosted
|
non_process
|
widgets overlap when multiple widgets collide with the edge of canvas is there an existing issue for this i have searched the existing issues description uploading reflow overlapping bug … steps to reproduce create a layout similar to the one in the video move the bottom most widget to the top till the top most widget completely resizes see that the widgets overlap by the edge of the canvas public sample app no response version cloud selfhosted
| 0
|
20,644
| 27,322,738,702
|
IssuesEvent
|
2023-02-24 21:34:07
|
googleapis/google-api-java-client
|
https://api.github.com/repos/googleapis/google-api-java-client
|
closed
|
Update clirr check to ignore OOB deprecation
|
priority: p2 type: process
|
Since v2.2.0, clirr check on PRs will fail on `OOB_REDIRECT_URI` as part of the deprecated and removed OAuth OOB flow ([discussion](https://github.com/googleapis/google-api-java-client/pull/2242#discussion_r1086200169)). This is currently blocking PRs for automerge.
```
Error: 6011: com.google.api.client.googleapis.auth.oauth2.GoogleOAuthConstants: Field OOB_REDIRECT_URI has been removed, but it was previously a constant
```
Look into updating this check, likely as an addition to [clirr-ignored-differences.xml](https://github.com/googleapis/google-api-java-client/blob/main/clirr-ignored-differences.xml).
|
1.0
|
Update clirr check to ignore OOB deprecation - Since v2.2.0, clirr check on PRs will fail on `OOB_REDIRECT_URI` as part of the deprecated and removed OAuth OOB flow ([discussion](https://github.com/googleapis/google-api-java-client/pull/2242#discussion_r1086200169)). This is currently blocking PRs for automerge.
```
Error: 6011: com.google.api.client.googleapis.auth.oauth2.GoogleOAuthConstants: Field OOB_REDIRECT_URI has been removed, but it was previously a constant
```
Look into updating this check, likely as an addition to [clirr-ignored-differences.xml](https://github.com/googleapis/google-api-java-client/blob/main/clirr-ignored-differences.xml).
|
process
|
update clirr check to ignore oob deprecation since clirr check on prs will fail on oob redirect uri as part of the deprecated and removed oauth oob flow this is currently blocking prs for automerge error com google api client googleapis auth googleoauthconstants field oob redirect uri has been removed but it was previously a constant look into updating this check likely as an addition to
| 1
|
629,337
| 20,029,606,645
|
IssuesEvent
|
2022-02-02 02:56:16
|
ocaml-bench/sandmark-nightly
|
https://api.github.com/repos/ocaml-bench/sandmark-nightly
|
closed
|
Parallel benchmarks should be run on ocaml/ocaml#trunk
|
high-priority
|
The parallel benchmarks are only being run on the 5.00+domains variant which points to ocaml-multicore/ocaml-multicore#5.00. This branch is no longer in development since multicore is merged with upstream OCaml. The parallel benchmarks should now be run on ocaml/ocaml#trunk. No need to run the benchmarks on ocaml-multicore/ocaml-multicore#5.00 hereafter.
CC @shubhamkumar13 @shakthimaan.
|
1.0
|
Parallel benchmarks should be run on ocaml/ocaml#trunk - The parallel benchmarks are only being run on the 5.00+domains variant which points to ocaml-multicore/ocaml-multicore#5.00. This branch is no longer in development since multicore is merged with upstream OCaml. The parallel benchmarks should now be run on ocaml/ocaml#trunk. No need to run the benchmarks on ocaml-multicore/ocaml-multicore#5.00 hereafter.
CC @shubhamkumar13 @shakthimaan.
|
non_process
|
parallel benchmarks should be run on ocaml ocaml trunk the parallel benchmarks are only being run on the domains variant which points to ocaml multicore ocaml multicore this branch is no longer in development since multicore is merged with upstream ocaml the parallel benchmarks should now be run on ocaml ocaml trunk no need to run the benchmarks on ocaml multicore ocaml multicore hereafter cc shakthimaan
| 0
|
25,762
| 25,837,969,906
|
IssuesEvent
|
2022-12-12 21:18:51
|
pulumi/registry
|
https://api.github.com/repos/pulumi/registry
|
closed
|
Change H1s to match selected tab
|
resolution/fixed kind/enhancement area/registry impact/usability
|
## overview
we would like to change the h1s in our package pages to reflect the selected tab. this is a change intended to improve accessibility and seo. #1702
## example
currently we have an h1 in the heading of the package page that is only the name of the package. eg. `Datadog` or `Azure Native`. whenever page content changes, the h1 should reflect that.
now, we would like to move the h1 out of the heading and into the content area. the new h1s are as follows:
* the overview tab will remain as `Packagename` (e.g. `Datadog`, `Azure Native`)
* the installation tab should read `Packagename: Installation & Configuration`
* the guides tab should read `Packagename: How-to-Guides`
the API docs are an exception and are handled in another issue https://github.com/pulumi/registry/issues/1702
## reference for h1 position
<img width="1440" alt="1280+ no alert (2)" src="https://user-images.githubusercontent.com/97129308/202808527-9d320d18-7077-4302-8a50-85baa388a4f8.png">
|
True
|
Change H1s to match selected tab - ## overview
we would like to change the h1s in our package pages to reflect the selected tab. this is a change intended to improve accessibility and seo. #1702
## example
currently we have an h1 in the heading of the package page that is only the name of the package. eg. `Datadog` or `Azure Native`. whenever page content changes, the h1 should reflect that.
now, we would like to move the h1 out of the heading and into the content area. the new h1s are as follows:
* the overview tab will remain as `Packagename` (e.g. `Datadog`, `Azure Native`)
* the installation tab should read `Packagename: Installation & Configuration`
* the guides tab should read `Packagename: How-to-Guides`
the API docs are an exception and are handled in another issue https://github.com/pulumi/registry/issues/1702
## reference for h1 position
<img width="1440" alt="1280+ no alert (2)" src="https://user-images.githubusercontent.com/97129308/202808527-9d320d18-7077-4302-8a50-85baa388a4f8.png">
|
non_process
|
change to match selected tab overview we would like to change the in our package pages to reflect the selected tab this is a change intended to improve accessibility and seo example currently we have an in the heading of the package page that is only the name of the package eg datadog or azure native whenever page content changes the should reflect that now we would like to move the out of the heading and into the content area the new are as follows the overview tab will remain as packagename e g datadog azure native the installation tab should read packagename installation configuration the guides tab should read packagename how to guides the api docs are an exception and are handled in another issue reference for position img width alt no alert src
| 0
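The tab-to-H1 mapping spelled out in the bullet list of the pulumi/registry row above is small enough to capture as a lookup. This is an illustrative sketch in Python (the registry itself is not a Python codebase, and the function name and tab keys are assumptions, not Pulumi's template code):

```python
def page_h1(package_name: str, tab: str) -> str:
    """Return the tab-specific <h1> text described in the issue.

    The overview tab keeps the bare package name; the other tabs
    append a tab-specific suffix. Tab keys are illustrative.
    """
    titles = {
        "overview": package_name,
        "installation": f"{package_name}: Installation & Configuration",
        "how-to-guides": f"{package_name}: How-to-Guides",
    }
    return titles[tab]
```

For example, `page_h1("Datadog", "installation")` yields `Datadog: Installation & Configuration`, matching the second bullet.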
|
275,360
| 23,909,885,020
|
IssuesEvent
|
2022-09-09 07:06:25
|
enonic/app-contentstudio
|
https://api.github.com/repos/enonic/app-contentstudio
|
opened
|
Add ui-tests for clearing properties in option-set
|
Test
|
Verify issue Unchecking an option in an option-set should clear its underlying property set #5096
|
1.0
|
Add ui-tests for clearing properties in option-set - Verify issue Unchecking an option in an option-set should clear its underlying property set #5096
|
non_process
|
add ui tests for clearing properties in option set verify issue unchecking an option in an option set should clear its underlying property set
| 0
|
177,589
| 21,479,403,454
|
IssuesEvent
|
2022-04-26 16:14:40
|
HughC-GH-Demo/Java-Demo
|
https://api.github.com/repos/HughC-GH-Demo/Java-Demo
|
closed
|
CVE-2019-10086 (High) detected in commons-beanutils-core-1.8.3.jar - autoclosed
|
security vulnerability
|
## CVE-2019-10086 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-beanutils-core-1.8.3.jar</b></p></summary>
<p>The Apache Software Foundation provides support for the Apache community of open-source software projects.
The Apache projects are characterized by a collaborative, consensus based development process, an open and
pragmatic software license, and a desire to create high quality software that leads the way in its field.
We consider ourselves not simply a group of projects sharing a server, but rather a community of developers
and users.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.3/commons-beanutils-core-1.8.3.jar</p>
<p>
Dependency Hierarchy:
- esapi-2.1.0.1.jar (Root Library)
- :x: **commons-beanutils-core-1.8.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/HughC-GH-Demo/Java-Demo/commit/d004516188e736d7b818d1834c16cb6902032499">d004516188e736d7b818d1834c16cb6902032499</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Commons Beanutils 1.9.2, a special BeanIntrospector class was added which allows suppressing the ability for an attacker to access the classloader via the class property available on all Java objects. We, however were not using this by default characteristic of the PropertyUtilsBean.
<p>Publish Date: 2019-08-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10086>CVE-2019-10086</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/victims/victims-cve-db/commit/16a669c84d95bbbd4294f30e609049a36700847f">https://github.com/victims/victims-cve-db/commit/16a669c84d95bbbd4294f30e609049a36700847f</a></p>
<p>Release Date: 2019-08-20</p>
<p>Fix Resolution: commons-beanutils:commons-beanutils:1.9.4</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-beanutils","packageName":"commons-beanutils-core","packageVersion":"1.8.3","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.owasp.esapi:esapi:2.1.0.1;commons-beanutils:commons-beanutils-core:1.8.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-beanutils:commons-beanutils:1.9.4","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2019-10086","vulnerabilityDetails":"In Apache Commons Beanutils 1.9.2, a special BeanIntrospector class was added which allows suppressing the ability for an attacker to access the classloader via the class property available on all Java objects. We, however were not using this by default characteristic of the PropertyUtilsBean.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10086","cvss3Severity":"high","cvss3Score":"7.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2019-10086 (High) detected in commons-beanutils-core-1.8.3.jar - autoclosed - ## CVE-2019-10086 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-beanutils-core-1.8.3.jar</b></p></summary>
<p>The Apache Software Foundation provides support for the Apache community of open-source software projects.
The Apache projects are characterized by a collaborative, consensus based development process, an open and
pragmatic software license, and a desire to create high quality software that leads the way in its field.
We consider ourselves not simply a group of projects sharing a server, but rather a community of developers
and users.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.3/commons-beanutils-core-1.8.3.jar</p>
<p>
Dependency Hierarchy:
- esapi-2.1.0.1.jar (Root Library)
- :x: **commons-beanutils-core-1.8.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/HughC-GH-Demo/Java-Demo/commit/d004516188e736d7b818d1834c16cb6902032499">d004516188e736d7b818d1834c16cb6902032499</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Commons Beanutils 1.9.2, a special BeanIntrospector class was added which allows suppressing the ability for an attacker to access the classloader via the class property available on all Java objects. We, however were not using this by default characteristic of the PropertyUtilsBean.
<p>Publish Date: 2019-08-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10086>CVE-2019-10086</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/victims/victims-cve-db/commit/16a669c84d95bbbd4294f30e609049a36700847f">https://github.com/victims/victims-cve-db/commit/16a669c84d95bbbd4294f30e609049a36700847f</a></p>
<p>Release Date: 2019-08-20</p>
<p>Fix Resolution: commons-beanutils:commons-beanutils:1.9.4</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-beanutils","packageName":"commons-beanutils-core","packageVersion":"1.8.3","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.owasp.esapi:esapi:2.1.0.1;commons-beanutils:commons-beanutils-core:1.8.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-beanutils:commons-beanutils:1.9.4","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2019-10086","vulnerabilityDetails":"In Apache Commons Beanutils 1.9.2, a special BeanIntrospector class was added which allows suppressing the ability for an attacker to access the classloader via the class property available on all Java objects. We, however were not using this by default characteristic of the PropertyUtilsBean.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10086","cvss3Severity":"high","cvss3Score":"7.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in commons beanutils core jar autoclosed cve high severity vulnerability vulnerable library commons beanutils core jar the apache software foundation provides support for the apache community of open source software projects the apache projects are characterized by a collaborative consensus based development process an open and pragmatic software license and a desire to create high quality software that leads the way in its field we consider ourselves not simply a group of projects sharing a server but rather a community of developers and users path to dependency file pom xml path to vulnerable library home wss scanner repository commons beanutils commons beanutils core commons beanutils core jar dependency hierarchy esapi jar root library x commons beanutils core jar vulnerable library found in head commit a href found in base branch main vulnerability details in apache commons beanutils a special beanintrospector class was added which allows suppressing the ability for an attacker to access the classloader via the class property available on all java objects we however were not using this by default characteristic of the propertyutilsbean publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution commons beanutils commons beanutils isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org owasp esapi esapi commons beanutils commons beanutils core isminimumfixversionavailable true minimumfixversion commons beanutils commons beanutils isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails in apache commons beanutils a special beanintrospector class was added which allows suppressing the ability for an attacker to access the classloader via the class property available on all java objects we however were not using this by default characteristic of the propertyutilsbean vulnerabilityurl
| 0
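The CVE-2019-10086 row above turns on bean mappers exposing the `class` property, which hands an attacker a path to the classloader; the fix in Beanutils is a `BeanIntrospector` that suppresses such properties. A language-neutral sketch of that idea, shown in Python for illustration (this is an analogue, not the Java Beanutils API; the names here are invented):

```python
# Deny-list of attribute names that reach runtime internals. The
# Python dunders play the role that Java's "class" property plays
# in CVE-2019-10086.
SUPPRESSED = {"class", "__class__", "__dict__", "__globals__"}

def copy_properties(dest, src, names):
    """Copy the named attributes from src to dest, refusing any
    name that is suppressed or private."""
    for name in names:
        if name in SUPPRESSED or name.startswith("_"):
            raise AttributeError(f"suppressed property: {name}")
        setattr(dest, name, getattr(src, name))
    return dest
```

The point is the same as the suggested fix in the row: introspection must be filtered before property copying, rather than trusting every attribute a caller names.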
|
9,364
| 12,371,827,308
|
IssuesEvent
|
2020-05-18 19:15:54
|
googleapis/python-firestore
|
https://api.github.com/repos/googleapis/python-firestore
|
closed
|
lint check fails on master
|
api: firestore type: process
|
A recent change of flake8 or its config causes the code style check to fail on the master branch - it complains about an ambiguous variable name in a few files:
tests/unit/v1/test_order.py:210:16: E741 ambiguous variable name 'l'
tests/unit/v1beta1/test_order.py:210:16: E741 ambiguous variable name 'l'
/cc @crwilcox
|
1.0
|
lint check fails on master - A recent change of flake8 or its config causes the code style check to fail on the master branch - it complains about an ambiguous variable name in a few files:
tests/unit/v1/test_order.py:210:16: E741 ambiguous variable name 'l'
tests/unit/v1beta1/test_order.py:210:16: E741 ambiguous variable name 'l'
/cc @crwilcox
|
process
|
lint check fails on master a recent change of or its config causes the code style check to fail on the master branch it complains about an ambiguous variable name in a few files tests unit test order py ambiguous variable name l tests unit test order py ambiguous variable name l cc crwilcox
| 1
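The E741 failure in the python-firestore row above is easy to reproduce: flake8 (via pycodestyle) flags the single-letter names `l`, `O`, and `I` because they are easily confused with the digits `1` and `0`. A minimal sketch, in generic Python rather than the repo's actual test files:

```python
def total_length(segments):
    total = 0
    for l in segments:  # flake8 reports: E741 ambiguous variable name 'l'
        total += l
    return total

def total_length_fixed(segments):
    total = 0
    for seg_len in segments:  # renaming the loop variable clears E741
        total += seg_len
    return total
```

Both functions behave identically at runtime; E741 is a purely static style check, which is why a flake8 upgrade can break CI without any code change.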
|
258,894
| 22,356,350,548
|
IssuesEvent
|
2022-06-15 15:58:08
|
VirtusLab/git-machete
|
https://api.github.com/repos/VirtusLab/git-machete
|
closed
|
Add tests for resolving location of machete file from worktrees
|
testing underlying git
|
Follow-up to #361. See #360 on what's the problem (and hence, what needs to be tested).
Would be good to resolve #362 first.
|
1.0
|
Add tests for resolving location of machete file from worktrees - Follow-up to #361. See #360 on what's the problem (and hence, what needs to be tested).
Would be good to resolve #362 first.
|
non_process
|
add tests for resolving location of machete file from worktrees follow up to see on what s the problem and hence what needs to be tested would be good to resolve first
| 0
|
160,765
| 20,118,447,277
|
IssuesEvent
|
2022-02-07 22:19:02
|
sureng-ws-ibm/t1
|
https://api.github.com/repos/sureng-ws-ibm/t1
|
opened
|
selenium-webdriver-2.53.3.tgz: 1 vulnerabilities (highest severity is: 5.5)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>selenium-webdriver-2.53.3.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/adm-zip/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/sureng-ws-ibm/t1/commit/98c35103c7cca4d27c850a6900767a9b0c81bda5">98c35103c7cca4d27c850a6900767a9b0c81bda5</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2018-1002204](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1002204) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | adm-zip-0.4.4.tgz | Transitive | 3.0.0 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-1002204</summary>
### Vulnerable Library - <b>adm-zip-0.4.4.tgz</b></p>
<p>A Javascript implementation of zip for nodejs. Allows user to create or extract zip files both in memory or to/from disk</p>
<p>Library home page: <a href="https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.4.tgz">https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/adm-zip/package.json</p>
<p>
Dependency Hierarchy:
- selenium-webdriver-2.53.3.tgz (Root Library)
- :x: **adm-zip-0.4.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sureng-ws-ibm/t1/commit/98c35103c7cca4d27c850a6900767a9b0c81bda5">98c35103c7cca4d27c850a6900767a9b0c81bda5</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
adm-zip npm library before 0.4.9 is vulnerable to directory traversal, allowing attackers to write to arbitrary files via a ../ (dot dot slash) in a Zip archive entry that is mishandled during extraction. This vulnerability is also known as 'Zip-Slip'.
<p>Publish Date: 2018-07-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1002204>CVE-2018-1002204</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1002204">https://nvd.nist.gov/vuln/detail/CVE-2018-1002204</a></p>
<p>Release Date: 2018-07-25</p>
<p>Fix Resolution (adm-zip): 0.4.9</p>
<p>Direct dependency fix Resolution (selenium-webdriver): 3.0.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"selenium-webdriver","packageVersion":"2.53.3","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"selenium-webdriver:2.53.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.0.0","isBinary":false}],"baseBranches":["develop"],"vulnerabilityIdentifier":"CVE-2018-1002204","vulnerabilityDetails":"adm-zip npm library before 0.4.9 is vulnerable to directory traversal, allowing attackers to write to arbitrary files via a ../ (dot dot slash) in a Zip archive entry that is mishandled during extraction. This vulnerability is also known as \u0027Zip-Slip\u0027.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1002204","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"High"},"extraData":{}}]</REMEDIATE> -->
|
True
|
selenium-webdriver-2.53.3.tgz: 1 vulnerabilities (highest severity is: 5.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>selenium-webdriver-2.53.3.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/adm-zip/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/sureng-ws-ibm/t1/commit/98c35103c7cca4d27c850a6900767a9b0c81bda5">98c35103c7cca4d27c850a6900767a9b0c81bda5</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2018-1002204](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1002204) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | adm-zip-0.4.4.tgz | Transitive | 3.0.0 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-1002204</summary>
### Vulnerable Library - <b>adm-zip-0.4.4.tgz</b></p>
<p>A Javascript implementation of zip for nodejs. Allows user to create or extract zip files both in memory or to/from disk</p>
<p>Library home page: <a href="https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.4.tgz">https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/adm-zip/package.json</p>
<p>
Dependency Hierarchy:
- selenium-webdriver-2.53.3.tgz (Root Library)
- :x: **adm-zip-0.4.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sureng-ws-ibm/t1/commit/98c35103c7cca4d27c850a6900767a9b0c81bda5">98c35103c7cca4d27c850a6900767a9b0c81bda5</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
adm-zip npm library before 0.4.9 is vulnerable to directory traversal, allowing attackers to write to arbitrary files via a ../ (dot dot slash) in a Zip archive entry that is mishandled during extraction. This vulnerability is also known as 'Zip-Slip'.
<p>Publish Date: 2018-07-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1002204>CVE-2018-1002204</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1002204">https://nvd.nist.gov/vuln/detail/CVE-2018-1002204</a></p>
<p>Release Date: 2018-07-25</p>
<p>Fix Resolution (adm-zip): 0.4.9</p>
<p>Direct dependency fix Resolution (selenium-webdriver): 3.0.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"selenium-webdriver","packageVersion":"2.53.3","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"selenium-webdriver:2.53.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.0.0","isBinary":false}],"baseBranches":["develop"],"vulnerabilityIdentifier":"CVE-2018-1002204","vulnerabilityDetails":"adm-zip npm library before 0.4.9 is vulnerable to directory traversal, allowing attackers to write to arbitrary files via a ../ (dot dot slash) in a Zip archive entry that is mishandled during extraction. This vulnerability is also known as \u0027Zip-Slip\u0027.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1002204","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"High"},"extraData":{}}]</REMEDIATE> -->
|
non_process
|
selenium webdriver tgz vulnerabilities highest severity is vulnerable library selenium webdriver tgz path to dependency file package json path to vulnerable library node modules adm zip package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available medium adm zip tgz transitive ✅ details cve vulnerable library adm zip tgz a javascript implementation of zip for nodejs allows user to create or extract zip files both in memory or to from disk library home page a href path to dependency file package json path to vulnerable library node modules adm zip package json dependency hierarchy selenium webdriver tgz root library x adm zip tgz vulnerable library found in head commit a href found in base branch develop vulnerability details adm zip npm library before is vulnerable to directory traversal allowing attackers to write to arbitrary files via a dot dot slash in a zip archive entry that is mishandled during extraction this vulnerability is also known as zip slip publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution adm zip direct dependency fix resolution selenium webdriver rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue istransitivedependency false dependencytree selenium webdriver isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails adm zip npm library before is vulnerable to directory traversal allowing attackers to write to arbitrary files via a dot dot slash in a zip archive entry that is mishandled during extraction this vulnerability is also known as slip vulnerabilityurl
| 0
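The "Zip-Slip" traversal described in the row above is the same across languages: an archive entry named with `../` components escapes the extraction directory. The real fix is upgrading adm-zip (a JavaScript library) to 0.4.9+, but the guard itself can be sketched in a few lines; Python is used here purely for illustration:

```python
import os
import zipfile

def safe_extract(zip_path, dest_dir):
    """Extract a zip archive, rejecting 'Zip-Slip' entries whose
    normalized path would escape dest_dir via '../' components."""
    dest_dir = os.path.abspath(dest_dir)
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            # Resolve the entry against the destination, then check
            # the result is still inside the destination.
            target = os.path.abspath(os.path.join(dest_dir, name))
            if not target.startswith(dest_dir + os.sep):
                raise ValueError(f"blocked path traversal entry: {name}")
        zf.extractall(dest_dir)
```

The check must run on the *normalized* path: comparing raw entry names against a prefix misses entries like `sub/../../evil.txt`.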
|
14,373
| 17,397,277,253
|
IssuesEvent
|
2021-08-02 14:52:28
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
Huge Spike in unknown user agents
|
log-processing question
|
For a couple of days, I can see a huge spike in unknown user agents.
I have done some tests and android and windows devices seem still to be logged correctly.
I can't verify apple devices but I assume they aren't logged correctly anymore. Idk how I reached a spike of 17% of Unknown user agents.
|
1.0
|
Huge Spike in unknown user agents -
For a couple of days, I can see a huge spike in unknown user agents.
I have done some tests and android and windows devices seem still to be logged correctly.
I can't verify apple devices but I assume they aren't logged correctly anymore. Idk how I reached a spike of 17% of Unknown user agents.
|
process
|
huge spike in unknown user agents for a couple of days i can see a huge spike in unknown user agents i have done some tests and android and windows devices seem still to be logged correctly i can t verify apple devices but i assume they aren t logged correctly anymore idk how i reached a spike of of unknown user agents
| 1
|
20,883
| 27,708,103,521
|
IssuesEvent
|
2023-03-14 12:36:43
|
toggl/track-windows-feedback
|
https://api.github.com/repos/toggl/track-windows-feedback
|
closed
|
Suggestion: Please add release date to your changelog
|
processed
|
The new Windows native app has been wonderful so far! Thank you for the great work. I just noticed we can now see a changelog in the About > Whats New menu. This is great. But could you add dates to each version entry so I can get an idea of when something was updated? I never know if there were recent updates to look at or if these maybe were from months ago. Thanks!
|
1.0
|
Suggestion: Please add release date to your changelog - The new Windows native app has been wonderful so far! Thank you for the great work. I just noticed we can now see a changelog in the About > Whats New menu. This is great. But could you add dates to each version entry so I can get an idea of when something was updated? I never know if there were recent updates to look at or if these maybe were from months ago. Thanks!
|
process
|
suggestion please add release date to your changelog the new windows native app has been wonderful so far thank you for the great work i just noticed we can now see a changelog in the about whats new menu this is great but could you add dates to each version entry so i can get an idea of when something was updated i never know if there were recent updates to look at or if these maybe were from months ago thanks
| 1
|
157,486
| 19,957,420,918
|
IssuesEvent
|
2022-01-28 02:02:33
|
Aivolt1/u-i-u-x-volt-ai
|
https://api.github.com/repos/Aivolt1/u-i-u-x-volt-ai
|
opened
|
CVE-2022-0355 (High) detected in simple-get-3.1.0.tgz
|
security vulnerability
|
## CVE-2022-0355 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>simple-get-3.1.0.tgz</b></p></summary>
<p>Simplest way to make http get requests. Supports HTTPS, redirects, gzip/deflate, streams in < 100 lines.</p>
<p>Library home page: <a href="https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz">https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/simple-get</p>
<p>
Dependency Hierarchy:
- gatsby-plugin-manifest-3.9.0.tgz (Root Library)
- sharp-0.28.3.tgz
- :x: **simple-get-3.1.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in NPM simple-get prior to 4.0.1.
<p>Publish Date: 2022-01-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0355>CVE-2022-0355</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355</a></p>
<p>Release Date: 2022-01-26</p>
<p>Fix Resolution: simple-get - 4.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-0355 (High) detected in simple-get-3.1.0.tgz - ## CVE-2022-0355 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>simple-get-3.1.0.tgz</b></p></summary>
<p>Simplest way to make http get requests. Supports HTTPS, redirects, gzip/deflate, streams in < 100 lines.</p>
<p>Library home page: <a href="https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz">https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/simple-get</p>
<p>
Dependency Hierarchy:
- gatsby-plugin-manifest-3.9.0.tgz (Root Library)
- sharp-0.28.3.tgz
- :x: **simple-get-3.1.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in NPM simple-get prior to 4.0.1.
<p>Publish Date: 2022-01-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0355>CVE-2022-0355</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355</a></p>
<p>Release Date: 2022-01-26</p>
<p>Fix Resolution: simple-get - 4.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in simple get tgz cve high severity vulnerability vulnerable library simple get tgz simplest way to make http get requests supports https redirects gzip deflate streams in library home page a href path to dependency file package json path to vulnerable library node modules simple get dependency hierarchy gatsby plugin manifest tgz root library sharp tgz x simple get tgz vulnerable library found in base branch master vulnerability details exposure of sensitive information to an unauthorized actor in npm simple get prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution simple get step up your open source security game with whitesource
| 0
|
430,279
| 12,450,632,181
|
IssuesEvent
|
2020-05-27 09:07:12
|
wso2/carbon-apimgt
|
https://api.github.com/repos/wso2/carbon-apimgt
|
opened
|
[Publisher] Can not scroll Open API editor using mouse wheel
|
Priority/Normal Type/Bug
|
### Description:
Cannot scroll the Open API definition using the mouse wheel when focusing the Open API editor
### Steps to reproduce:
- Download WSO2 APIM 3.1.0
- Start server
- Go to publisher
- Create an API with an Open API Definition
- Save
- Edit API
- Go to API Definition > Edit
- Click on the left part and try to use the mouse wheel
- The left part doesn't move
### Affected Product Version:
WSO2 APIM 3.1.0
### Environment details (with versions):
- OS: RedHat
- Client: Firefox 76.0.1
- Env (Docker/K8s):
|
1.0
|
[Publisher] Can not scroll Open API editor using mouse wheel - ### Description:
Cannot scroll the Open API definition using the mouse wheel when focusing the Open API editor
### Steps to reproduce:
- Download WSO2 APIM 3.1.0
- Start server
- Go to publisher
- Create an API with an Open API Definition
- Save
- Edit API
- Go to API Definition > Edit
- Click on the left part and try to use the mouse wheel
- The left part doesn't move
### Affected Product Version:
WSO2 APIM 3.1.0
### Environment details (with versions):
- OS: RedHat
- Client: Firefox 76.0.1
- Env (Docker/K8s):
|
non_process
|
can not scroll open api editor using mouse wheel description can not scroll open api definition using mouse wheel when focusing open api editor steps to reproduce download apim start server go to publisher create an api with an open api definition save edit api go to api definition edit click on left part and try to use mouse wheel the left part doesn t move affected product version apim environment details with versions os redhat client firefox env docker
| 0
|
81,459
| 3,591,211,907
|
IssuesEvent
|
2016-02-01 10:39:10
|
ow2-proactive/scheduling
|
https://api.github.com/repos/ow2-proactive/scheduling
|
closed
|
Replicate job with 1000 tasks fails to execute
|
priority:minor resolution:fixed type:bug
|
<a href="https://jira.activeeon.com/browse/SCHEDULING-2218" title="SCHEDULING-2218">Original issue</a> created by <a href="mailto:youri.bonnaffe_AT_activeeon.com">Youri Bonnaffe</a> on 14, Jan 2015 at 10:51 AM - SCHEDULING-2218
<hr />
When submitting a replicate job (split/process/merge) with 1000 replicated tasks, the merge task will fail because of a database error (see below).
It looks like it could be related to org.ow2.proactive.scheduler.core.db.SchedulerDBManager#loadTasksResults where an IN clause is used with task ids (here a large number).
```
[2015-01-14 10:34:56,189 WARN o.h.e.j.s.SqlExceptionHelper] SQL Error: 20000, SQLState: 42ZA0
[2015-01-14 10:34:56,189 ERROR o.h.e.j.s.SqlExceptionHelper] Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error.
[2015-01-14 10:34:56,189 WARN o.h.e.j.s.SqlExceptionHelper] SQL Error: 20000, SQLState: XBCM4
[2015-01-14 10:34:56,189 ERROR o.h.e.j.s.SqlExceptionHelper] Java class file format limit(s) exceeded: method:e4 code_length (146427 > 65535) in generated class org.apache.derby.exe.ac19e6c5dcx014axe7c3xcafex00000765b8783.
[2015-01-14 10:34:56,190 WARN o.o.p.s.c.d.TransactionHelper] DB operation failed
org.hibernate.exception.SQLGrammarException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error.
at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:82)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:49)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110)
at org.hibernate.engine.jdbc.internal.proxy.ConnectionProxyHandler.continueInvocation(ConnectionProxyHandler.java:146)
at org.hibernate.engine.jdbc.internal.proxy.AbstractProxyHandler.invoke(AbstractProxyHandler.java:81)
at com.sun.proxy.$Proxy32.prepareStatement(Unknown Source)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:147)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:166)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:145)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1720)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1697)
at org.hibernate.loader.Loader.doQuery(Loader.java:832)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:293)
at org.hibernate.loader.Loader.doList(Loader.java:2382)
at org.hibernate.loader.Loader.doList(Loader.java:2368)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2198)
at org.hibernate.loader.Loader.list(Loader.java:2193)
at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:470)
at org.hibernate.hql.internal.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:355)
at org.hibernate.engine.query.spi.HQLQueryPlan.performList(HQLQueryPlan.java:195)
at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1244)
at org.hibernate.internal.QueryImpl.list(QueryImpl.java:101)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.loadJobResult(SchedulerDBManager.java:1324)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.access$1700(SchedulerDBManager.java:79)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager$29.executeWork(SchedulerDBManager.java:1254)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager$29.executeWork(SchedulerDBManager.java:1230)
at org.ow2.proactive.scheduler.core.db.TransactionHelper.runWithoutTransaction(TransactionHelper.java:95)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.runWithoutTransaction(SchedulerDBManager.java:1655)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.loadTasksResults(SchedulerDBManager.java:1230)
at org.ow2.proactive.scheduler.core.TimedDoTaskAction.call(TimedDoTaskAction.java:123)
at org.ow2.proactive.scheduler.core.TimedDoTaskAction.call(TimedDoTaskAction.java:70)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLSyntaxErrorException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error.
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement20.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement30.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement40.<init>(Unknown Source)
at org.apache.derby.jdbc.Driver40.newEmbedPreparedStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:213)
at sun.reflect.GeneratedMethodAccessor214.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.hibernate.engine.jdbc.internal.proxy.ConnectionProxyHandler.continueInvocation(ConnectionProxyHandler.java:138)
... 31 more
Caused by: java.sql.SQLException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
... 50 more
Caused by: java.sql.SQLException: Java class file format limit(s) exceeded: method:e4 code_length (146427 > 65535) in generated class org.apache.derby.exe.ac19e6c5dcx014axe7c3xcafex00000765b8783.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
... 47 more
Caused by: ERROR XBCM4: Java class file format limit(s) exceeded: method:e4 code_length (146427 > 65535) in generated class org.apache.derby.exe.ac19e6c5dcx014axe7c3xcafex00000765b8783.
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.impl.services.bytecode.BCClass.getClassBytecode(Unknown Source)
at org.apache.derby.impl.services.bytecode.GClass.getGeneratedClass(Unknown Source)
at org.apache.derby.impl.sql.compile.ExpressionClassBuilder.getGeneratedClass(Unknown Source)
at org.apache.derby.impl.sql.compile.StatementNode.generate(Unknown Source)
at org.apache.derby.impl.sql.GenericStatement.prepMinion(Unknown Source)
at org.apache.derby.impl.sql.GenericStatement.prepare(Unknown Source)
at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.prepareInternalStatement(Unknown Source)
... 43 more
[2015-01-14 10:34:56,192 WARN o.o.p.s.c.TimedDoTaskAction] Failed to start task: DB operation failed
org.ow2.proactive.db.DatabaseManagerException: DB operation failed
at org.ow2.proactive.db.DatabaseManagerExceptionHandler.handle(DatabaseManagerExceptionHandler.java:152)
at org.ow2.proactive.scheduler.core.db.TransactionHelper.runWithoutTransaction(TransactionHelper.java:99)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.runWithoutTransaction(SchedulerDBManager.java:1655)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.loadTasksResults(SchedulerDBManager.java:1230)
at org.ow2.proactive.scheduler.core.TimedDoTaskAction.call(TimedDoTaskAction.java:123)
at org.ow2.proactive.scheduler.core.TimedDoTaskAction.call(TimedDoTaskAction.java:70)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.hibernate.exception.SQLGrammarException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error.
at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:82)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:49)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110)
at org.hibernate.engine.jdbc.internal.proxy.ConnectionProxyHandler.continueInvocation(ConnectionProxyHandler.java:146)
at org.hibernate.engine.jdbc.internal.proxy.AbstractProxyHandler.invoke(AbstractProxyHandler.java:81)
at com.sun.proxy.$Proxy32.prepareStatement(Unknown Source)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:147)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:166)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:145)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1720)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1697)
at org.hibernate.loader.Loader.doQuery(Loader.java:832)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:293)
at org.hibernate.loader.Loader.doList(Loader.java:2382)
at org.hibernate.loader.Loader.doList(Loader.java:2368)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2198)
at org.hibernate.loader.Loader.list(Loader.java:2193)
at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:470)
at org.hibernate.hql.internal.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:355)
at org.hibernate.engine.query.spi.HQLQueryPlan.performList(HQLQueryPlan.java:195)
at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1244)
at org.hibernate.internal.QueryImpl.list(QueryImpl.java:101)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.loadJobResult(SchedulerDBManager.java:1324)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.access$1700(SchedulerDBManager.java:79)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager$29.executeWork(SchedulerDBManager.java:1254)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager$29.executeWork(SchedulerDBManager.java:1230)
at org.ow2.proactive.scheduler.core.db.TransactionHelper.runWithoutTransaction(TransactionHelper.java:95)
... 8 more
Caused by: java.sql.SQLSyntaxErrorException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error.
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement20.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement30.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement40.<init>(Unknown Source)
at org.apache.derby.jdbc.Driver40.newEmbedPreparedStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:213)
at sun.reflect.GeneratedMethodAccessor214.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.hibernate.engine.jdbc.internal.proxy.ConnectionProxyHandler.continueInvocation(ConnectionProxyHandler.java:138)
... 31 more
Caused by: java.sql.SQLException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
... 50 more
Caused by: java.sql.SQLException: Java class file format limit(s) exceeded: method:e4 code_length (146427 > 65535) in generated class org.apache.derby.exe.ac19e6c5dcx014axe7c3xcafex00000765b8783.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
... 47 more
Caused by: ERROR XBCM4: Java class file format limit(s) exceeded: method:e4 code_length (146427 > 65535) in generated class org.apache.derby.exe.ac19e6c5dcx014axe7c3xcafex00000765b8783.
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.impl.services.bytecode.BCClass.getClassBytecode(Unknown Source)
at org.apache.derby.impl.services.bytecode.GClass.getGeneratedClass(Unknown Source)
at org.apache.derby.impl.sql.compile.ExpressionClassBuilder.getGeneratedClass(Unknown Source)
at org.apache.derby.impl.sql.compile.StatementNode.generate(Unknown Source)
at org.apache.derby.impl.sql.GenericStatement.prepMinion(Unknown Source)
at org.apache.derby.impl.sql.GenericStatement.prepare(Unknown Source)
at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.prepareInternalStatement(Unknown Source)
... 43 more
[2015-01-14 10:34:56,193 INFO o.o.p.s.c.TimedDoTaskAction] Trying to restart task '2110002'
```
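The `SchedulerDBManager#loadTasksResults` call named above binds all task ids into a single `IN` clause, which is what pushes Derby past its 64 KB generated-method limit once the job has 1000 replicated tasks. A common workaround (a hedged sketch only, not the fix actually applied in the scheduler; `InClauseBatcher`, `partition`, and `BATCH_SIZE` are illustrative names) is to split the id list into fixed-size batches and run one query per batch:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: split a large id list into fixed-size batches so that
// each generated "... WHERE id IN (?, ?, ...)" statement stays well below
// Derby's generated-class size limit. BATCH_SIZE is a conservative guess.
public class InClauseBatcher {
    static final int BATCH_SIZE = 500;

    // Partition ids into consecutive sublists of at most `size` elements.
    static <T> List<List<T>> partition(List<T> ids, int size) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += size) {
            batches.add(ids.subList(i, Math.min(i + size, ids.size())));
        }
        return batches;
    }
}
```

Each sublist would then be bound to its own `IN` clause (for example, one Hibernate query per batch) and the partial results merged in memory before building the job result.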
```xml
<?xml version="1.0" encoding="UTF-8"?>
<job
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="urn:proactive:jobdescriptor:3.2"
xsi:schemaLocation="urn:proactive:jobdescriptor:3.2 http://www.activeeon.com/public_content/schemas/proactive/jobdescriptor/3.2/schedulerjob.xsd"
name="Replicate"
priority="normal"
cancelJobOnError="false">
<description>
<![CDATA[ A sample workflow that solves an classic split/process/merge problem. ]]>
</description>
<taskFlow>
<task name="Split">
<description>
<![CDATA[ This task defines some input, here strings to be processed. ]]>
</description>
<scriptExecutable>
<script>
<code language="groovy">
<![CDATA[
result = [0:"abc", 1:"def"]
]]>
</code>
</script>
</scriptExecutable>
<controlFlow >
<replicate>
<script>
<code language="groovy">
<![CDATA[
runs=1000
]]>
</code>
</script>
</replicate>
</controlFlow>
</task>
<task name="Process">
<description>
<![CDATA[ This task will be replicated according to the 'runs' value specified in the replication script. The replication index is used in each task's instance to select the input. ]]>
</description>
<depends>
<task ref="Split"/>
</depends>
<scriptExecutable>
<script>
<code language="groovy">
<![CDATA[
println "hello"
]]>
</code>
</script>
</scriptExecutable>
<controlFlow block="none"></controlFlow>
</task>
<task name="Merge">
<description>
<![CDATA[ As a merge operation, we simply print the results from previous tasks. ]]>
</description>
<depends>
<task ref="Process"/>
</depends>
<scriptExecutable>
<script>
<code language="groovy">
<![CDATA[
println results
]]>
</code>
</script>
</scriptExecutable>
</task>
</taskFlow>
</job>
```
|
1.0
|
Replicate job with 1000 tasks fails to execute - <a href="https://jira.activeeon.com/browse/SCHEDULING-2218" title="SCHEDULING-2218">Original issue</a> created by <a href="mailto:youri.bonnaffe_AT_activeeon.com">Youri Bonnaffe</a> on 14, Jan 2015 at 10:51 AM - SCHEDULING-2218
<hr />
When submitting a replicate job (split/process/merge) with 1000 replicated tasks, the merge task will fail because of a database error (see below).
It looks like it could be related to org.ow2.proactive.scheduler.core.db.SchedulerDBManager#loadTasksResults where an IN clause is used with task ids (here a large number).
```
[2015-01-14 10:34:56,189 WARN o.h.e.j.s.SqlExceptionHelper] SQL Error: 20000, SQLState: 42ZA0
[2015-01-14 10:34:56,189 ERROR o.h.e.j.s.SqlExceptionHelper] Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error.
[2015-01-14 10:34:56,189 WARN o.h.e.j.s.SqlExceptionHelper] SQL Error: 20000, SQLState: XBCM4
[2015-01-14 10:34:56,189 ERROR o.h.e.j.s.SqlExceptionHelper] Java class file format limit(s) exceeded: method:e4 code_length (146427 > 65535) in generated class org.apache.derby.exe.ac19e6c5dcx014axe7c3xcafex00000765b8783.
[2015-01-14 10:34:56,190 WARN o.o.p.s.c.d.TransactionHelper] DB operation failed
org.hibernate.exception.SQLGrammarException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error.
at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:82)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:49)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110)
at org.hibernate.engine.jdbc.internal.proxy.ConnectionProxyHandler.continueInvocation(ConnectionProxyHandler.java:146)
at org.hibernate.engine.jdbc.internal.proxy.AbstractProxyHandler.invoke(AbstractProxyHandler.java:81)
at com.sun.proxy.$Proxy32.prepareStatement(Unknown Source)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:147)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:166)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:145)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1720)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1697)
at org.hibernate.loader.Loader.doQuery(Loader.java:832)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:293)
at org.hibernate.loader.Loader.doList(Loader.java:2382)
at org.hibernate.loader.Loader.doList(Loader.java:2368)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2198)
at org.hibernate.loader.Loader.list(Loader.java:2193)
at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:470)
at org.hibernate.hql.internal.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:355)
at org.hibernate.engine.query.spi.HQLQueryPlan.performList(HQLQueryPlan.java:195)
at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1244)
at org.hibernate.internal.QueryImpl.list(QueryImpl.java:101)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.loadJobResult(SchedulerDBManager.java:1324)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.access$1700(SchedulerDBManager.java:79)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager$29.executeWork(SchedulerDBManager.java:1254)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager$29.executeWork(SchedulerDBManager.java:1230)
at org.ow2.proactive.scheduler.core.db.TransactionHelper.runWithoutTransaction(TransactionHelper.java:95)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.runWithoutTransaction(SchedulerDBManager.java:1655)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.loadTasksResults(SchedulerDBManager.java:1230)
at org.ow2.proactive.scheduler.core.TimedDoTaskAction.call(TimedDoTaskAction.java:123)
at org.ow2.proactive.scheduler.core.TimedDoTaskAction.call(TimedDoTaskAction.java:70)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLSyntaxErrorException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error.
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement20.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement30.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement40.<init>(Unknown Source)
at org.apache.derby.jdbc.Driver40.newEmbedPreparedStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:213)
at sun.reflect.GeneratedMethodAccessor214.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.hibernate.engine.jdbc.internal.proxy.ConnectionProxyHandler.continueInvocation(ConnectionProxyHandler.java:138)
... 31 more
Caused by: java.sql.SQLException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
... 50 more
Caused by: java.sql.SQLException: Java class file format limit(s) exceeded: method:e4 code_length (146427 > 65535) in generated class org.apache.derby.exe.ac19e6c5dcx014axe7c3xcafex00000765b8783.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
... 47 more
Caused by: ERROR XBCM4: Java class file format limit(s) exceeded: method:e4 code_length (146427 > 65535) in generated class org.apache.derby.exe.ac19e6c5dcx014axe7c3xcafex00000765b8783.
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.impl.services.bytecode.BCClass.getClassBytecode(Unknown Source)
at org.apache.derby.impl.services.bytecode.GClass.getGeneratedClass(Unknown Source)
at org.apache.derby.impl.sql.compile.ExpressionClassBuilder.getGeneratedClass(Unknown Source)
at org.apache.derby.impl.sql.compile.StatementNode.generate(Unknown Source)
at org.apache.derby.impl.sql.GenericStatement.prepMinion(Unknown Source)
at org.apache.derby.impl.sql.GenericStatement.prepare(Unknown Source)
at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.prepareInternalStatement(Unknown Source)
... 43 more
[2015-01-14 10:34:56,192 WARN o.o.p.s.c.TimedDoTaskAction] Failed to start task: DB operation failed
org.ow2.proactive.db.DatabaseManagerException: DB operation failed
at org.ow2.proactive.db.DatabaseManagerExceptionHandler.handle(DatabaseManagerExceptionHandler.java:152)
at org.ow2.proactive.scheduler.core.db.TransactionHelper.runWithoutTransaction(TransactionHelper.java:99)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.runWithoutTransaction(SchedulerDBManager.java:1655)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.loadTasksResults(SchedulerDBManager.java:1230)
at org.ow2.proactive.scheduler.core.TimedDoTaskAction.call(TimedDoTaskAction.java:123)
at org.ow2.proactive.scheduler.core.TimedDoTaskAction.call(TimedDoTaskAction.java:70)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.hibernate.exception.SQLGrammarException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error.
at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:82)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:49)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110)
at org.hibernate.engine.jdbc.internal.proxy.ConnectionProxyHandler.continueInvocation(ConnectionProxyHandler.java:146)
at org.hibernate.engine.jdbc.internal.proxy.AbstractProxyHandler.invoke(AbstractProxyHandler.java:81)
at com.sun.proxy.$Proxy32.prepareStatement(Unknown Source)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:147)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:166)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:145)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1720)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1697)
at org.hibernate.loader.Loader.doQuery(Loader.java:832)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:293)
at org.hibernate.loader.Loader.doList(Loader.java:2382)
at org.hibernate.loader.Loader.doList(Loader.java:2368)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2198)
at org.hibernate.loader.Loader.list(Loader.java:2193)
at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:470)
at org.hibernate.hql.internal.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:355)
at org.hibernate.engine.query.spi.HQLQueryPlan.performList(HQLQueryPlan.java:195)
at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1244)
at org.hibernate.internal.QueryImpl.list(QueryImpl.java:101)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.loadJobResult(SchedulerDBManager.java:1324)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager.access$1700(SchedulerDBManager.java:79)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager$29.executeWork(SchedulerDBManager.java:1254)
at org.ow2.proactive.scheduler.core.db.SchedulerDBManager$29.executeWork(SchedulerDBManager.java:1230)
at org.ow2.proactive.scheduler.core.db.TransactionHelper.runWithoutTransaction(TransactionHelper.java:95)
... 8 more
Caused by: java.sql.SQLSyntaxErrorException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error.
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement20.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement30.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement40.<init>(Unknown Source)
at org.apache.derby.jdbc.Driver40.newEmbedPreparedStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:213)
at sun.reflect.GeneratedMethodAccessor214.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.hibernate.engine.jdbc.internal.proxy.ConnectionProxyHandler.continueInvocation(ConnectionProxyHandler.java:138)
... 31 more
Caused by: java.sql.SQLException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
... 50 more
Caused by: java.sql.SQLException: Java class file format limit(s) exceeded: method:e4 code_length (146427 > 65535) in generated class org.apache.derby.exe.ac19e6c5dcx014axe7c3xcafex00000765b8783.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
... 47 more
Caused by: ERROR XBCM4: Java class file format limit(s) exceeded: method:e4 code_length (146427 > 65535) in generated class org.apache.derby.exe.ac19e6c5dcx014axe7c3xcafex00000765b8783.
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.impl.services.bytecode.BCClass.getClassBytecode(Unknown Source)
at org.apache.derby.impl.services.bytecode.GClass.getGeneratedClass(Unknown Source)
at org.apache.derby.impl.sql.compile.ExpressionClassBuilder.getGeneratedClass(Unknown Source)
at org.apache.derby.impl.sql.compile.StatementNode.generate(Unknown Source)
at org.apache.derby.impl.sql.GenericStatement.prepMinion(Unknown Source)
at org.apache.derby.impl.sql.GenericStatement.prepare(Unknown Source)
at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.prepareInternalStatement(Unknown Source)
... 43 more
[2015-01-14 10:34:56,193 INFO o.o.p.s.c.TimedDoTaskAction] Trying to restart task '2110002'
```
```xml
<?xml version="1.0" encoding="UTF-8"?>
<job
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="urn:proactive:jobdescriptor:3.2"
xsi:schemaLocation="urn:proactive:jobdescriptor:3.2 http://www.activeeon.com/public_content/schemas/proactive/jobdescriptor/3.2/schedulerjob.xsd"
name="Replicate"
priority="normal"
cancelJobOnError="false">
<description>
<![CDATA[ A sample workflow that solves an classic split/process/merge problem. ]]>
</description>
<taskFlow>
<task name="Split">
<description>
<![CDATA[ This task defines some input, here strings to be processed. ]]>
</description>
<scriptExecutable>
<script>
<code language="groovy">
<![CDATA[
result = [0:"abc", 1:"def"]
]]>
</code>
</script>
</scriptExecutable>
<controlFlow >
<replicate>
<script>
<code language="groovy">
<![CDATA[
runs=1000
]]>
</code>
</script>
</replicate>
</controlFlow>
</task>
<task name="Process">
<description>
<![CDATA[ This task will be replicated according to the 'runs' value specified in the replication script. The replication index is used in each task's instance to select the input. ]]>
</description>
<depends>
<task ref="Split"/>
</depends>
<scriptExecutable>
<script>
<code language="groovy">
<![CDATA[
println "hello"
]]>
</code>
</script>
</scriptExecutable>
<controlFlow block="none"></controlFlow>
</task>
<task name="Merge">
<description>
<![CDATA[ As a merge operation, we simply print the results from previous tasks. ]]>
</description>
<depends>
<task ref="Process"/>
</depends>
<scriptExecutable>
<script>
<code language="groovy">
<![CDATA[
println results
]]>
</code>
</script>
</scriptExecutable>
</task>
</taskFlow>
</job>
```
|
non_process
|
replicate job with tasks fails to execute original issue created by youri bonnaffe on jan at am scheduling when submitting a replicate job split process merge with replicated tasks the merge task will fail because of a database error see below it looks like it could be related to org proactive scheduler core db schedulerdbmanager loadtasksresults where an in clause is used with task ids here a large number warn o h e j s sqlexceptionhelper sql error sqlstate statement too complex try rewriting the query to remove complexity eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error sql error sqlstate java class file format limit s exceeded method code length in generated class org apache derby exe db operation failed org hibernate exception sqlgrammarexception statement too complex try rewriting the query to remove complexity eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error at org hibernate exception internal sqlexceptiontypedelegate convert sqlexceptiontypedelegate java at org hibernate exception internal standardsqlexceptionconverter convert standardsqlexceptionconverter java at org hibernate engine jdbc spi sqlexceptionhelper convert sqlexceptionhelper java at org hibernate engine jdbc spi sqlexceptionhelper convert sqlexceptionhelper java at org hibernate engine jdbc internal proxy connectionproxyhandler continueinvocation connectionproxyhandler java at org hibernate engine jdbc internal proxy abstractproxyhandler invoke abstractproxyhandler java at com sun proxy preparestatement unknown source at org hibernate engine jdbc internal statementpreparerimpl doprepare statementpreparerimpl java at org hibernate engine jdbc internal statementpreparerimpl statementpreparationtemplate preparestatement statementpreparerimpl java at org hibernate engine jdbc internal statementpreparerimpl 
preparequerystatement statementpreparerimpl java at org hibernate loader loader preparequerystatement loader java at org hibernate loader loader executequerystatement loader java at org hibernate loader loader doquery loader java at org hibernate loader loader doqueryandinitializenonlazycollections loader java at org hibernate loader loader dolist loader java at org hibernate loader loader dolist loader java at org hibernate loader loader listignorequerycache loader java at org hibernate loader loader list loader java at org hibernate loader hql queryloader list queryloader java at org hibernate hql internal ast querytranslatorimpl list querytranslatorimpl java at org hibernate engine query spi hqlqueryplan performlist hqlqueryplan java at org hibernate internal sessionimpl list sessionimpl java at org hibernate internal queryimpl list queryimpl java at org proactive scheduler core db schedulerdbmanager loadjobresult schedulerdbmanager java at org proactive scheduler core db schedulerdbmanager access schedulerdbmanager java at org proactive scheduler core db schedulerdbmanager executework schedulerdbmanager java at org proactive scheduler core db schedulerdbmanager executework schedulerdbmanager java at org proactive scheduler core db transactionhelper runwithouttransaction transactionhelper java at org proactive scheduler core db schedulerdbmanager runwithouttransaction schedulerdbmanager java at org proactive scheduler core db schedulerdbmanager loadtasksresults schedulerdbmanager java at org proactive scheduler core timeddotaskaction call timeddotaskaction java at org proactive scheduler core timeddotaskaction call timeddotaskaction java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java sql sqlsyntaxerrorexception statement too complex try rewriting the 
query to remove complexity eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error at org apache derby impl jdbc getsqlexception unknown source at org apache derby impl jdbc util newembedsqlexception unknown source at org apache derby impl jdbc util seenextexception unknown source at org apache derby impl jdbc transactionresourceimpl wrapinsqlexception unknown source at org apache derby impl jdbc transactionresourceimpl handleexception unknown source at org apache derby impl jdbc embedconnection handleexception unknown source at org apache derby impl jdbc connectionchild handleexception unknown source at org apache derby impl jdbc embedpreparedstatement unknown source at org apache derby impl jdbc unknown source at org apache derby impl jdbc unknown source at org apache derby impl jdbc unknown source at org apache derby jdbc newembedpreparedstatement unknown source at org apache derby impl jdbc embedconnection preparestatement unknown source at org apache derby impl jdbc embedconnection preparestatement unknown source at com mchange impl newproxyconnection preparestatement newproxyconnection java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org hibernate engine jdbc internal proxy connectionproxyhandler continueinvocation connectionproxyhandler java more caused by java sql sqlexception statement too complex try rewriting the query to remove complexity eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error at org apache derby impl jdbc sqlexceptionfactory getsqlexception unknown source at org apache derby impl jdbc wrapargsfortransportacrossdrda unknown source more caused by java sql sqlexception java class file format limit s exceeded method code length in 
generated class org apache derby exe at org apache derby impl jdbc sqlexceptionfactory getsqlexception unknown source at org apache derby impl jdbc wrapargsfortransportacrossdrda unknown source at org apache derby impl jdbc getsqlexception unknown source at org apache derby impl jdbc util generatecssqlexception unknown source at org apache derby impl jdbc transactionresourceimpl wrapinsqlexception unknown source more caused by error java class file format limit s exceeded method code length in generated class org apache derby exe at org apache derby iapi error standardexception newexception unknown source at org apache derby impl services bytecode bcclass getclassbytecode unknown source at org apache derby impl services bytecode gclass getgeneratedclass unknown source at org apache derby impl sql compile expressionclassbuilder getgeneratedclass unknown source at org apache derby impl sql compile statementnode generate unknown source at org apache derby impl sql genericstatement prepminion unknown source at org apache derby impl sql genericstatement prepare unknown source at org apache derby impl sql conn genericlanguageconnectioncontext prepareinternalstatement unknown source more failed to start task db operation failed org proactive db databasemanagerexception db operation failed at org proactive db databasemanagerexceptionhandler handle databasemanagerexceptionhandler java at org proactive scheduler core db transactionhelper runwithouttransaction transactionhelper java at org proactive scheduler core db schedulerdbmanager runwithouttransaction schedulerdbmanager java at org proactive scheduler core db schedulerdbmanager loadtasksresults schedulerdbmanager java at org proactive scheduler core timeddotaskaction call timeddotaskaction java at org proactive scheduler core timeddotaskaction call timeddotaskaction java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util 
concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by org hibernate exception sqlgrammarexception statement too complex try rewriting the query to remove complexity eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error at org hibernate exception internal sqlexceptiontypedelegate convert sqlexceptiontypedelegate java at org hibernate exception internal standardsqlexceptionconverter convert standardsqlexceptionconverter java at org hibernate engine jdbc spi sqlexceptionhelper convert sqlexceptionhelper java at org hibernate engine jdbc spi sqlexceptionhelper convert sqlexceptionhelper java at org hibernate engine jdbc internal proxy connectionproxyhandler continueinvocation connectionproxyhandler java at org hibernate engine jdbc internal proxy abstractproxyhandler invoke abstractproxyhandler java at com sun proxy preparestatement unknown source at org hibernate engine jdbc internal statementpreparerimpl doprepare statementpreparerimpl java at org hibernate engine jdbc internal statementpreparerimpl statementpreparationtemplate preparestatement statementpreparerimpl java at org hibernate engine jdbc internal statementpreparerimpl preparequerystatement statementpreparerimpl java at org hibernate loader loader preparequerystatement loader java at org hibernate loader loader executequerystatement loader java at org hibernate loader loader doquery loader java at org hibernate loader loader doqueryandinitializenonlazycollections loader java at org hibernate loader loader dolist loader java at org hibernate loader loader dolist loader java at org hibernate loader loader listignorequerycache loader java at org hibernate loader loader list loader java at org hibernate loader hql queryloader list queryloader java at org hibernate hql internal ast querytranslatorimpl list querytranslatorimpl java at org hibernate engine query spi 
hqlqueryplan performlist hqlqueryplan java at org hibernate internal sessionimpl list sessionimpl java at org hibernate internal queryimpl list queryimpl java at org proactive scheduler core db schedulerdbmanager loadjobresult schedulerdbmanager java at org proactive scheduler core db schedulerdbmanager access schedulerdbmanager java at org proactive scheduler core db schedulerdbmanager executework schedulerdbmanager java at org proactive scheduler core db schedulerdbmanager executework schedulerdbmanager java at org proactive scheduler core db transactionhelper runwithouttransaction transactionhelper java more caused by java sql sqlsyntaxerrorexception statement too complex try rewriting the query to remove complexity eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error at org apache derby impl jdbc getsqlexception unknown source at org apache derby impl jdbc util newembedsqlexception unknown source at org apache derby impl jdbc util seenextexception unknown source at org apache derby impl jdbc transactionresourceimpl wrapinsqlexception unknown source at org apache derby impl jdbc transactionresourceimpl handleexception unknown source at org apache derby impl jdbc embedconnection handleexception unknown source at org apache derby impl jdbc connectionchild handleexception unknown source at org apache derby impl jdbc embedpreparedstatement unknown source at org apache derby impl jdbc unknown source at org apache derby impl jdbc unknown source at org apache derby impl jdbc unknown source at org apache derby jdbc newembedpreparedstatement unknown source at org apache derby impl jdbc embedconnection preparestatement unknown source at org apache derby impl jdbc embedconnection preparestatement unknown source at com mchange impl newproxyconnection preparestatement newproxyconnection java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke 
delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org hibernate engine jdbc internal proxy connectionproxyhandler continueinvocation connectionproxyhandler java more caused by java sql sqlexception statement too complex try rewriting the query to remove complexity eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error at org apache derby impl jdbc sqlexceptionfactory getsqlexception unknown source at org apache derby impl jdbc wrapargsfortransportacrossdrda unknown source more caused by java sql sqlexception java class file format limit s exceeded method code length in generated class org apache derby exe at org apache derby impl jdbc sqlexceptionfactory getsqlexception unknown source at org apache derby impl jdbc wrapargsfortransportacrossdrda unknown source at org apache derby impl jdbc getsqlexception unknown source at org apache derby impl jdbc util generatecssqlexception unknown source at org apache derby impl jdbc transactionresourceimpl wrapinsqlexception unknown source more caused by error java class file format limit s exceeded method code length in generated class org apache derby exe at org apache derby iapi error standardexception newexception unknown source at org apache derby impl services bytecode bcclass getclassbytecode unknown source at org apache derby impl services bytecode gclass getgeneratedclass unknown source at org apache derby impl sql compile expressionclassbuilder getgeneratedclass unknown source at org apache derby impl sql compile statementnode generate unknown source at org apache derby impl sql genericstatement prepminion unknown source at org apache derby impl sql genericstatement prepare unknown source at org apache derby impl sql conn genericlanguageconnectioncontext prepareinternalstatement unknown source more trying to restart task xml job xmlns xsi xmlns urn proactive jobdescriptor xsi schemalocation urn 
proactive jobdescriptor name replicate priority normal canceljobonerror false cdata result cdata runs cdata println hello cdata println results
| 0
|
3,305
| 6,401,515,927
|
IssuesEvent
|
2017-08-05 21:43:22
|
pwittchen/ReactiveNetwork
|
https://api.github.com/repos/pwittchen/ReactiveNetwork
|
closed
|
Release 0.11.0
|
release process
|
**Initial release notes**:
- `RxJava1.x` branch:
- added `WalledGardenInternetObservingStrategy` - fixes #116
- made `WalledGardenInternetObservingStrategy` a default strategy for checking Internet connectivity
- added documentation for NetworkObservingStrategy - solves #197
- added documentation for InternetObservingStrategy - solves #198
- bumped Kotlin version to 1.1.3-2
- bumped Gradle Android Tools version to 2.3.3
- bumped Retrolambda to 3.7.0
- `RxJava2.x` branch:
- added `WalledGardenInternetObservingStrategy` - fixes #116
- made `WalledGardenInternetObservingStrategy` a default strategy for checking Internet connectivity
- added documentation for NetworkObservingStrategy - solves #197
- added documentation for InternetObservingStrategy - solves #198
- fixed package name in `AndroidManifest.xml` file - solves #195
- bumped RxJava2 version to 2.1.2
- bumped Kotlin version to 1.1.3-2
- bumped Gradle Android Tools version to 2.3.3
- bumped Retrolambda to 3.7.0
- increased code coverage with unit tests
**Things to do**:
- [x] RxJava1.x branch:
- [x] update JavaDoc on `gh-pages`
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
- [x] RxJava2.x branch:
- [x] update JavaDoc on `gh-pages`
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
|
1.0
|
Release 0.11.0 - **Initial release notes**:
- `RxJava1.x` branch:
- added `WalledGardenInternetObservingStrategy` - fixes #116
- made `WalledGardenInternetObservingStrategy` a default strategy for checking Internet connectivity
- added documentation for NetworkObservingStrategy - solves #197
- added documentation for InternetObservingStrategy - solves #198
- bumped Kotlin version to 1.1.3-2
- bumped Gradle Android Tools version to 2.3.3
- bumped Retrolambda to 3.7.0
- `RxJava2.x` branch:
- added `WalledGardenInternetObservingStrategy` - fixes #116
- made `WalledGardenInternetObservingStrategy` a default strategy for checking Internet connectivity
- added documentation for NetworkObservingStrategy - solves #197
- added documentation for InternetObservingStrategy - solves #198
- fixed package name in `AndroidManifest.xml` file - solves #195
- bumped RxJava2 version to 2.1.2
- bumped Kotlin version to 1.1.3-2
- bumped Gradle Android Tools version to 2.3.3
- bumped Retrolambda to 3.7.0
- increased code coverage with unit tests
**Things to do**:
- [x] RxJava1.x branch:
- [x] update JavaDoc on `gh-pages`
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
- [x] RxJava2.x branch:
- [x] update JavaDoc on `gh-pages`
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
|
process
|
release initial release notes x branch added walledgardeninternetobservingstrategy fixes made walledgardeninternetobservingstrategy a default strategy for checking internet connectivity added documentation for networkobservingstrategy solves added documentation for internetobservingstrategy solves bumped kotlin version to bumped gradle android tools version to bumped retrolambda to x branch added walledgardeninternetobservingstrategy fixes made walledgardeninternetobservingstrategy a default strategy for checking internet connectivity added documentation for networkobservingstrategy solves added documentation for internetobservingstrategy solves fixed package name in androidmanifest xml file solves bumped version to bumped kotlin version to bumped gradle android tools version to bumped retrolambda to increased code coverage with unit tests things to do x branch update javadoc on gh pages bump library version upload archives to maven central close and release artifact on maven central update changelog md after maven sync bump library version in readme md create new github release x branch update javadoc on gh pages bump library version upload archives to maven central close and release artifact on maven central update changelog md after maven sync bump library version in readme md create new github release
| 1
|
7,802
| 10,959,865,741
|
IssuesEvent
|
2019-11-27 12:22:26
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
closed
|
Preview017 error on "prisma2 dev" or "generate" - 'C:/Program' not a Windows command
|
bug/2-confirmed kind/regression process/candidate
|
Hi! I got this error while running Preview017 on Windows.
```
C:\projetob\quoro\backend>prisma2 generate
'C:\Program' is not recognized as an internal or external command,
operable program or batch file.
Error:
Error: Generator at C:/projetob/quoro/backend/node_modules/@prisma/photon/generator-build/index.js could not start:
'C:/Program' is not recognized as an internal or external command,
operable program or batch file.
at Timeout.setTimeout [as _onTimeout] (C:/Users/RaffaelCampos/AppData/Roaming/npm/node_modules/prisma2/build/index.js:231413:28)
```
My global node_modules folder isn't in the Program Files folder.
I also tested the @alpha.353 release, same error
|
1.0
|
Preview017 error on "prisma2 dev" or "generate" - 'C:/Program' not a Windows command - Hi! I got this error while running Preview017 on Windows.
```
C:\projetob\quoro\backend>prisma2 generate
'C:\Program' is not recognized as an internal or external command,
operable program or batch file.
Error:
Error: Generator at C:/projetob/quoro/backend/node_modules/@prisma/photon/generator-build/index.js could not start:
'C:/Program' is not recognized as an internal or external command,
operable program or batch file.
at Timeout.setTimeout [as _onTimeout] (C:/Users/RaffaelCampos/AppData/Roaming/npm/node_modules/prisma2/build/index.js:231413:28)
```
My global node_modules folder isn't in the Program Files folder.
I also tested the @alpha.353 release, same error
|
process
|
error on dev or generate c program not a windows command hi i got this error while running on windows c projetob quoro backend generate c program is not recognized as an internal or external command operable program or batch file error error generator at c projetob quoro backend node modules prisma photon generator build index js could not start c program is not recognized as an internal or external command operable program or batch file at timeout settimeout c users raffaelcampos appdata roaming npm node modules build index js my node modules global folder isn t on programs and files folder i also tested the alpha release same error
| 1
|
357,269
| 25,176,355,684
|
IssuesEvent
|
2022-11-11 09:36:32
|
WR3nd3/pe
|
https://api.github.com/repos/WR3nd3/pe
|
opened
|
Inconsistent command formatting for Create
|
type.DocumentationBug severity.VeryLow
|
When comparing the 2 images below, one taken from the Creation section, and the other from the Command Summary, we see a difference in parameter format for the mt/ prefix with TIME... and TIME .


<!--session: 1668152325055-f70fb420-7bf1-466c-a38a-19069423213f-->
<!--Version: Web v3.4.4-->
|
1.0
|
Inconsistent command formatting for Create - When comparing the 2 images below, one taken from the Creation section, and the other from the Command Summary, we see a difference in parameter format for the mt/ prefix with TIME... and TIME .


<!--session: 1668152325055-f70fb420-7bf1-466c-a38a-19069423213f-->
<!--Version: Web v3.4.4-->
|
non_process
|
inconsistent command formatting for create when comparing the images below one taken from the creation section and the other from the command summary we see a difference in parameter format for the mt prefix with time and time
| 0
|
606,933
| 18,770,508,286
|
IssuesEvent
|
2021-11-06 18:56:13
|
godaddy-wordpress/coblocks
|
https://api.github.com/repos/godaddy-wordpress/coblocks
|
opened
|
Huge gap under the carousel on iPhone
|
[Type] Bug [Priority] Low
|
### Describe the bug:
I am getting this issue on the iPhone mobile version (Firefox on iOS 12).
There is a huge gap between the images and the line of smaller thumbnail images. If there are no thumbnail images, the gap appears before the next element.
This gap is not happening on the desktop version.
My settings are :
Size L
Gutter 5
Height 400 px
Remove bottom spacing
### To reproduce:
Please check https://www.filmlabs.org/technical-section/film/
### Expected behavior:
There should be no gap, as on Android.
### Screenshots:
<!-- If applicable, add screenshots to help explain your problem. -->
### Isolating the problem:
<!-- Mark completed items with an [x]. -->
- [ ] This bug happens with no other plugins activated
- [ ] This bug happens with a default WordPress theme active
- [ ] This bug happens **without** the Gutenberg plugin active
- [ ] I can reproduce this bug consistently using the steps above
### WordPress version:
5.7.3
### Gutenberg version:
<!-- if applicable -->
|
1.0
|
Huge gap under the carousel on iPhone - ### Describe the bug:
I am getting this issue on the iPhone mobile version, however (Firefox on iOS 12).
There is a huge gap between the images and the thumbnail line of smaller images. If there are no thumbnail images, the gap appears before the next element.
This gap is not happening on the desktop version.
My settings are :
Size L
Gutter 5
Height 400 px
Remove bottom spacing
### To reproduce:
Please check https://www.filmlabs.org/technical-section/film/
### Expected behavior:
There should be no gap, as on Android.
### Screenshots:
<!-- If applicable, add screenshots to help explain your problem. -->
### Isolating the problem:
<!-- Mark completed items with an [x]. -->
- [ ] This bug happens with no other plugins activated
- [ ] This bug happens with a default WordPress theme active
- [ ] This bug happens **without** the Gutenberg plugin active
- [ ] I can reproduce this bug consistently using the steps above
### WordPress version:
5.7.3
### Gutenberg version:
<!-- if applicable -->
|
non_process
|
huge gap between under the caroussel on iphone describe the bug i am getting an issue on the iphone mobile version however firefox on ios there is a huge gap between the images and the thumbnails line of smaller images if there is no thumbnail images the gap is before the next element this gap is not happening on the desktop version my settings are size l gutter height px remove bottom spacing to reproduce please check expected behavior there should be no gap like an android screenshots isolating the problem this bug happens with no other plugins activated this bug happens with a default wordpress theme active this bug happens without the gutenberg plugin active i can reproduce this bug consistently using the steps above wordpress version gutenberg version
| 0
|
355,696
| 10,583,545,347
|
IssuesEvent
|
2019-10-08 13:54:46
|
openshift/installer
|
https://api.github.com/repos/openshift/installer
|
closed
|
libvirt: should support multiple clusters on a single host
|
platform/libvirt priority/backlog
|
This is a tracker for a feature request, and the feature might already be supported — at least, I know CIDRs can be configured in the manifests, and clusters with different names can be defined in parallel and will result in different libvirt resources.
Feel free to close this if it’s known to work; otherwise I’ll continue my investigations and add notes here.
|
1.0
|
libvirt: should support multiple clusters on a single host - This is a tracker for a feature request, and the feature might already be supported — at least, I know CIDRs can be configured in the manifests, and clusters with different names can be defined in parallel and will result in different libvirt resources.
Feel free to close this if it’s known to work; otherwise I’ll continue my investigations and add notes here.
|
non_process
|
libvirt should support multiple clusters on a single host this is a tracker for a feature request and the feature might already be supported — at least i know cidrs can be configured in the manifests and clusters with different names can be defined in parallel and will result in different libvirt resources feel free to close this if it’s known to work otherwise i’ll continue my investigations and add notes here
| 0
|
44,227
| 9,553,528,698
|
IssuesEvent
|
2019-05-02 19:30:12
|
redhat-developer/vscode-java
|
https://api.github.com/repos/redhat-developer/vscode-java
|
closed
|
The quick fix label for generating getter and setter has an unnecessary ellipsis
|
bug code action
|
So if you have 2 fields without accessors, the quickfix to generate them has an ellipsis, yet no wizard is shown as you'd expect from the ellipsis.
<img width="378" alt="Screen Shot 2019-04-29 at 7 05 30 PM" src="https://user-images.githubusercontent.com/148698/56932428-c9dd7400-6ab1-11e9-86c2-5c2e5714ea88.png">
|
1.0
|
The quick fix label for generating getter and setter has an unnecessary ellipsis - So if you have 2 fields without accessors, the quickfix to generate them has an ellipsis, yet no wizard is shown as you'd expect from the ellipsis.
<img width="378" alt="Screen Shot 2019-04-29 at 7 05 30 PM" src="https://user-images.githubusercontent.com/148698/56932428-c9dd7400-6ab1-11e9-86c2-5c2e5714ea88.png">
|
non_process
|
the quick fix label for generating getter and setter has an unnecessary ellipsis so if you have fields without accessors the quickfix to generate them has an ellipsis yet no wizard is shown as you d expect from the ellipsis img width alt screen shot at pm src
| 0
|
46,698
| 19,412,971,744
|
IssuesEvent
|
2021-12-20 11:39:39
|
hashicorp/terraform-provider-azurerm
|
https://api.github.com/repos/hashicorp/terraform-provider-azurerm
|
closed
|
Consider splitting `azurerm_application_gateway` resource into several smaller ones
|
enhancement service/application-gateway upstream-microsoft
|
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
<!--- Please leave a helpful description of the feature request here. --->
I've been working with Application Gateway lately and wanted to defer some responsibility of configuring parts of Application Gateway to other teams. Right now, this is only possible by making the app gateway resource a giant, shared one, which everyone can change. I'd like to split the app gateway into several sub-resources, similar to what happened to IotHub resource: https://github.com/hashicorp/terraform-provider-azurerm/issues/3303
I'd like to be able to define separate resources for backends, frontends, routing rules, health probes. This would reverse the dependencies between the app gateway and backend services, making the tf code more modular. Dependencies between sub-resources would also be much clearer. Now, they are only ensured by the consistent naming of the sections of app_gateway configuration. Splitting the resource would make those dependencies explicit and harder to get wrong.
Deployments would also be independent in a scenario where a single app gateway is shared by multiple teams. The teams can reconfigure the backends or routing as needed and apply their part of the infrastructure change without affecting other teams.
I am willing to gradually implement the resources, but want to get an OK/not OK from the maintainers.
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* azurerm_application_gateway
* azurerm_application_gateway_backend_address_pool
* azurerm_application_gateway_backend_http_settings
* azurerm_application_gateway_probe
* azurerm_application_http_listener
* azurerm_application_request_routing_rule
* azurerm_application_frontend_port
* azurerm_application_frontend_ip_configuration
* azurerm_application_frontend_ssl_certificate
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
Example is adopted from [the one provided in `azurerm_application_gateway` documentation](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/application_gateway#example-usage)
```hcl
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_virtual_network" "example" {
name = "example-network"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
address_space = ["10.254.0.0/16"]
}
resource "azurerm_subnet" "frontend" {
name = "frontend"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.254.0.0/24"]
}
resource "azurerm_subnet" "backend" {
name = "backend"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.254.2.0/24"]
}
resource "azurerm_public_ip" "example" {
name = "example-pip"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
allocation_method = "Dynamic"
}
resource "azurerm_application_gateway" "network" {
name = "example-appgateway"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
sku {
name = "Standard_Small"
tier = "Standard"
capacity = 2
}
gateway_ip_configuration {
name = "my-gateway-ip-configuration"
subnet_id = azurerm_subnet.frontend.id
}
}
resource "application_gateway_frontend_port" "http" {
application_gateway_name = azurerm_application_gateway.network.name
resource_group_name = azurerm_application_gateway.network.resource_group_name
name = "http"
port = 80
}
resource "application_gateway_frontend_ip_configuration" "feip" {
application_gateway_name = azurerm_application_gateway.network.name
resource_group_name = azurerm_application_gateway.network.resource_group_name
name = "${azurerm_virtual_network.example.name}-feip"
public_ip_address_id = azurerm_public_ip.example.id
}
resource "application_gateway_http_listener" "httplstn" {
application_gateway_name = azurerm_application_gateway.network.name
resource_group_name = azurerm_application_gateway.network.resource_group_name
name = "${azurerm_virtual_network.example.name}-httplstn"
frontend_ip_configuration_name = application_gateway_frontend_port.feip.name
frontend_port_name = application_gateway_frontend_port.http.name
protocol = "Http"
}
resource "application_gateway_backend_address_pool" "beap" {
application_gateway_name = azurerm_application_gateway.network.name
resource_group_name = azurerm_application_gateway.network.resource_group_name
name = "${azurerm_virtual_network.example.name}-beap"
}
resource "application_gateway_backend_http_settings" "be-htst" {
application_gateway_name = azurerm_application_gateway.network.name
resource_group_name = azurerm_application_gateway.network.resource_group_name
name = "${azurerm_virtual_network.example.name}-be-htst"
cookie_based_affinity = "Disabled"
path = "/path1/"
port = 80
protocol = "Http"
request_timeout = 60
}
resource "application_gateway_request_routing_rule" "rqrt" {
application_gateway_name = azurerm_application_gateway.network.name
resource_group_name = azurerm_application_gateway.network.resource_group_name
name = "${azurerm_virtual_network.example.name}-rqrt"
rule_type = "Basic"
http_listener_name = application_gateway_http_listener.httplstn.name
backend_address_pool_name = application_gateway_backend_address_pool.beap.name
backend_http_settings_name = application_gateway_backend_http_settings.be-htst.name
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://azure.microsoft.com/en-us/roadmap/virtual-network-service-endpoint-for-azure-cosmos-db/
--->
* #3303
|
1.0
|
Consider splitting `azurerm_application_gateway` resource into several smaller ones - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
<!--- Please leave a helpful description of the feature request here. --->
I've been working with Application Gateway lately and wanted to defer some responsibility of configuring parts of Application Gateway to other teams. Right now, this is only possible by making the app gateway resource a giant, shared one, which everyone can change. I'd like to split the app gateway into several sub-resources, similar to what happened to IotHub resource: https://github.com/hashicorp/terraform-provider-azurerm/issues/3303
I'd like to be able to define separate resources for backends, frontends, routing rules, health probes. This would reverse the dependencies between the app gateway and backend services, making the tf code more modular. Dependencies between sub-resources would also be much clearer. Now, they are only ensured by the consistent naming of the sections of app_gateway configuration. Splitting the resource would make those dependencies explicit and harder to get wrong.
Deployments would also be independent in a scenario where a single app gateway is shared by multiple teams. The teams can reconfigure the backends or routing as needed and apply their part of the infrastructure change without affecting other teams.
I am willing to gradually implement the resources, but want to get an OK/not OK from the maintainers.
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* azurerm_application_gateway
* azurerm_application_gateway_backend_address_pool
* azurerm_application_gateway_backend_http_settings
* azurerm_application_gateway_probe
* azurerm_application_http_listener
* azurerm_application_request_routing_rule
* azurerm_application_frontend_port
* azurerm_application_frontend_ip_configuration
* azurerm_application_frontend_ssl_certificate
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
Example is adopted from [the one provided in `azurerm_application_gateway` documentation](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/application_gateway#example-usage)
```hcl
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_virtual_network" "example" {
name = "example-network"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
address_space = ["10.254.0.0/16"]
}
resource "azurerm_subnet" "frontend" {
name = "frontend"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.254.0.0/24"]
}
resource "azurerm_subnet" "backend" {
name = "backend"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.254.2.0/24"]
}
resource "azurerm_public_ip" "example" {
name = "example-pip"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
allocation_method = "Dynamic"
}
resource "azurerm_application_gateway" "network" {
name = "example-appgateway"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
sku {
name = "Standard_Small"
tier = "Standard"
capacity = 2
}
gateway_ip_configuration {
name = "my-gateway-ip-configuration"
subnet_id = azurerm_subnet.frontend.id
}
}
resource "application_gateway_frontend_port" "http" {
application_gateway_name = azurerm_application_gateway.network.name
resource_group_name = azurerm_application_gateway.network.resource_group_name
name = "http"
port = 80
}
resource "application_gateway_frontend_ip_configuration" "feip" {
application_gateway_name = azurerm_application_gateway.network.name
resource_group_name = azurerm_application_gateway.network.resource_group_name
name = "${azurerm_virtual_network.example.name}-feip"
public_ip_address_id = azurerm_public_ip.example.id
}
resource "application_gateway_http_listener" "httplstn" {
application_gateway_name = azurerm_application_gateway.network.name
resource_group_name = azurerm_application_gateway.network.resource_group_name
name = "${azurerm_virtual_network.example.name}-httplstn"
frontend_ip_configuration_name = application_gateway_frontend_port.feip.name
frontend_port_name = application_gateway_frontend_port.http.name
protocol = "Http"
}
resource "application_gateway_backend_address_pool" "beap" {
application_gateway_name = azurerm_application_gateway.network.name
resource_group_name = azurerm_application_gateway.network.resource_group_name
name = "${azurerm_virtual_network.example.name}-beap"
}
resource "application_gateway_backend_http_settings" "be-htst" {
application_gateway_name = azurerm_application_gateway.network.name
resource_group_name = azurerm_application_gateway.network.resource_group_name
name = "${azurerm_virtual_network.example.name}-be-htst"
cookie_based_affinity = "Disabled"
path = "/path1/"
port = 80
protocol = "Http"
request_timeout = 60
}
resource "application_gateway_request_routing_rule" "rqrt" {
application_gateway_name = azurerm_application_gateway.network.name
resource_group_name = azurerm_application_gateway.network.resource_group_name
name = "${azurerm_virtual_network.example.name}-rqrt"
rule_type = "Basic"
http_listener_name = application_gateway_http_listener.httplstn.name
backend_address_pool_name = application_gateway_backend_address_pool.beap.name
backend_http_settings_name = application_gateway_backend_http_settings.be-htst.name
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://azure.microsoft.com/en-us/roadmap/virtual-network-service-endpoint-for-azure-cosmos-db/
--->
* #3303
|
non_process
|
consider splitting azurerm application gateway resource into several smaller ones community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description i ve been working with application gateway lately and wanted to defer some responsibility of configuring parts of application gateway to other teams right now this is only possible by making the app gateway resource a giant shared one which everyone can change i d like to split the app gateway into several sub resources similar to what happened to iothub resource i d like to be able to define separate resources for backends frontends routing rules health probes this would reverse the dependencies between the app gateway and backend services making the tf code more modular dependencies between sub resources would also be much clearer now they are only ensured by the consistent naming of the sections of app gateway configuration splitting the resource would make those dependencies explicit and easier to get wrong deployments would also be independent in a scenario where a single app gateway is shared by multiple teams the teams can reconfigure the backends or routing as needed and apply their part of the infrastructure change without affecting other teams i am willing to gradually implement the resources but want to get an ok not ok from the maintainers new or affected resource s azurerm application gateway azurerm application gateway backend address pool azurerm application gateway backend http settings azurerm application gateway probe azurerm application http listener azurerm application request routing rule azurerm application frontend port azurerm application frontend ip configuration azurerm application 
frontend ssl certificate potential terraform configuration example is adopted from hcl resource azurerm resource group example name example resources location west europe resource azurerm virtual network example name example network resource group name azurerm resource group example name location azurerm resource group example location address space resource azurerm subnet frontend name frontend resource group name azurerm resource group example name virtual network name azurerm virtual network example name address prefixes resource azurerm subnet backend name backend resource group name azurerm resource group example name virtual network name azurerm virtual network example name address prefixes resource azurerm public ip example name example pip resource group name azurerm resource group example name location azurerm resource group example location allocation method dynamic resource azurerm application gateway network name example appgateway resource group name azurerm resource group example name location azurerm resource group example location sku name standard small tier standard capacity gateway ip configuration name my gateway ip configuration subnet id azurerm subnet frontend id resource application gateway frontend port http application gateway name azurerm application gateway network name resource group name azurerm application gateway network resource group name name http port resource application gateway frontend ip configuration feip application gateway name azurerm application gateway network name resource group name azurerm application gateway network resource group name name azurerm virtual network example name feip public ip address id azurerm public ip example id resource application gateway http listener httplstn application gateway name azurerm application gateway network name resource group name azurerm application gateway network resource group name name azurerm virtual network example name httplstn frontend ip configuration name application 
gateway frontend port feip name frontend port name application gateway frontend port http name protocol http resource application gateway backend address pool beap application gateway name azurerm application gateway network name resource group name azurerm application gateway network resource group name name azurerm virtual network example name beap resource application gateway backend http settings be htst application gateway name azurerm application gateway network name resource group name azurerm application gateway network resource group name name azurerm virtual network example name be htst cookie based affinity disabled path port protocol http request timeout resource application gateway request routing rule rqrt application gateway name azurerm application gateway network name resource group name azurerm application gateway network resource group name name azurerm virtual network example name rqrt rule type basic http listener name application gateway http listener httplstn name backend address pool name application gateway backend address pool beap name backend http settings name application gateway backend http settings be htst name references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation for example
| 0
|
295
| 2,732,224,003
|
IssuesEvent
|
2015-04-17 03:04:22
|
mitchellh/packer
|
https://api.github.com/repos/mitchellh/packer
|
closed
|
Packer 0.7.2 is failing on response from vagrantcloud post-processor
|
bug post-processor/vagrant
|
When trying to create a new Virtualbox and push it to vagrant-cloud (note: this worked 20 days ago), I am getting the following:
```
[...]
==> virtualbox-iso: Unregistering and deleting virtual machine...
==> virtualbox-iso: Running post-processor: vagrant
==> virtualbox-iso (vagrant): Creating Vagrant box for 'virtualbox' provider
virtualbox-iso (vagrant): Copying from artifact: output-virtualbox-iso/packer-virtualbox-iso-1418095977-disk1.vmdk
virtualbox-iso (vagrant): Copying from artifact: output-virtualbox-iso/packer-virtualbox-iso-1418095977.ovf
virtualbox-iso (vagrant): Renaming the OVF to box.ovf...
virtualbox-iso (vagrant): Compressing: Vagrantfile
virtualbox-iso (vagrant): Compressing: box.ovf
virtualbox-iso (vagrant): Compressing: metadata.json
virtualbox-iso (vagrant): Compressing: packer-virtualbox-iso-1418095977-disk1.vmdk
==> virtualbox-iso: Running post-processor: vagrant-cloud
==> virtualbox-iso (vagrant-cloud): Verifying box is accessible: waysact/trusty64
Build 'virtualbox-iso' errored: 1 error(s) occurred:
* Post-processor failed: Error parsing box response: json: cannot unmarshal string into Go value of type uint
```
This is the configuration for our packer build
```
"post-processors": [
[{
"type": "vagrant",
"compression_level": 9,
"output": "{{user `output_dir`}}/{{user `os_codename`}}{{user `arch`}}.box",
"only": [
"virtualbox-iso"
]
},
{
"type": "vagrant-cloud",
"box_tag": "waysact/{{user `os_codename`}}{{user `arch`}}",
"access_token": "{{user `cloud_token`}}",
"version": "{{user `version`}}",
"only": [
"virtualbox-iso"
]
}]
```
We have a few versions already on vagrant-cloud, and pushing a box to vagrant cloud was working up until 20 days ago (roughly when I did the last `packer build`). The only thing that has changed in between is the `{{user 'version'}}` which is incremented (since we do `apt-get upgrade` etc ).
|
1.0
|
Packer 0.7.2 is failing on response from vagrantcloud post-processor - When trying to create a new Virtualbox and push it to vagrant-cloud (note: this worked 20 days ago), I am getting the following:
```
[...]
==> virtualbox-iso: Unregistering and deleting virtual machine...
==> virtualbox-iso: Running post-processor: vagrant
==> virtualbox-iso (vagrant): Creating Vagrant box for 'virtualbox' provider
virtualbox-iso (vagrant): Copying from artifact: output-virtualbox-iso/packer-virtualbox-iso-1418095977-disk1.vmdk
virtualbox-iso (vagrant): Copying from artifact: output-virtualbox-iso/packer-virtualbox-iso-1418095977.ovf
virtualbox-iso (vagrant): Renaming the OVF to box.ovf...
virtualbox-iso (vagrant): Compressing: Vagrantfile
virtualbox-iso (vagrant): Compressing: box.ovf
virtualbox-iso (vagrant): Compressing: metadata.json
virtualbox-iso (vagrant): Compressing: packer-virtualbox-iso-1418095977-disk1.vmdk
==> virtualbox-iso: Running post-processor: vagrant-cloud
==> virtualbox-iso (vagrant-cloud): Verifying box is accessible: waysact/trusty64
Build 'virtualbox-iso' errored: 1 error(s) occurred:
* Post-processor failed: Error parsing box response: json: cannot unmarshal string into Go value of type uint
```
This is the configuration for our packer build
```
"post-processors": [
[{
"type": "vagrant",
"compression_level": 9,
"output": "{{user `output_dir`}}/{{user `os_codename`}}{{user `arch`}}.box",
"only": [
"virtualbox-iso"
]
},
{
"type": "vagrant-cloud",
"box_tag": "waysact/{{user `os_codename`}}{{user `arch`}}",
"access_token": "{{user `cloud_token`}}",
"version": "{{user `version`}}",
"only": [
"virtualbox-iso"
]
}]
```
We have a few versions already on vagrant-cloud, and pushing a box to vagrant cloud was working up until 20 days ago (roughly when I did the last `packer build`). The only thing that has changed in between is the `{{user 'version'}}` which is incremented (since we do `apt-get upgrade` etc ).
|
process
|
packer is failing on response from vagrantcloud post processor when trying to create a new virtualbox and push it to vagrant cloud note this worked days ago i am getting the following virtualbox iso unregistering and deleting virtual machine virtualbox iso running post processor vagrant virtualbox iso vagrant creating vagrant box for virtualbox provider virtualbox iso vagrant copying from artifact output virtualbox iso packer virtualbox iso vmdk virtualbox iso vagrant copying from artifact output virtualbox iso packer virtualbox iso ovf virtualbox iso vagrant renaming the ovf to box ovf virtualbox iso vagrant compressing vagrantfile virtualbox iso vagrant compressing box ovf virtualbox iso vagrant compressing metadata json virtualbox iso vagrant compressing packer virtualbox iso vmdk virtualbox iso running post processor vagrant cloud virtualbox iso vagrant cloud verifying box is accessible waysact build virtualbox iso errored error s occurred post processor failed error parsing box response json cannot unmarshal string into go value of type uint this is the configuration for our packer build post processors type vagrant compression level output user output dir user os codename user arch box only virtualbox iso type vagrant cloud box tag waysact user os codename user arch access token user cloud token version user version only virtualbox iso we have a few versions already on vagrant cloud and pushing a box to vagrant cloud was working up until days ago roughly when i did the last packer build the only thing that has changed in between is the user version which is incremented since we do apt get upgrade etc
| 1
|
181,775
| 14,074,560,087
|
IssuesEvent
|
2020-11-04 07:33:30
|
OpenMined/PySyft
|
https://api.github.com/repos/OpenMined/PySyft
|
closed
|
Add torch.Tensor.q_per_channel_axis to allowlist and test suite
|
Priority: 2 - High :cold_sweat: Severity: 3 - Medium :unamused: Status: Available :wave: Type: New Feature :heavy_plus_sign: Type: Testing :test_tube:
|
# Description
This issue is a part of Syft 0.3.0 Epic 2: https://github.com/OpenMined/PySyft/issues/3696
In this issue, you will be adding support for remote execution of the torch.Tensor.q_per_channel_axis
method or property. This might be a really small project (literally a one-liner) or
it might require adding significant functionality to PySyft OR to the testing suite
in order to make sure the feature is both functional and tested.
## Step 0: Run tests and ./scripts/pre_commit.sh
Before you get started with this project, let's make sure you have everything building and testing
correctly. Clone the codebase and run:
```pip uninstall syft```
followed by
```pip install -e .```
Then run the pre-commit file (which will also run the tests)
```./scripts/pre_commit.sh```
If all of these tests pass, continue on. If not, make sure you have all the
dependencies in requirements.txt installed, etc.
## Step 1: Uncomment your method in the allowlist.py file
Inside [allowlist.py](https://github.com/OpenMined/PySyft/blob/syft_0.3.0/src/syft/lib/torch/allowlist.py) you will find a huge dictionary of methods. Find your method and uncomment the line it's on. At the time
of writing this Issue (WARNING: THIS MAY HAVE CHANGED) the dictionary maps from the
string name of the method (in your case 'torch.Tensor.q_per_channel_axis') to the string representation
of the type the method returns.
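As a hedged illustration (the entry below is a sketch; the exact key and value strings in the real allowlist.py may differ), uncommenting the method would leave an entry along these lines:

```python
# Hypothetical excerpt of allowlist.py; the real file contains hundreds
# of entries mapping method-name strings to return-type strings.
allowlist = {
    # "torch.Tensor.q_per_channel_zero_points": "torch.Tensor",  # still commented out
    "torch.Tensor.q_per_channel_axis": "int",  # uncommented for this issue
}
```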
## Step 2: Run Unit Tests
Run the following:
```python setup.py test```
And wait to see if some of the tests fail. Why might the tests fail now? I'm so glad you asked!
https://github.com/OpenMined/PySyft/blob/syft_0.3.0/tests/syft/lib/torch/tensor/tensor_remote_method_api_suite_test.py
In this file you'll find the torch method test suite. It AUTOMATICALLY loads all methods
from the allowlist.py file you modified in the previous step. It attempts to test them.
# Step 3: If you get a Failing Test
If you get a failing test, this could be for one of a few reasons:
### Reason 1 - The testing suite passed in non-compatible arguments
The testing suite is pretty dumb. It literally just tries permutations of possible
arguments against every method on torch tensors. So, if one of those permutations
doesn't work for your method (aka... perhaps it tries to call your method without
any arguments but torch.Tensor.q_per_channel_axis actually requires some) then the test will
fail if the error hasn't been seen before.
If this happens - don't worry! Just look inside the only test in that file and look
for the huge lists of error types to ignore. Add your error to the list and keep
going!!!
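The ignore list referred to here might be shaped something like this (names are invented for illustration; check the actual test file for the real variable and structure):

```python
# Illustrative shape of an error-ignore list in the test suite (the real
# test file may name and organize this differently).
IGNORED_ERRORS = (
    RuntimeError,  # e.g. an argument permutation not valid for the method
    TypeError,     # e.g. the method called with wrong argument types
)

def should_skip(exc: Exception) -> bool:
    # A test run that raises one of these errors is skipped, not failed.
    return isinstance(exc, IGNORED_ERRORS)
```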
*WARNING:* make sure that the testing suite actually tests your method via remote
execution once you've gotten all the tests passing. Aka - if the testing suite
doesn't have ANY matching argument permutations for your method, then you're
literally creating a bunch of unit tests that do absolutely nothing. If this is the
case, then ADD MORE ARGUMENT TYPES TO THE TESTING SUITE so that your argument
gets run via remote execution. DO NOT CLOSE THIS ISSUE until you can verify that
torch.Tensor.q_per_channel_axis is actually executed remotely inside of a unit test (and not
skipped). Aka - at least one of the test_all_allowlisted_tensor_methods_work_remotely_on_all_types
unit tests with your method should run ALL THE WAY TO THE END (instead of skipping
the last part.)
*Note:* adding another argument type might require some serialization work if
we don't support arguments of that type yet. If so, this is your job to add it
to the protobuf files in order to close this issue!
### Reason 2 - torch.Tensor.q_per_channel_axis returns a non-supported type
If this happens, you've got a little bit of work in front of you. We don't have
pointer objects to very many remote object types. So, if your method returns anything
other than a single tensor, you probably need to add support for the type it returns
(Such as a bool, None, int, or other types).
*IMPORTANT:* do NOT return the value itself to the end user!!! Return a pointer object
to that type!
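The pointer idea can be sketched conceptually like this. It is a toy illustration of "return a handle, not the value", not PySyft's actual Pointer class:

```python
# Toy sketch of "return a pointer, not the value": the result stays in a
# remote object store and the caller only receives an id-carrying handle.
class Pointer:
    def __init__(self, obj_id: str, location: str):
        self.obj_id = obj_id
        self.location = location

def remote_method_call(store: dict, obj_id: str) -> Pointer:
    # The computed result is written into the remote store; only a
    # Pointer to it travels back to the caller.
    result_id = obj_id + "/result"
    store[result_id] = ("computed from", store[obj_id])  # pretend work
    return Pointer(result_id, "remote-worker")
```

Calling code would then resolve the pointer explicitly (for example via a separate `get()` request) rather than receiving the raw value.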
*NOTE:* at the time of writing - there are several core pieces of Syft not yet working
to allow you to return any type other than a torch tensor. If you're not comfortable
investigating what those might be - skip this issue and try again later once
someone else has solved these issues.
### Reason 3 - There's something else broken
Chase those stack traces! Talk to friends in Slack. Look at how other methods are supported.
This is a challenging project in a fast moving codebase!
And don't forget - if this project seems too complex - there are plenty of others that
might be easier.
|
2.0
|
Add torch.Tensor.q_per_channel_axis to allowlist and test suite -
# Description
This issue is a part of Syft 0.3.0 Epic 2: https://github.com/OpenMined/PySyft/issues/3696
In this issue, you will be adding support for remote execution of the torch.Tensor.q_per_channel_axis
method or property. This might be a really small project (literally a one-liner) or
it might require adding significant functionality to PySyft OR to the testing suite
in order to make sure the feature is both functional and tested.
## Step 0: Run tests and ./scripts/pre_commit.sh
Before you get started with this project, let's make sure you have everything building and testing
correctly. Clone the codebase and run:
```pip uninstall syft```
followed by
```pip install -e .```
Then run the pre-commit file (which will also run the tests)
```./scripts/pre_commit.sh```
If all of these tests pass, continue on. If not, make sure you have all the
dependencies in requirements.txt installed, etc.
## Step 1: Uncomment your method in the allowlist.py file
Inside [allowlist.py](https://github.com/OpenMined/PySyft/blob/syft_0.3.0/src/syft/lib/torch/allowlist.py) you will find a huge dictionary of methods. Find your method and uncomment the line it's on. At the time
of writing this Issue (WARNING: THIS MAY HAVE CHANGED) the dictionary maps from the
string name of the method (in your case 'torch.Tensor.q_per_channel_axis') to the string representation
of the type the method returns.
## Step 2: Run Unit Tests
Run the following:
```python setup.py test```
And wait to see if some of the tests fail. Why might the tests fail now? I'm so glad you asked!
https://github.com/OpenMined/PySyft/blob/syft_0.3.0/tests/syft/lib/torch/tensor/tensor_remote_method_api_suite_test.py
In this file you'll find the torch method test suite. It AUTOMATICALLY loads all methods
from the allowlist.py file you modified in the previous step. It attempts to test them.
# Step 3: If you get a Failing Test
If you get a failing test, this could be for one of a few reasons:
### Reason 1 - The testing suite passed in non-compatible arguments
The testing suite is pretty dumb. It literally just has a permutation of possible
arguments to pass into every method on torch tensors. So, if one of those permutations
doesn't work for your method (aka... perhaps it tries to call your method without
any arguments but torch.Tensor.q_per_channel_axis actually requires some) then the test will
fail if the error hasn't been seen before.
If this happens - don't worry! Just look inside the only test in that file and look
for the huge lists of error types to ignore. Add your error to the list and keep
going!!!
*WARNING:* make sure that the testing suite actually tests your method via remote
execution once you've gotten all the tests passing. Aka - if the testing suite
doesn't have ANY matching argument permutations for your method, then you're
literally creating a bunch of unit tests that do absolutely nothing. If this is the
case, then ADD MORE ARGUMENT TYPES TO THE TESTING SUITE so that your argument
gets run via remote execution. DO NOT CLOSE THIS ISSUE until you can verify that
torch.Tensor.q_per_channel_axis is actually executed remotely inside of a unit test (and not
skipped). Aka - at least one of the test_all_allowlisted_tensor_methods_work_remotely_on_all_types
unit tests with your method should run ALL THE WAY TO THE END (instead of skipping
the last part.)
*Note:* adding another argument type might require some serialization work if
we don't support arguments of that type yet. If so, this is your job to add it
to the protobuf files in order to close this issue!
### Reason 2 - torch.Tensor.q_per_channel_axis returns a non-supported type
If this happens, you've got a little bit of work in front of you. We don't have
pointer objects to very many remote object types. So, if your method returns anything
other than a single tensor, you probably need to add support for the type it returns
(Such as a bool, None, int, or other types).
*IMPORTANT:* do NOT return the value itself to the end user!!! Return a pointer object
to that type!
*NOTE:* at the time of writing - there are several core pieces of Syft not yet working
to allow you to return any type other than a torch tensor. If you're not comfortable
investigating what those might be - skip this issue and try again later once
someone else has solved these issues.
### Reason 3 - There's something else broken
Chase those stack traces! Talk to friends in Slack. Look at how other methods are supported.
This is a challenging project in a fast moving codebase!
And don't forget - if this project seems too complex - there are plenty of others that
might be easier.
|
non_process
|
add torch tensor q per channel axis to allowlist and test suite description this issue is a part of syft epic in this issue you will be adding support for remote execution of the torch tensor q per channel axis method or property this might be a really small project literally a one liner or it might require adding significant functionality to pysyft or to the testing suite in order to make sure the feature is both functional and tested step run tests and scripts pre commit sh before you get started with this project let s make sure you have everything building and testing correctly clone the codebase and run pip uninstall syft followed by pip install e then run the pre commit file which will also run the tests scripts pre commit sh if all of these tests pass continue on if not make sure you have all the dependencies in requirements txt installed etc step uncomment your method in the allowlist py file inside you will find a huge dictionary of methods find your method and uncomment the line its on at the time of writing this issue warning this may have changed the dictionary maps from the string name of the method in your case torch tensor q per channel axis to the string representation of the type the method returns step run unit tests run the following python setup py test and wait to see if some of the tests fail why might the tests fail now i m so glad you asked in this file you ll find the torch method test suite it automatically loads all methods from the allowlist py file you modified in the previous step it attempts to test them step if you get a failing test if you get a failing test this could be for one of a few reasons reason the testing suite passed in non compatible arguments the testing suite is pretty dumb it literally just has a permutation of possible arguments to pass into every method on torch tensors so if one of those permutations doesn t work for your method aka perhaps it tries to call your method without any arguments but torch tensor q per 
channel axis actually requires some then the test will fail if the error hasn t been seen before if this happens don t worry just look inside the only test in that file and look for the huge lists of error types to ignore add your error to the list and keep going warning make sure that the testing suite actually tests your method via remote execution once you ve gotten all the tests passing aka if the testing suite doesn t have any matching argument permutations for your method then you re literally creating a bunch of unit tests that do absolutely nothing if this is the case then add more argument types to the testing suite so that your argument gets run via remote execution do not close this issue until you can verify that torch tensor q per channel axis is actually executed remotely inside of a unit tests and not skipped aka at least one of the test all allowlisted tensor methods work remotely on all types unit tests with your method should run all the way to the end instead of skipping the last part note adding another argument type might require some serialization work if we don t support arguments of that type yet if so this is your job to add it to the protobuf files in order to close this issue reason torch tensor q per channel axis returns a non supported type if this happens you ve got a little bit of work in front of you we don t have pointer objects to very many remote object types so if your method returns anything other than a single tensor you probably need to add support for the type it returns such as a bool none int or other types important do not return the value itself to the end user return a pointer object to that type note at the time of writing there are several core pieces of syft not yet working to allow you to return any type other than a torch tensor if you re not comfortable investigating what those might be skip this issue and try again later once someone else has solved these issues reason there s something else broken chase those 
stack traces talk to friends in slack look at how other methods are supported this is a challenging project in a fast moving codebase and don t forget if this project seems to complex there are plenty of others that might be easier
| 0
|
350,234
| 24,974,323,562
|
IssuesEvent
|
2022-11-02 06:02:13
|
cse110-fa22-group28/cse110-fa22-group28
|
https://api.github.com/repos/cse110-fa22-group28/cse110-fa22-group28
|
closed
|
Roadmap and Pitch Update
|
documentation
|
# Administrative or Organizational Tasks
What is the purpose of this task?
To plan out the project deadlines and compile all discussions into the finalized pitch document.
Steps to complete the task:
- [x] Statement of Purpose for Pitch
- [x] Goals for the product in the pitch document
- [x] Create a milestone board for sprint planning
- [x] Finish up sprint planning
- [x] Summarize sprint details in roadmap
|
1.0
|
Roadmap and Pitch Update - # Administrative or Organizational Tasks
What is the purpose of this task?
To plan out the project deadlines and compile all discussions into the finalized pitch document.
Steps to complete the task:
- [x] Statement of Purpose for Pitch
- [x] Goals for the product in the pitch document
- [x] Create a milestone board for sprint planning
- [x] Finish up sprint planning
- [x] Summarize sprint details in roadmap
|
non_process
|
roadmap and pitch update administrative or organizational tasks what is the purpose of this task to plan out the project deadlines and compile all discussions into the finalized pitch document steps to complete the task statement of purpose for pitch goals for the product in the pitch document create a milestone board for sprint planning finish up sprint planning summarize sprint details in roadmap
| 0
|
180,256
| 6,647,688,088
|
IssuesEvent
|
2017-09-28 05:59:18
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
services.gst.gov.in - site is not usable
|
browser-firefox-mobile priority-normal status-needstriage
|
<!-- @browser: Firefox Mobile 58.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:58.0) Gecko/58.0 Firefox/58.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://services.gst.gov.in/services/login
**Browser / Version**: Firefox Mobile 58.0
**Operating System**: Android 7.0
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: the website does not let you login. on desktop or mobile
**Steps to Reproduce**:
Gst.gov.in
The website works in all other browsers
[](https://webcompat.com/uploads/2017/9/ba99f4fa-ed10-4654-8917-8867bb095525.jpg)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
services.gst.gov.in - site is not usable - <!-- @browser: Firefox Mobile 58.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:58.0) Gecko/58.0 Firefox/58.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://services.gst.gov.in/services/login
**Browser / Version**: Firefox Mobile 58.0
**Operating System**: Android 7.0
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: the website does not let you login. on desktop or mobile
**Steps to Reproduce**:
Gst.gov.in
The website works in all other browsers
[](https://webcompat.com/uploads/2017/9/ba99f4fa-ed10-4654-8917-8867bb095525.jpg)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
services gst gov in site is not usable url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description the website does not let you login on desktop or mobile steps to reproduce gst gov in the website works in all other websites from with ❤️
| 0
|
20,158
| 26,713,313,499
|
IssuesEvent
|
2023-01-28 06:28:22
|
rusefi/rusefi_documentation
|
https://api.github.com/repos/rusefi/rusefi_documentation
|
closed
|
Proposal: to avoid "back and forth" "close issue" should be done only by person that opened issue
|
wiki location & process change feedback requested
|
There is the old saying:
**Only the customer defines a job well done**
I've noticed on several occasions that issues had been only partially fixed but the related issue was closed.
In line with quoted old saying
I'm suggesting that we make the rule:
**"close issue" should be done only by person that opened issue**
Benefits:
this should avoid "back and forth" on the status of an issue because the person opening an issue has the time to check on the success or failure of a fix. This person will have to respond within a given time (2 weeks seems adequate), otherwise the issue will be automatically closed.
If we can agree on this proposal, I'll enhance the contribution guideline and will see if the process can be automated.
|
1.0
|
Proposal: to avoid "back and forth" "close issue" should be done only by person that opened issue - There is the old saying:
**Only the customer defines a job well done**
I've noticed on several occasions that issues had been only partially fixed but the related issue was closed.
In line with quoted old saying
I'm suggesting that we make the rule:
**"close issue" should be done only by person that opened issue**
Benefits:
this should avoid "back and forth" on the status of an issue because the person opening an issue has the time to check on the success or failure of a fix. This person will have to respond within a given time (2 weeks seems adequate), otherwise the issue will be automatically closed.
If we can agree on this proposal, I'll enhance the contribution guideline and will see if the process can be automated.
|
process
|
proposal to avoid back and forth close issue should be done only by person that opened issue there is the old saying only the customer defines a job well done i ve noticed on several occasions that issues had been only partially fixes but the related issued was closed in line with quoted old saying i m suggestion to make the rule close issue should be done only by person that opened issue benefits this should avoid back and forth on status of issue because the person opening an issue has the time to check on success or failure of a fix this person will have to respond within a given time weeks seems adequate otherwise issue will be automatically closed if we can agree on this proposal i ll enhance the contribution guideline and will see if the process can be automated
| 1
|
180,477
| 30,508,492,139
|
IssuesEvent
|
2023-07-18 18:48:42
|
GCTC-NTGC/gc-digital-talent
|
https://api.github.com/repos/GCTC-NTGC/gc-digital-talent
|
closed
|
Increase table scroll affordance when table width exceeds screen real estate
|
feature blocked: design
|
<sup>ℹ️ [Figma (root)][figma] | [Sitemap][sitemap]</sup>
<sup>_This initial comment is collaborative and open to modification by all._</sup>
## Description
Certain tables have a tendency to exceed the available width of the viewport, and this creates a horizontal scrollbar that provides access to the rest of the table. It's not immediately obvious in some cases that this is possible, so the intent is to add a visual indicator to tables when a scrollbar is available that will make it clearer.
### Optional ideas that would benefit from design/dev feedback
- adding visible helper text?
- providing context on the SHIFT + scroll keyboard shortcut? (not sure if this is universally applicable across OSes / browsers)
- adding an icon?
## Screenshot: Prototype

## Acceptance Criteria
- [ ] a shadowed pseudo/child element should appear on the right-hand side of the table when the width of the table exceeds the width of its parent wrapper
- [ ] the shadowed element should appear/disappear when the screen is resized IF the table's wrapper becomes large enough to display the whole table
- [ ] position it within the table's parent, on top of the table so that it fills the height of the parent, and about 10% of the width
- [ ] the shadowed element should use `pointer-events: none;` to avoid conflicting with table elements/buttons
- [ ] we might want to add screen-reader only text in the same fashion that indicates the table is scrollable when it first becomes focused
## Context
From @esizer
>I think it could be similar to our hook to determine screen size and render it if that is true.
>
>You can change it to be:
>`div.scrollWidth > div.clientWidth`
>
>https://github.com/GCTC-NTGC/gc-digital-talent/blob/main/frontend/common/src/hooks/useIsSmallScreen.ts
<!-- Links -->
[image]: https://user-images.githubusercontent.com/305339/166570935-6425a615-82c7-4b2f-9840-5c7d698fa2ab.png
[image-link]: https://example.com
[figma]: https://go.talent.c4nada.ca/figma
[sitemap]: https://go.talent.c4nada.ca/sitemap
<!-- Find your prototype via https://go.talent.c4nada.ca/figma and click "play button" ▷ in top-right. -->
|
1.0
|
Increase table scroll affordance when table width exceeds screen real estate - <sup>ℹ️ [Figma (root)][figma] | [Sitemap][sitemap]</sup>
<sup>_This initial comment is collaborative and open to modification by all._</sup>
## Description
Certain tables have a tendency to exceed the available width of the viewport, and this creates a horizontal scrollbar that provides access to the rest of the table. It's not immediately obvious in some cases that this is possible, so the intent is to add a visual indicator to tables when a scrollbar is available that will make it clearer.
### Optional ideas that would benefit from design/dev feedback
- adding visible helper text?
- providing context on the SHIFT + scroll keyboard shortcut? (not sure if this is universally applicable across OSes / browsers)
- adding an icon?
## Screenshot: Prototype

## Acceptance Criteria
- [ ] a shadowed pseudo/child element should appear on the right-hand side of the table when the width of the table exceeds the width of its parent wrapper
- [ ] the shadowed element should appear/disappear when the screen is resized IF the table's wrapper becomes large enough to display the whole table
- [ ] position it within the table's parent, on top of the table so that it fills the height of the parent, and about 10% of the width
- [ ] the shadowed element should use `pointer-events: none;` to avoid conflicting with table elements/buttons
- [ ] we might want to add screen-reader only text in the same fashion that indicates the table is scrollable when it first becomes focused
## Context
From @esizer
>I think it could be similar to our hook to determine screen size and render it if that is true.
>
>You can change it to be:
>`div.scrollWidth > div.clientWidth`
>
>https://github.com/GCTC-NTGC/gc-digital-talent/blob/main/frontend/common/src/hooks/useIsSmallScreen.ts
<!-- Links -->
[image]: https://user-images.githubusercontent.com/305339/166570935-6425a615-82c7-4b2f-9840-5c7d698fa2ab.png
[image-link]: https://example.com
[figma]: https://go.talent.c4nada.ca/figma
[sitemap]: https://go.talent.c4nada.ca/sitemap
<!-- Find your prototype via https://go.talent.c4nada.ca/figma and click "play button" ▷ in top-right. -->
|
non_process
|
increase table scroll affordance when table width exceeds screen real estate ℹ️ this initial comment is collaborative and open to modification by all description certain tables have a tendency to exceed the available width of the viewport and this creates a horizontal scrollbar that provides access to the rest of the table it s not immediately obvious in some cases that this is possible so the intent is to add a visual indicator to tables when a scrollbar is available that will make it clearer optional ideas that would benefit from design dev feedback adding visible helper text providing context on the shift scroll keyboard shortcut not sure if this is universally applicable across oses browsers adding an icon screenshot prototype acceptance criteria a shadowed pseudo child element should appear on the right hand side of the table when the width of the table exceeds the width of its parent wrapper the shadowed element should appear disappear when the screen is resized if the table s wrapper becomes large enough to display the whole table position it within the table s parent on top of the table so that it fills the height of the parent and about of the width the shadowed element should use pointer events none to avoid conflicting with table elements buttons we might want to add screen reader only text in the same fashion that indicates the table is scrollable when it first becomes focused context from esizer i think it could be similar to our hook to determine screen size and render it if that is true you can change it to be div scrollwidth div clientwidth
| 0
|
7,828
| 11,008,133,644
|
IssuesEvent
|
2019-12-04 09:56:10
|
spring-projects/spring-hateoas
|
https://api.github.com/repos/spring-projects/spring-hateoas
|
closed
|
Spring hateoas show title ="" even the property title of Link is null
|
in: mediatypes process: waiting for feedback
|
Hi,
I am using spring-hateoas 1.0.1.RELEASE and spring boot 2.2.1.RELEASE.
I have a simple Person class:
```@AllArgsConstructor
@Getter
public class Person {
private String id;
private String name;
}
```
Assembler:
```
@Component
public class PersonAssembler implements SimpleRepresentationModelAssembler<Person> {
private final Class<?> controller;
public PersonAssembler() {
this.controller = PersonController.class;
}
@Override
public void addLinks(EntityModel<Person> resource) {
resource.add(WebMvcLinkBuilder.linkTo(controller).slash(resource.getContent().getId()).withSelfRel());
}
@Override
public void addLinks(CollectionModel<EntityModel<Person>> resources) {
}
}
```
and Controller:
```
@RestController
@RequestMapping("/people")
@Tag(name = "People Rest API", description = "People Rest API")
public class PersonController {
private final PersonAssembler personAssembler;
public PersonController(PersonAssembler personAssembler) {
this.personAssembler = personAssembler;
}
@GetMapping(value = "/{personId}")
@Operation(summary = "get person")
public EntityModel<Person> getPerson(@PathVariable String personId) {
Person person = new Person(personId, "test");
return personAssembler.toModel(person);
}
}
```
When I send a request to get a person, the response body shows:
```
{
"id": "1",
"name": "test",
"_links": {
"self": {
"href": "http://localhost:8374/people/1",
"title": ""
}
}
}
```
I have no clue where is title part comes from. I tried to debug and check the Link object and also see the property title of Link is null. Please take a look. Thank you
|
1.0
|
Spring hateoas show title ="" even the property title of Link is null - Hi,
I am using spring-hateoas 1.0.1.RELEASE and spring boot 2.2.1.RELEASE.
I have a simple Person class:
```@AllArgsConstructor
@Getter
public class Person {
private String id;
private String name;
}
```
Assembler:
```
@Component
public class PersonAssembler implements SimpleRepresentationModelAssembler<Person> {
private final Class<?> controller;
public PersonAssembler() {
this.controller = PersonController.class;
}
@Override
public void addLinks(EntityModel<Person> resource) {
resource.add(WebMvcLinkBuilder.linkTo(controller).slash(resource.getContent().getId()).withSelfRel());
}
@Override
public void addLinks(CollectionModel<EntityModel<Person>> resources) {
}
}
```
and Controller:
```
@RestController
@RequestMapping("/people")
@Tag(name = "People Rest API", description = "People Rest API")
public class PersonController {
private final PersonAssembler personAssembler;
public PersonController(PersonAssembler personAssembler) {
this.personAssembler = personAssembler;
}
@GetMapping(value = "/{personId}")
@Operation(summary = "get person")
public EntityModel<Person> getPerson(@PathVariable String personId) {
Person person = new Person(personId, "test");
return personAssembler.toModel(person);
}
}
```
When I send a request to get a person, the response body shows:
```
{
"id": "1",
"name": "test",
"_links": {
"self": {
"href": "http://localhost:8374/people/1",
"title": ""
}
}
}
```
I have no clue where is title part comes from. I tried to debug and check the Link object and also see the property title of Link is null. Please take a look. Thank you
|
process
|
spring hateoas show title even the property title of link is null hi i am using spring hateoas release and spring boot release i have a simple person class allargsconstructor getter public class person private string id private string name assembler component public class personassembler implements simplerepresentationmodelassembler private final class controller public personassembler this controller personcontroller class override public void addlinks entitymodel resource resource add webmvclinkbuilder linkto controller slash resource getcontent getid withselfrel override public void addlinks collectionmodel resources and controller restcontroller requestmapping people tag name people rest api description people rest api public class personcontroller private final personassembler personassembler public personcontroller personassembler personassembler this personassembler personassembler getmapping value personid operation summary get person public entitymodel getperson pathvariable string personid person person new person personid test return personassembler tomodel person when i send request to get a person the response body show id name test links self href title i have no clue where is title part comes from i tried to debug and check the link object and also see the property title of link is null please take a look thank you
| 1
|
8,456
| 11,631,060,157
|
IssuesEvent
|
2020-02-28 00:08:58
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
Autograd fails if used before multiprocessing Pool
|
actionable fixathon module: autograd module: multiprocessing topic: deadlock triaged
|
## 🐛 Bug
Just like [#3966](https://github.com/pytorch/pytorch/issues/3966), autograd fails when it has been used in the main process before starting a child process; the failure occurs not for normally started child processes, but for process pools.
## To Reproduce
```python
import torch
from torch import multiprocessing as mp
FAIL = True
def f(a=1):
torch.rand(3).requires_grad_(True).mean().backward()
return a ** 2
if FAIL:
f()
# This always works
p = mp.Process(target=f)
p.start()
p.join()
# This fails if autograd has been used
with mp.Pool(3) as pool:
result = pool.map(f, [1, 2, 3])
print(result)
```
## Expected Behavior
This can be fixed by adding
```python
if __name__ == '__main__':
mp.set_start_method("spawn")
```
As using a normal mp.Process works without this addition (after a fix for the issue described in [#3966](https://github.com/pytorch/pytorch/issues/3966)) I am unsure if this is a bug or not. Adding the snippet above to your code can be hard for more interactive uses such as jupyter notebook.
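The spawn-based workaround can also be expressed with an explicit context, which avoids the global `set_start_method` call. The sketch below uses only the standard library, since the start-method behavior itself is independent of torch:

```python
import multiprocessing as mp

def square(a):
    return a ** 2

if __name__ == "__main__":
    # A spawn context starts workers in fresh interpreters instead of
    # forking, so they do not inherit autograd engine state (or any other
    # parent-process state) the way fork-based pools do.
    ctx = mp.get_context("spawn")
    with ctx.Pool(3) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]
```

Because the context is local, other libraries in the same program that rely on the default start method are left untouched, which can matter in interactive settings like Jupyter.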
## Environment
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 9.1.85
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
Nvidia driver version: 418.56
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.17.2
[pip] numpydoc==0.9.1
[pip] torch==1.4.0
[pip] torchvision==0.5.0
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.14 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] pytorch 1.4.0 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch
[conda] torchvision 0.5.0 py37_cu101 pytorch
cc @ezyang @SsnL @albanD @zou3519 @gqchen
|
1.0
|
Autograd fails if used before multiprocessing Pool - ## 🐛 Bug
Just like [#3966](https://github.com/pytorch/pytorch/issues/3966), autograd fails when it has been used in the main process before starting a child process; the failure occurs not for normally started child processes, but for process pools.
## To Reproduce
```python
import torch
from torch import multiprocessing as mp
FAIL = True
def f(a=1):
torch.rand(3).requires_grad_(True).mean().backward()
return a ** 2
if FAIL:
f()
# This always works
p = mp.Process(target=f)
p.start()
p.join()
# This fails if autograd has been used
with mp.Pool(3) as pool:
result = pool.map(f, [1, 2, 3])
print(result)
```
## Expected Behavior
This can be fixed by adding
```python
if __name__ == '__main__':
mp.set_start_method("spawn")
```
As using a normal mp.Process works without this addition (after a fix for the issue described in [#3966](https://github.com/pytorch/pytorch/issues/3966)) I am unsure if this is a bug or not. Adding the snippet above to your code can be hard for more interactive uses such as jupyter notebook.
## Environment
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 9.1.85
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
Nvidia driver version: 418.56
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.17.2
[pip] numpydoc==0.9.1
[pip] torch==1.4.0
[pip] torchvision==0.5.0
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.14 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] pytorch 1.4.0 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch
[conda] torchvision 0.5.0 py37_cu101 pytorch
cc @ezyang @SsnL @albanD @zou3519 @gqchen
|
process
|
autograd fails if used before multiprocessing pool 🐛 bug just like autograd fails when using it on main process before a child process but not for normally started processes but for process pools to reproduce python import torch from torch import multiprocessing as mp fail true def f a torch rand requires grad true mean backward return a if fail f this always works p mp process target f p start p join this fails if autograd has been used with mp pool as pool result pool map f print result expected behavior this can be fixed by adding python if name main mp set start method spawn as using a normal mp process works without this addition after a fix for the issue described in i am unsure if this is a bug or not adding the snippet above to your code can be hard for more interactive uses such as jupyter notebook environment pytorch version is debug build no cuda used to build pytorch os ubuntu lts gcc version ubuntu cmake version could not collect python version is cuda available yes cuda runtime version gpu models and configuration gpu geforce gtx ti gpu geforce gtx ti nvidia driver version cudnn version could not collect versions of relevant libraries numpy numpydoc torch torchvision blas mkl mkl mkl service mkl fft mkl random pytorch pytorch torchvision pytorch cc ezyang ssnl alband gqchen
| 1
|
6,766
| 9,905,540,818
|
IssuesEvent
|
2019-06-27 11:52:12
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Batch processing a graphical modeller model results in AttributeError: 'NoneType' object has no attribute 'parameterValue'
|
Bug Processing
|
When a self-made model created in the graphical modeller is run in batch processing mode, it throws "Python error : An error has occurred while executing Python code: See message log (Python Error) for more details". Neither the message log nor the batch processing window log shows anything, but when Stack Trace is clicked, the following window pops up:
AttributeError: 'NoneType' object has no attribute 'parameterValue'
Traceback (most recent call last):
File "C:/PROGRA~1/QGIS3~1.4/apps/qgis-ltr/./python/plugins\processing\gui\BatchAlgorithmDialog.py", line 88, in runAlgorithm
parameters[param.name()] = wrapper.parameterValue()
AttributeError: 'NoneType' object has no attribute 'parameterValue'
When the model is run on single files, i.e. not in batch mode, it functions as expected (tested for all files which were to be included in the batch). Batch processing a simpler model (input - difference algorithm - output) throws the same error. Batch processing QGIS tools also seems to work fine, although I only tried a few.
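The traceback suggests the batch dialog assumes every parameter has a widget wrapper. A defensive version of that loop could look like this (a hypothetical mock, not the actual QGIS API):

```python
class Wrapper:
    """Stand-in for a Processing widget wrapper (hypothetical)."""
    def __init__(self, value):
        self._value = value

    def parameterValue(self):
        return self._value

def collect_parameters(param_names, wrappers):
    # Skip parameters whose wrapper was never created (None) instead of
    # raising AttributeError as in the traceback above.
    parameters = {}
    for name in param_names:
        wrapper = wrappers.get(name)
        if wrapper is not None:
            parameters[name] = wrapper.parameterValue()
    return parameters
```

Whether QGIS should instead create the missing wrapper for model inputs is a separate question; the sketch only shows how the crash itself can be avoided.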
A screenshot of the model with which I initially ran into the error:

Saving the output works in neither .shp nor .gpkg.
QGIS version
3.4.8-Madeira
QGIS code revision
04ee8e0761
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
2.4.1
Running against GDAL/OGR
2.4.1
Compiled against GEOS
3.7.2-CAPI-1.11.0
Running against GEOS
3.7.2-CAPI-1.11.0 b55d2125
PostgreSQL Client Version
9.2.4
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
520
Running against PROJ
5.2.0
Python version: 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)]
|
1.0
|
Batch processing a graphical modeller model results in AttributeError: 'NoneType' object has no attribute 'parameterValue' - When a self-made model created in the graphical modeller is run in batch processing mode, it throws "Python error : An error has occurred while executing Python code: See message log (Python Error) for more details". Neither the message log nor the batch processing window log shows anything, but when Stack Trace is clicked, the following window pops up:
AttributeError: 'NoneType' object has no attribute 'parameterValue'
Traceback (most recent call last):
File "C:/PROGRA~1/QGIS3~1.4/apps/qgis-ltr/./python/plugins\processing\gui\BatchAlgorithmDialog.py", line 88, in runAlgorithm
parameters[param.name()] = wrapper.parameterValue()
AttributeError: 'NoneType' object has no attribute 'parameterValue'
When the model is run on single files, i.e. not in batch mode, it functions as expected (tested for all files which were to be included in the batch). Batch processing a simpler model (input - difference algorithm - output) throws the same error. Batch processing QGIS tools also seems to work fine, although I only tried a few.
A screenshot of the model with which I initially ran into the error:

Saving the output works in neither .shp nor .gpkg.
QGIS version
3.4.8-Madeira
QGIS code revision
04ee8e0761
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
2.4.1
Running against GDAL/OGR
2.4.1
Compiled against GEOS
3.7.2-CAPI-1.11.0
Running against GEOS
3.7.2-CAPI-1.11.0 b55d2125
PostgreSQL Client Version
9.2.4
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
520
Running against PROJ
5.2.0
Python version: 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)]
|
process
|
batch processing a graphical modeller model results in attributeerror nonetype object has no attribute parametervalue when a self made model created in the graphical modeller is run in batch processing mode it throws python error an error has occurred while executing python code see message log python error for more details the message log nor the batch processing window log show anything but when stack trace is clicked the following window pops up attributeerror nonetype object has no attribute parametervalue traceback most recent call last file c progra apps qgis ltr python plugins processing gui batchalgorithmdialog py line in runalgorithm parameters wrapper parametervalue attributeerror nonetype object has no attribute parametervalue when the model is run on single files i e not in batch mode it functions as expected tested for all files which were to be included in the batch batch processing a simpler model input difference algorithm output throws the same error batch processing qgis tools also seems to work fine although i only tried a few a screetshot of the model with which i initially ran into the error saving the output does works in neither shp nor gpkg qgis version madeira qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi postgresql client version spatialite version qwt version version compiled against proj running against proj python version jun
| 1
|
37,870
| 2,831,695,944
|
IssuesEvent
|
2015-05-24 21:12:27
|
HellscreamWoW/Tracker
|
https://api.github.com/repos/HellscreamWoW/Tracker
|
closed
|
NPC: Monty
|
Priority-Normal Type-Creature
|
There is an issue with this npc, it's duplicated, it's like two of the same npc sitting into each other. Monty is in deeprun tram.
|
1.0
|
NPC: Monty - There is an issue with this npc, it's duplicated, it's like two of the same npc sitting into each other. Monty is in deeprun tram.
|
non_process
|
npc monty there is an issue with this npc it s duplicated it s like two of the same npc sitting into each other monty is in deeprun tram
| 0
|
6,218
| 9,150,867,002
|
IssuesEvent
|
2019-02-28 00:03:55
|
ArctosDB/new-collections
|
https://api.github.com/repos/ArctosDB/new-collections
|
closed
|
UNR Draft MOU
|
MOU draft in process
|
Work with new collection to complete Draft MOU, answer any questions about migration, Arctos operating procedures, and costs; (download sample template include collection contact info).
|
1.0
|
UNR Draft MOU - Work with new collection to complete Draft MOU, answer any questions about migration, Arctos operating procedures, and costs; (download sample template include collection contact info).
|
process
|
unr draft mou work with new collection to complete draft mou answer any questions about migration arctos operating procedures and costs download sample template include collection contact info
| 1
|
204,340
| 15,438,926,134
|
IssuesEvent
|
2021-03-07 22:12:45
|
trevorNgo/Measure2.0
|
https://api.github.com/repos/trevorNgo/Measure2.0
|
opened
|
CS4ZP6 Tester Feedback: Clicking "View Past Jobs" under the "Start New Year Term" section does not do anything
|
tester
|
**Description:** Clicking the **View Past Jobs** button under the **Start New Year Term** on the homepage when logged in as an **Admin** does not do anything.
**OS:** Windows 10 Enterprise
**Browser:** Chrome Version 89.0.4389.82
**Reproduction steps:**
* Sign in as an **Admin**
* Click on **View Past Jobs** under the **Start New Year Term** section
**Expected result:**
Past jobs should become visible.
**Actual result:**
Nothing happens when the button is clicked

|
1.0
|
CS4ZP6 Tester Feedback: Clicking "View Past Jobs" under the "Start New Year Term" section does not do anything - **Description:** Clicking the **View Past Jobs** button under the **Start New Year Term** on the homepage when logged in as an **Admin** does not do anything.
**OS:** Windows 10 Enterprise
**Browser:** Chrome Version 89.0.4389.82
**Reproduction steps:**
* Sign in as an **Admin**
* Click on **View Past Jobs** under the **Start New Year Term** section
**Expected result:**
Past jobs should become visible.
**Actual result:**
Nothing happens when the button is clicked

|
non_process
|
tester feedback clicking view past jobs under the start new year term section does not do anything description clicking the view past jobs button under the start new year term on the homepage when logged in as an admin does not do anything os windows enterprise browser chrome version reproduction steps sign in as an admin click on view past jobs under the start new year term section expected result past jobs should become visible actual result nothing happens when the button is clicked
| 0
|
221,684
| 17,364,895,119
|
IssuesEvent
|
2021-07-30 05:22:51
|
bitpodio/bitpodjs
|
https://api.github.com/repos/bitpodio/bitpodjs
|
closed
|
My Registration=>navigates to old "Reg not found" page if reload is done after email is edited
|
Bug Minor My Registration New Resolved-Accepted in Test Env rls_01-07-21
|
-Go to "My Registrations" page
-Enter valid details for Reg no & Email
-In another tab, edit the email address of the registration
-Reload the My Registration page=>Navigates to old "Registration not found" page
-Expected behavior as discussed with lokesh=> Default Parent page (My Registration) should be displayed with all fields reset
|
1.0
|
My Registration=>navigates to old "Reg not found" page if reload is done after email is edited - -Go to "My Registrations" page
-Enter valid details for Reg no & Email
-In another tab, edit the email address of the registration
-Reload the My Registration page=>Navigates to old "Registration not found" page
-Expected behavior as discussed with lokesh=> Default Parent page (My Registration) should be displayed with all fields reset
|
non_process
|
my registration navigates to old reg not found page if reload is done after email is edited go to my registrations page enter valid details for reg no email in another tab edit the email address of the registration reload the my registration page navigates to old registration not found page expected behavior as discussed with lokesh default parent page my registration should be displayed with all fields reset
| 0
|
113,290
| 9,635,156,563
|
IssuesEvent
|
2019-05-15 23:48:50
|
SpongePowered/Ore
|
https://api.github.com/repos/SpongePowered/Ore
|
closed
|
Manual (web-interface) and automatic (Gradle, API) deploy have different minimum character limit for Changelog/Release Bulletin.
|
status: needs testing type: bug report
|
**Describe the bug**
Manual (web-interface) and automatic (Gradle, API) deploys have different minimum character limits for the Changelog/Release Bulletin. The web interface allows any minimum length, but if you try to deploy automatically through Gradle, it errors out with a "Content too short" message.
**To Reproduce**
Steps to reproduce the behavior:
1. Try to manually add version to project. In "New project release" page, edit "Release Bulletin" section. Type any text with 1-14 character length (For example, "1"). Site will allow it.
2. Open or create OreTestPlugin project
3. Add or change `changelog='1'` (or any string with length of 1-14 characters) to oreDeploy section.
4. Execute `oreDeploy` task.
5. See error:
```
:oreDeploy
Publishing oretest to https://ore.spongepowered.org.
Recommended: false
Channel: snapshot
[failure] 400 Bad Request
* changelog
- Content too short.
:oreDeploy FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':oreDeploy'.
> Deployment failed.
```
**Expected behavior**
A consistent minimum character limits in both web-interface and API.
**Environment**
Any
**Additional context**
n/a
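One way to keep the limits consistent is a single shared validator used by both the form and the deploy endpoint. A Python sketch (Ore itself is written in Scala, and the constant below is illustrative, not Ore's actual threshold):

```python
MIN_CHANGELOG_LEN = 15  # illustrative threshold, not Ore's actual value

def validate_changelog(text):
    # One validator for both the web form and the Gradle/API deploy path,
    # so "Content too short." can never disagree between the two.
    if len(text.strip()) < MIN_CHANGELOG_LEN:
        raise ValueError("Content too short.")
    return text
```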
|
1.0
|
Manual (web-interface) and automatic (Gradle, API) deploy have different minimum character limit for Changelog/Release Bulletin. - **Describe the bug**
Manual (web-interface) and automatic (Gradle, API) deploys have different minimum character limits for the Changelog/Release Bulletin. The web interface allows any minimum length, but if you try to deploy automatically through Gradle, it errors out with a "Content too short" message.
**To Reproduce**
Steps to reproduce the behavior:
1. Try to manually add version to project. In "New project release" page, edit "Release Bulletin" section. Type any text with 1-14 character length (For example, "1"). Site will allow it.
2. Open or create OreTestPlugin project
3. Add or change `changelog='1'` (or any string with length of 1-14 characters) to oreDeploy section.
4. Execute `oreDeploy` task.
5. See error:
```
:oreDeploy
Publishing oretest to https://ore.spongepowered.org.
Recommended: false
Channel: snapshot
[failure] 400 Bad Request
* changelog
- Content too short.
:oreDeploy FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':oreDeploy'.
> Deployment failed.
```
**Expected behavior**
A consistent minimum character limits in both web-interface and API.
**Environment**
Any
**Additional context**
n/a
|
non_process
|
manual web interface and automatic gradle api deploy have different minimum character limit for changelog release bulletin describe the bug manual web interface and automatic gradle api deploy have different minimum character limit for changelog release bulletin web interface allows to have any minimum length but if you try to automatically deploy through gradle it will error out with content too short message to reproduce steps to reproduce the behavior try to manually add version to project in new project release page edit release bulletin section type any text with character length for example site will allow it open or create oretestplugin project add or change changelog or any string with length of characters to oredeploy section execute oredeploy task see error oredeploy publishing oretest to recommended false channel snapshot bad request changelog content too short oredeploy failed failure build failed with an exception what went wrong execution failed for task oredeploy deployment failed expected behavior a consistent minimum character limits in both web interface and api environment any additional context n a
| 0
|
2,144
| 4,996,193,420
|
IssuesEvent
|
2016-12-09 13:00:52
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
opened
|
[subtitles] [eng] Réunion publique à Bordeaux
|
Language: English Process: [2] Ready for review (1)
|
# Video title
Réunion publique à Bordeaux
# URL
https://www.youtube.com/watch?v=boAJ1tTpW0s
# Youtube subtitle language
English
# Duration
1:48:52
# URL subtitles
https://www.youtube.com/timedtext_editor?lang=en&ui=hd&v=boAJ1tTpW0s&ref=player&action_mde_edit_form=1&tab=captions&bl=vmp
|
1.0
|
[subtitles] [eng] Réunion publique à Bordeaux - # Video title
Réunion publique à Bordeaux
# URL
https://www.youtube.com/watch?v=boAJ1tTpW0s
# Youtube subtitle language
English
# Duration
1:48:52
# URL subtitles
https://www.youtube.com/timedtext_editor?lang=en&ui=hd&v=boAJ1tTpW0s&ref=player&action_mde_edit_form=1&tab=captions&bl=vmp
|
process
|
réunion publique à bordeaux video title réunion publique à bordeaux url youtube subtitle language english duration url subtitles
| 1
|
10,959
| 13,764,378,319
|
IssuesEvent
|
2020-10-07 11:59:33
|
googleapis/java-bigtable-hbase
|
https://api.github.com/repos/googleapis/java-bigtable-hbase
|
closed
|
Add kokoro job for sequencefile importer
|
api: bigtable type: process
|
To prevent #2366 from re-occurring, we need to have a kokoro test for the sequencefileIntegrationTest profile
something like:
mvn verify \
-pl bigtable-dataflow-parent/bigtable-beam-import \
-Dgoogle.bigtable.instance.id="int-tst" \
-Dgoogle.bigtable.project.id="PROJECT_ID" \
-Dgoogle.dataflow.stagingLocation="gs://SOME_GCS_PATH_TO_USE_FOR_TEMP_FILES" \
-Dcloud.test.data.folder="gs://SOME_GCS_PATH_TO_USER_FOR_TEMP_DATA_FILES/" \
-PsequencefileIntegrationTest
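The command above can be assembled in a job script like this (a sketch; the instance ID, project ID, and GCS paths are the placeholders from the issue, not real resources):

```python
def sequencefile_it_command(project_id, staging_gcs, data_gcs,
                            instance_id="int-tst"):
    # Assemble the Maven invocation for the sequencefileIntegrationTest
    # profile; all IDs and GCS paths here are placeholders.
    return [
        "mvn", "verify",
        "-pl", "bigtable-dataflow-parent/bigtable-beam-import",
        f"-Dgoogle.bigtable.instance.id={instance_id}",
        f"-Dgoogle.bigtable.project.id={project_id}",
        f"-Dgoogle.dataflow.stagingLocation={staging_gcs}",
        f"-Dcloud.test.data.folder={data_gcs}",
        "-PsequencefileIntegrationTest",
    ]
```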
|
1.0
|
Add kokoro job for sequencefile importer - To prevent #2366 from re-occurring, we need to have a kokoro test for the sequencefileIntegrationTest profile
something like:
mvn verify \
-pl bigtable-dataflow-parent/bigtable-beam-import \
-Dgoogle.bigtable.instance.id="int-tst" \
-Dgoogle.bigtable.project.id="PROJECT_ID" \
-Dgoogle.dataflow.stagingLocation="gs://SOME_GCS_PATH_TO_USE_FOR_TEMP_FILES" \
-Dcloud.test.data.folder="gs://SOME_GCS_PATH_TO_USER_FOR_TEMP_DATA_FILES/" \
-PsequencefileIntegrationTest
|
process
|
add kokoro job for sequencefile importer to prevent from re occurring we need to have a kokoro test for the sequencefileintegrationtest profile something like mvn verify pl bigtable dataflow parent bigtable beam import dgoogle bigtable instance id int tst dgoogle bigtable project id project id dgoogle dataflow staginglocation gs some gcs path to use for temp files dcloud test data folder gs some gcs path to user for temp data files psequencefileintegrationtest
| 1
|
5,712
| 2,610,213,919
|
IssuesEvent
|
2015-02-26 19:08:17
|
chrsmith/somefinders
|
https://api.github.com/repos/chrsmith/somefinders
|
opened
|
попки иписьки
|
auto-migrated Priority-Medium Type-Defect
|
```
'''Ананий Фёдоров'''
Привет всем не подскажите где можно найти
.попки иписьки. как то выкладывали уже
'''Воин Воронов'''
Качай тут http://bit.ly/HrOecI
'''Ванадий Афанасьев'''
Просит ввести номер мобилы!Не опасно ли это?
'''Адам Белоусов'''
Неа все ок у меня ничего не списало
'''Арнольд Крюков'''
Неа все ок у меня ничего не списало
Информация о файле: попки иписьки
Загружен: В этом месяце
Скачан раз: 200
Рейтинг: 1478
Средняя скорость скачивания: 611
Похожих файлов: 40
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 16 Dec 2013 at 12:14
|
1.0
|
попки иписьки - ```
'''Ананий Фёдоров'''
Привет всем не подскажите где можно найти
.попки иписьки. как то выкладывали уже
'''Воин Воронов'''
Качай тут http://bit.ly/HrOecI
'''Ванадий Афанасьев'''
Просит ввести номер мобилы!Не опасно ли это?
'''Адам Белоусов'''
Неа все ок у меня ничего не списало
'''Арнольд Крюков'''
Неа все ок у меня ничего не списало
Информация о файле: попки иписьки
Загружен: В этом месяце
Скачан раз: 200
Рейтинг: 1478
Средняя скорость скачивания: 611
Похожих файлов: 40
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 16 Dec 2013 at 12:14
|
non_process
|
попки иписьки ананий фёдоров привет всем не подскажите где можно найти попки иписьки как то выкладывали уже воин воронов качай тут ванадий афанасьев просит ввести номер мобилы не опасно ли это адам белоусов неа все ок у меня ничего не списало арнольд крюков неа все ок у меня ничего не списало информация о файле попки иписьки загружен в этом месяце скачан раз рейтинг средняя скорость скачивания похожих файлов original issue reported on code google com by kondense gmail com on dec at
| 0
|
1,723
| 4,380,975,994
|
IssuesEvent
|
2016-08-06 00:36:27
|
bpython/bpython
|
https://api.github.com/repos/bpython/bpython
|
closed
|
bpython run greenlet bug
|
requires-separate-process
|
```
from greenlet import getcurrent
getcurrent()
getcurrent()
getcurrent()
```
run the code above; even though I call `getcurrent` 3 (or more) times, it returns the same object in the `python` shell and the `ipython` shell, but it fails in `bpython`: it gives more than one distinct greenlet, which causes bugs like [flask context error](https://github.com/smurfix/flask-script/issues/155)
|
1.0
|
bpython run greenlet bug - ```
from greenlet import getcurrent
getcurrent()
getcurrent()
getcurrent()
```
run the code above; even though I call `getcurrent` 3 (or more) times, it returns the same object in the `python` shell and the `ipython` shell, but it fails in `bpython`: it gives more than one distinct greenlet, which causes bugs like [flask context error](https://github.com/smurfix/flask-script/issues/155)
|
process
|
bpython run greenlet bug from greenlet import getcurrent getcurrent getcurrent getcurrent run the code above even i call getcurrent for or more times it will get the same obj in python shell and ipython shell but it failed in bpython it ll give more than one different greenlet so it ll cause bugs like
| 1
|
17,661
| 23,480,753,139
|
IssuesEvent
|
2022-08-17 10:19:42
|
q191201771/lal
|
https://api.github.com/repos/q191201771/lal
|
closed
|
Deprecation of package ioutil in Go 1.16
|
*In process #Opt
|
the `ioutil` functions need to be changed to their `io`/`os` package equivalents
`
"io/ioutil" has been deprecated since Go 1.16: As of Go 1.16, the same functionality is now provided by package io or package os, and those implementations should be preferred in new code. See the specific function documentation for details. (SA1019)
`
|
1.0
|
Deprecation of package ioutil in Go 1.16 - the `ioutil` functions need to be changed to their `io`/`os` package equivalents
`
"io/ioutil" has been deprecated since Go 1.16: As of Go 1.16, the same functionality is now provided by package io or package os, and those implementations should be preferred in new code. See the specific function documentation for details. (SA1019)
`
|
process
|
deprecation of package ioutil in go the ioutil functions need to be changed to io packages io ioutil has been deprecated since go as of go the same functionality is now provided by package io or package os and those implementations should be preferred in new code see the specific function documentation for details
| 1
|
2,099
| 4,932,132,162
|
IssuesEvent
|
2016-11-28 12:38:49
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
Document add_cloud_metadata processor
|
:Processors docs v5.1.0
|
We need to add documentation for the `add_cloud_metadata` (#2728) processor. I can write up the initial draft for this.
In support of new processors, we should probably look at reorganizing the [documentation](https://www.elastic.co/guide/en/beats/winlogbeat/5.0/configuration-processors.html) for processors. Other products have a page per processor.
|
1.0
|
Document add_cloud_metadata processor - We need to add documentation for the `add_cloud_metadata` (#2728) processor. I can write up the initial draft for this.
In support of new processors, we should probably look at reorganizing the [documentation](https://www.elastic.co/guide/en/beats/winlogbeat/5.0/configuration-processors.html) for processors. Other products have a page per processor.
|
process
|
document add cloud metadata processor we need to add documentation for the add cloud metadata processor i can write up the initial draft for this in support of new processors we should probably look at reorganizing the for processors other products have a page per processor
| 1
|
17,146
| 22,692,895,183
|
IssuesEvent
|
2022-07-05 00:15:47
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
k8sprocessor: enrich metadata if only pod name and pod namespace are available
|
comp: k8sprocessor
|
**Is your feature request related to a problem? Please describe.**
k8sprocessor enriches metadata only by `podUid` or pod IP. I have a case where only the pod name and pod namespace are obtained by the receiver, so I am unable to enrich metadata using this processor.
**Describe the solution you'd like**
I would like to store metadata under an additional key [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/7332cf3c77c24749d5089f83e046ee1b02351f7a/processor/k8sprocessor/kube/client.go#L368-L401), which would be built as `k8s.pod.name`.`k8s.namespace.name`.
I cannot use `host.name` as it can contain any hostname.
I will prepare PR with proposal
**Describe alternatives you've considered**
N/A
**Additional context**
N/A
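The proposed composite key can be sketched as follows (a minimal sketch; the key format follows the issue's proposal, while the cache itself is hypothetical):

```python
def pod_key(pod_name, namespace):
    # Composite lookup key proposed above: "<k8s.pod.name>.<k8s.namespace.name>".
    return f"{pod_name}.{namespace}"

def lookup_metadata(cache, pod_name, namespace):
    # Fall back to the composite key when neither pod UID nor pod IP is known.
    return cache.get(pod_key(pod_name, namespace))
```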
|
1.0
|
k8sprocessor: enrich metadata if only pod name and pod namespace are available - **Is your feature request related to a problem? Please describe.**
k8sprocessor enriches metadata only by `podUid` or pod IP. I have a case where only the pod name and pod namespace are obtained by the receiver, so I am unable to enrich metadata using this processor.
**Describe the solution you'd like**
I would like to store metadata under an additional key [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/7332cf3c77c24749d5089f83e046ee1b02351f7a/processor/k8sprocessor/kube/client.go#L368-L401), which would be built as `k8s.pod.name`.`k8s.namespace.name`.
I cannot use `host.name` as it can contain any hostname.
I will prepare PR with proposal
**Describe alternatives you've considered**
N/A
**Additional context**
N/A
|
process
|
enrich metadata if only pod name and pod namespace are available is your feature request related to a problem please describe enriches metadata only for poduid or pod ip i have a case when only pod name and pod namespace are obtained by the receiver in that case i m unable to enrich metadata using this processor describe the solution you d like i would like to store metadata in additional key which would be build as pod name namespace name i cannot use host name as it can contains any hostname name i will prepare pr with proposal describe alternatives you ve considered n a additional context n a
| 1
|
62,140
| 12,198,029,927
|
IssuesEvent
|
2020-04-29 21:56:57
|
firebase/firebase-ios-sdk
|
https://api.github.com/repos/firebase/firebase-ios-sdk
|
closed
|
iOS 9 crashes after Xcode 11.4
|
Xcode 11.4 - 32 bit api: analytics
|
<!-- DO NOT DELETE
validate_template=true
template_path=.github/ISSUE_TEMPLATE/bug_report.md
-->
### [REQUIRED] Step 1: Describe your environment
* Xcode version: 11.4
* Firebase SDK version: 6.9.0
* Firebase Component: Analytics
* Component version: 6.1.2
* Installation method: `CocoaPods`
### [REQUIRED] Step 2: Describe the problem
I haven't updated the Firebase/Analytics SDK in a while, but yesterday I started getting tons of new crashes in many parts of the SDK after updating my app. The updated app was built with Xcode 11.4, which I suspect is the issue because I didn't change anything related to the SDK itself or its usage.
#### Steps to reproduce:
These crashes are mostly on iOS 9, although I see a few on iOS 10 as well. I'm having trouble reproing them, so all crash reports are from production.
#### Crash Traces
Here are some crash traces
This one has 26 crashes so far:
[com.scribbletogether.scribble_issue_crash_7e178704a3ad4bfeb8f00348edf9667b_DNE_0_v2.txt](https://github.com/firebase/firebase-ios-sdk/files/4409400/com.scribbletogether.scribble_issue_crash_7e178704a3ad4bfeb8f00348edf9667b_DNE_0_v2.txt)
Most of the rest of the crashes are one-off. Each crash trace is unique. There are ~90 issues each one with a new unique trace.
<img width="990" alt="Screen Shot 2020-03-31 at 9 44 22 AM" src="https://user-images.githubusercontent.com/1337793/78033679-b3304480-7334-11ea-9c54-ee11bf3187b2.png">
Here is an example of one:
[APMDatabase Crash](https://github.com/firebase/firebase-ios-sdk/files/4409424/com.scribbletogether.scribble_issue_crash_75b64869d7274f4e9c9fe38045e27dc4_DNE_0_v2.txt)
|
1.0
|
iOS 9 crashes after Xcode 11.4 - <!-- DO NOT DELETE
validate_template=true
template_path=.github/ISSUE_TEMPLATE/bug_report.md
-->
### [REQUIRED] Step 1: Describe your environment
* Xcode version: 11.4
* Firebase SDK version: 6.9.0
* Firebase Component: Analytics
* Component version: 6.1.2
* Installation method: `CocoaPods`
### [REQUIRED] Step 2: Describe the problem
I haven't updated the Firebase/Analytics SDK in a while, but yesterday I started getting tons of new crashes in many parts of the SDK after updating my app. The updated app was built with Xcode 11.4, which I suspect is the issue because I didn't change anything related to the SDK itself or its usage.
#### Steps to reproduce:
These crashes are mostly on iOS 9, although I see a few on iOS 10 as well. I'm having trouble reproing them, so all crash reports are from production.
#### Crash Traces
Here are some crash traces
This one has 26 crashes so far:
[com.scribbletogether.scribble_issue_crash_7e178704a3ad4bfeb8f00348edf9667b_DNE_0_v2.txt](https://github.com/firebase/firebase-ios-sdk/files/4409400/com.scribbletogether.scribble_issue_crash_7e178704a3ad4bfeb8f00348edf9667b_DNE_0_v2.txt)
Most of the rest of the crashes are one-off. Each crash trace is unique. There are ~90 issues each one with a new unique trace.
<img width="990" alt="Screen Shot 2020-03-31 at 9 44 22 AM" src="https://user-images.githubusercontent.com/1337793/78033679-b3304480-7334-11ea-9c54-ee11bf3187b2.png">
Here is an example of one:
[APMDatabase Crash](https://github.com/firebase/firebase-ios-sdk/files/4409424/com.scribbletogether.scribble_issue_crash_75b64869d7274f4e9c9fe38045e27dc4_DNE_0_v2.txt)
|
non_process
|
ios crashes after xcode do not delete validate template true template path github issue template bug report md step describe your environment xcode version firebase sdk version firebase component analytics component version installation method cocoapods step describe the problem i haven t updated firebase analytics sdk in a while but yesterday i started gettings tons of new crashes in many parts of the sdk after updating my app the updated app was built with xcode which i suspect is the issue because i didn t change anything related to the sdk itself or the usage of the sdk steps to reproduce these crashes are mostly on ios although i see a few on ios as well i m having trouble reproing them so all crash reports are from production crash traces here are some crash traces this one has crashes so far most of the rest of the crashes are one off each crash trace is unique there are issues each one with a new unique trace img width alt screen shot at am src here is an example of one
| 0
|
3,159
| 6,216,856,594
|
IssuesEvent
|
2017-07-08 08:43:48
|
bpython/bpython
|
https://api.github.com/repos/bpython/bpython
|
closed
|
Let bpython be run from not main thread
|
frontend-limitations requires-separate-process
|
I think this is just a problem with signal handlers. I'm trying playing with this now.
|
1.0
|
Let bpython be run from not main thread - I think this is just a problem with signal handlers. I'm trying playing with this now.
|
process
|
let bpython be run from not main thread i think this is just a problem with signal handlers i m trying playing with this now
| 1
|
132,334
| 18,268,455,004
|
IssuesEvent
|
2021-10-04 11:14:38
|
AnyVisionltd/anv-ui-components
|
https://api.github.com/repos/AnyVisionltd/anv-ui-components
|
closed
|
CVE-2021-23343 (High) detected in path-parse-1.0.6.tgz
|
security vulnerability
|
## CVE-2021-23343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>path-parse-1.0.6.tgz</b></p></summary>
<p>Node.js path.parse() ponyfill</p>
<p>Library home page: <a href="https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz">https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz</a></p>
<p>
Dependency Hierarchy:
- node-sass-4.14.1.tgz (Root Library)
- meow-3.7.0.tgz
- normalize-package-data-2.5.0.tgz
- resolve-1.19.0.tgz
- :x: **path-parse-1.0.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AnyVisionltd/anv-ui-components/commit/3a525a622d8df0f64ce9b7b982c16bd1e878c6a9">3a525a622d8df0f64ce9b7b982c16bd1e878c6a9</a></p>
<p>Found in base branch: <b>development</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package path-parse are vulnerable to Regular Expression Denial of Service (ReDoS) via splitDeviceRe, splitTailRe, and splitPathRe regular expressions. ReDoS exhibits polynomial worst-case time complexity.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23343>CVE-2021-23343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jbgutierrez/path-parse/issues/8">https://github.com/jbgutierrez/path-parse/issues/8</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: path-parse - 1.0.7</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"path-parse","packageVersion":"1.0.6","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"node-sass:4.14.1;meow:3.7.0;normalize-package-data:2.5.0;resolve:1.19.0;path-parse:1.0.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"path-parse - 1.0.7"}],"baseBranches":["development"],"vulnerabilityIdentifier":"CVE-2021-23343","vulnerabilityDetails":"All versions of package path-parse are vulnerable to Regular Expression Denial of Service (ReDoS) via splitDeviceRe, splitTailRe, and splitPathRe regular expressions. ReDoS exhibits polynomial worst-case time complexity.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23343","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
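The polynomial backtracking described above can be demonstrated in miniature. This is a hedged sketch using a toy pattern, not path-parse's actual `splitPathRe`; the shape (two adjacent unbounded quantifiers over the same characters) is what produces the reported O(n²) worst case:

```python
import re
import time

# On an input of n 'a's followed by a non-matching character, the engine
# tries ~n^2/2 ways to split the run between the two 'a*' groups before
# failing -- the same polynomial-ReDoS shape reported against path-parse.
PATTERN = re.compile(r"^a*a*$")

def probe(n):
    start = time.perf_counter()
    matched = PATTERN.match("a" * n + "!")
    return matched, time.perf_counter() - start

# Doubling n roughly quadruples the failure time for patterns like this.
matched, elapsed = probe(2000)
```

Upgrading to path-parse 1.0.7, as suggested above, replaces the vulnerable expressions.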
|
True
|
CVE-2021-23343 (High) detected in path-parse-1.0.6.tgz - ## CVE-2021-23343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>path-parse-1.0.6.tgz</b></p></summary>
<p>Node.js path.parse() ponyfill</p>
<p>Library home page: <a href="https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz">https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz</a></p>
<p>
Dependency Hierarchy:
- node-sass-4.14.1.tgz (Root Library)
- meow-3.7.0.tgz
- normalize-package-data-2.5.0.tgz
- resolve-1.19.0.tgz
- :x: **path-parse-1.0.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AnyVisionltd/anv-ui-components/commit/3a525a622d8df0f64ce9b7b982c16bd1e878c6a9">3a525a622d8df0f64ce9b7b982c16bd1e878c6a9</a></p>
<p>Found in base branch: <b>development</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package path-parse are vulnerable to Regular Expression Denial of Service (ReDoS) via splitDeviceRe, splitTailRe, and splitPathRe regular expressions. ReDoS exhibits polynomial worst-case time complexity.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23343>CVE-2021-23343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jbgutierrez/path-parse/issues/8">https://github.com/jbgutierrez/path-parse/issues/8</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: path-parse - 1.0.7</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"path-parse","packageVersion":"1.0.6","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"node-sass:4.14.1;meow:3.7.0;normalize-package-data:2.5.0;resolve:1.19.0;path-parse:1.0.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"path-parse - 1.0.7"}],"baseBranches":["development"],"vulnerabilityIdentifier":"CVE-2021-23343","vulnerabilityDetails":"All versions of package path-parse are vulnerable to Regular Expression Denial of Service (ReDoS) via splitDeviceRe, splitTailRe, and splitPathRe regular expressions. ReDoS exhibits polynomial worst-case time complexity.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23343","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in path parse tgz cve high severity vulnerability vulnerable library path parse tgz node js path parse ponyfill library home page a href dependency hierarchy node sass tgz root library meow tgz normalize package data tgz resolve tgz x path parse tgz vulnerable library found in head commit a href found in base branch development vulnerability details all versions of package path parse are vulnerable to regular expression denial of service redos via splitdevicere splittailre and splitpathre regular expressions redos exhibits polynomial worst case time complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution path parse isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree node sass meow normalize package data resolve path parse isminimumfixversionavailable true minimumfixversion path parse basebranches vulnerabilityidentifier cve vulnerabilitydetails all versions of package path parse are vulnerable to regular expression denial of service redos via splitdevicere splittailre and splitpathre regular expressions redos exhibits polynomial worst case time complexity vulnerabilityurl
| 0
|
1,378
| 3,941,734,835
|
IssuesEvent
|
2016-04-27 09:02:48
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Implement proper encoding name normalization
|
!IMPORTANT! AREA: server COMPLEXITY: easy SYSTEM: resource processing TYPE: enhancement
|
Currently we have pretty trivial encoding name normalization: https://github.com/DevExpress/testcafe-hammerhead/blob/master/src/processing/encoding/charset.js#L50
But possible encoding aliases (aka labels) are defined in HTML spec: https://encoding.spec.whatwg.org/#names-and-labels
We need to use this labels table instead.
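The spec's lookup amounts to a two-step normalization followed by a table hit. A minimal sketch, assuming a tiny subset of the full WHATWG labels table (the real table at https://encoding.spec.whatwg.org/#names-and-labels has many more entries):

```python
# Illustrative subset of the WHATWG "labels to encoding" table.
LABELS = {
    "utf-8": "UTF-8", "utf8": "UTF-8", "unicode-1-1-utf-8": "UTF-8",
    "latin1": "windows-1252", "iso-8859-1": "windows-1252", "ascii": "windows-1252",
}

def get_encoding(label):
    # Per the spec: strip leading/trailing ASCII whitespace, then
    # ASCII-lowercase the label before the table lookup.
    key = label.strip("\t\n\f\r ").lower()
    return LABELS.get(key)  # None signals "failure" (unknown label)
```

Note that under this scheme labels such as `latin1` resolve to `windows-1252`, not ISO-8859-1, which is the behavior browsers implement.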
|
1.0
|
Implement proper encoding name normalization - Currently we have pretty trivial encoding name normalization: https://github.com/DevExpress/testcafe-hammerhead/blob/master/src/processing/encoding/charset.js#L50
But possible encoding aliases (aka labels) are defined in HTML spec: https://encoding.spec.whatwg.org/#names-and-labels
We need to use this labels table instead.
|
process
|
implement proper encoding name normalizaton currently we have pretty trivial encoding name normalization but possible encoding aliases aka labels are defined in html spec we need use this labels table instead
| 1
|
19,810
| 3,784,909,223
|
IssuesEvent
|
2016-03-20 04:55:37
|
HubTurbo/HubTurbo
|
https://api.github.com/repos/HubTurbo/HubTurbo
|
closed
|
Cache dependencies on Travis to make building faster
|
aspect-testing priority.medium type.enhancement
|
Every build on Travis downloads all of the dependencies, which takes a considerable amount of time.
We should try caching the dependencies to reduce the waiting time.
The Travis Doc mentioned a way to do this:
https://docs.travis-ci.com/user/languages/java#Caching
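For reference, the approach from the linked doc boils down to a short config fragment. The paths below assume a Gradle build (adjust for Maven or another tool):

```yaml
language: java
# Cache downloaded dependencies between builds so they are not re-fetched.
cache:
  directories:
    - $HOME/.gradle/caches/
    - $HOME/.gradle/wrapper/
before_cache:
  # Travis recommends removing Gradle lock files so the cache stays reusable.
  - rm -f $HOME/.gradle/caches/modules-2/modules-2.lock
```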
|
1.0
|
Cache dependencies on Travis to make building faster - Every build on Travis downloads all of the dependencies, which takes a considerable amount of time.
We should try caching the dependencies to reduce the waiting time.
The Travis Doc mentioned a way to do this:
https://docs.travis-ci.com/user/languages/java#Caching
|
non_process
|
cache dependencies on travis to make building faster every build on travis is downloading all the dependencies which takes quite an amount of time we should try caching the dependencies to reduce the waiting time the travis doc mentioned a way to do this
| 0
|
54,863
| 3,071,454,741
|
IssuesEvent
|
2015-08-19 12:14:29
|
pavel-pimenov/flylinkdc-r5xx
|
https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx
|
closed
|
White background appears in the user list (r500 beta28)
|
bug Component-UI imported Priority-Medium
|
_From [dimitrij...@gmail.com](https://code.google.com/u/117085084104156933070/) on October 01, 2010 18:17:14_
When hovering the mouse cursor over a user's description, a white background appears beneath the description. The white background does not appear if the whole description does not fit into the column width (in that case a tooltip with the untruncated description appears instead). After a while the white background disappears.
**Attachment:** [screenshot-255.png](http://code.google.com/p/flylinkdc/issues/detail?id=190)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=190_
|
1.0
|
White background appears in the user list (r500 beta28) - _From [dimitrij...@gmail.com](https://code.google.com/u/117085084104156933070/) on October 01, 2010 18:17:14_
When hovering the mouse cursor over a user's description, a white background appears beneath the description. The white background does not appear if the whole description does not fit into the column width (in that case a tooltip with the untruncated description appears instead). After a while the white background disappears.
**Attachment:** [screenshot-255.png](http://code.google.com/p/flylinkdc/issues/detail?id=190)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=190_
|
non_process
|
white background appears in the user list from on october when hovering the mouse cursor over a user s description a white background appears beneath the description the white background does not appear if the whole description does not fit into the column width in that case a tooltip with the untruncated description appears instead after a while the white background disappears attachment original issue
| 0
|
6,524
| 9,612,280,912
|
IssuesEvent
|
2019-05-13 08:32:51
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
in IE11, my content="IE=edge" not working.
|
AREA: server SYSTEM: resource processing TYPE: bug
|
<!--
If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below.
Make sure that you tried using the latest TestCafe version (https://github.com/DevExpress/testcafe/releases), where this behavior might have been already addressed.
Before submitting an issue, please check CONTRIBUTING.md and existing issues in this repository (https://github.com/DevExpress/testcafe/issues) in case a similar issue exists or was already addressed. This may save your time (and ours).
-->
### What is your Test Scenario?
TestCafe adds its tags before my '<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />', so IE 11 cannot run in IE 11 mode.
### What is the Current behavior?
the page rendered as IE 7 in IE 11.
### What is the Expected behavior?
the page rendered as IE 11 in IE 11.
### What is your web application and your TestCafe test code?
my html:
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />
<meta charset="utf-8" />
<title></title>
<meta name="viewport" content="width=device-width, initial-scale=1" />
... ...
testcafe code:
import { Selector } from 'testcafe';
fixture `New Fixture`
.page `http://127.0.0.1:8080/framework`;
test('New Test', async t => {
await t
.typeText(Selector('#username'), 'admin')
.pressKey('tab')
.typeText(Selector('#password'), 'admin')
.pressKey('enter');
});
Your website URL (or attach your complete example):
<details>
<summary>Your complete test code (or attach your test files):</summary>
<!-- Paste your test code here: -->
```js
```
</details>
<details>
<summary>Your complete test report:</summary>
<!-- Paste your complete result test report here (even if it is huge): -->
```
```
</details>
<details>
<summary>Screenshots:</summary>
<!-- If applicable, add screenshots to help explain the issue. -->
```
```
</details>
### Steps to Reproduce:
<!-- Describe what we should do to reproduce the behavior you encountered. -->
1. Go to my website ...
3. Execute this command...
4. See the error...
### Your Environment details:
* testcafe version: <!-- run `testcafe -v` -->
* node.js version: <!-- run `node -v` -->
* command-line arguments: <!-- example: "testcafe ie,chrome -e test.js" -->
* browser name and version: <!-- example: IE 11, Chrome 69, Firefox 100, etc. -->
* platform and version: <!-- example: "macOS 10.14, Windows, Linux Ubuntu 18.04.1, iOS 12 -->
* other: <!-- any notes you consider important -->
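The fix direction implied by this report is to place injected markup *after* any X-UA-Compatible meta tag, since IE only honors the document-mode directive when it appears before other head content. A hypothetical sketch of that placement logic (this is not TestCafe's actual code; the regex and helper name are illustrative):

```python
import re

# Matches an X-UA-Compatible meta tag such as the one in the report above.
META_RE = re.compile(r"<meta[^>]*http-equiv=[\"']X-UA-Compatible[\"'][^>]*>",
                     re.IGNORECASE)

def inject(html, payload):
    """Insert proxy markup after the X-UA-Compatible meta tag if present,
    otherwise right after <head>."""
    m = META_RE.search(html)
    if m:
        return html[: m.end()] + payload + html[m.end():]
    return html.replace("<head>", "<head>" + payload, 1)
```

With this ordering, the compatibility directive stays first in `<head>` and IE 11 renders in edge mode.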
|
1.0
|
in IE11, my content="IE=edge" not working. - <!--
If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below.
Make sure that you tried using the latest TestCafe version (https://github.com/DevExpress/testcafe/releases), where this behavior might have been already addressed.
Before submitting an issue, please check CONTRIBUTING.md and existing issues in this repository (https://github.com/DevExpress/testcafe/issues) in case a similar issue exists or was already addressed. This may save your time (and ours).
-->
### What is your Test Scenario?
TestCafe adds its tags before my '<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />', so IE 11 cannot run in IE 11 mode.
### What is the Current behavior?
the page rendered as IE 7 in IE 11.
### What is the Expected behavior?
the page rendered as IE 11 in IE 11.
### What is your web application and your TestCafe test code?
my html:
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />
<meta charset="utf-8" />
<title></title>
<meta name="viewport" content="width=device-width, initial-scale=1" />
... ...
testcafe code:
import { Selector } from 'testcafe';
fixture `New Fixture`
.page `http://127.0.0.1:8080/framework`;
test('New Test', async t => {
await t
.typeText(Selector('#username'), 'admin')
.pressKey('tab')
.typeText(Selector('#password'), 'admin')
.pressKey('enter');
});
Your website URL (or attach your complete example):
<details>
<summary>Your complete test code (or attach your test files):</summary>
<!-- Paste your test code here: -->
```js
```
</details>
<details>
<summary>Your complete test report:</summary>
<!-- Paste your complete result test report here (even if it is huge): -->
```
```
</details>
<details>
<summary>Screenshots:</summary>
<!-- If applicable, add screenshots to help explain the issue. -->
```
```
</details>
### Steps to Reproduce:
<!-- Describe what we should do to reproduce the behavior you encountered. -->
1. Go to my website ...
3. Execute this command...
4. See the error...
### Your Environment details:
* testcafe version: <!-- run `testcafe -v` -->
* node.js version: <!-- run `node -v` -->
* command-line arguments: <!-- example: "testcafe ie,chrome -e test.js" -->
* browser name and version: <!-- example: IE 11, Chrome 69, Firefox 100, etc. -->
* platform and version: <!-- example: "macOS 10.14, Windows, Linux Ubuntu 18.04.1, iOS 12 -->
* other: <!-- any notes you consider important -->
|
process
|
in my content ie edge not working if you have all reproduction steps with a complete sample app please share as many details as possible in the sections below make sure that you tried using the latest testcafe version where this behavior might have been already addressed before submitting an issue please check contributing md and existing issues in this repository in case a similar issue exists or was already addressed this may save your time and ours what is your test scenario the testcafe add their tags before my so ie can not run as ie mode what is the current behavior the page rendered as ie in ie what is the expected behavior the page rendered as ie in ie what is your web application and your testcafe test code my html testcafe code import selector from testcafe fixture new fixture page test new test async t await t typetext selector username admin presskey tab typetext selector password admin presskey enter your website url or attach your complete example your complete test code or attach your test files js your complete test report screenshots steps to reproduce go to my website execute this command see the error your environment details testcafe version node js version command line arguments browser name and version platform and version other
| 1
|
155,134
| 12,239,511,118
|
IssuesEvent
|
2020-05-04 21:50:49
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
Panic after provisioning azure cluster with cloud provider set
|
[zube]: To Test
|
Version: Master
K8s: v1.13.5
Steps:
1. Provision an Azure cluster with the cloud provider set up.
Observed a panic a few minutes after provisioning the cluster
```
2019/06/10 23:35:53 [INFO] cluster [c-8j7n7] provisioning: [network] Setting up network plugin: canal
2019/06/10 23:35:53 [INFO] cluster [c-8j7n7] provisioning: [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes
2019/06/10 23:35:53 [INFO] cluster [c-8j7n7] provisioning: [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes
2019/06/10 23:35:53 [INFO] cluster [c-8j7n7] provisioning: [addons] Executing deploy job rke-network-plugin
2019/06/10 23:36:03 [INFO] cluster [c-8j7n7] provisioning: [addons] Setting up kube-dns
2019/06/10 23:36:03 [INFO] cluster [c-8j7n7] provisioning: [addons] Saving ConfigMap for addon rke-kube-dns-addon to Kubernetes
2019/06/10 23:36:03 [INFO] cluster [c-8j7n7] provisioning: [addons] Successfully saved ConfigMap for addon rke-kube-dns-addon to Kubernetes
2019/06/10 23:36:03 [INFO] cluster [c-8j7n7] provisioning: [addons] Executing deploy job rke-kube-dns-addon
W0610 23:36:15.286085 7 reflector.go:270] github.com/rancher/norman/controller/generic_controller.go:175: watch of *v3.Node ended with: too old resource version: 407655 (418924)
2019/06/10 23:36:21 [INFO] cluster [c-8j7n7] provisioning: [addons] kube-dns deployed successfully
2019/06/10 23:36:21 [INFO] cluster [c-8j7n7] provisioning: [dns] DNS provider kube-dns deployed successfully
2019/06/10 23:36:21 [INFO] cluster [c-8j7n7] provisioning: [addons] Setting up Metrics Server
2019/06/10 23:36:21 [INFO] cluster [c-8j7n7] provisioning: [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
2019/06/10 23:36:21 [INFO] cluster [c-8j7n7] provisioning: [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
2019/06/10 23:36:21 [INFO] cluster [c-8j7n7] provisioning: [addons] Executing deploy job rke-metrics-addon
2019/06/10 23:36:31 [INFO] cluster [c-8j7n7] provisioning: [addons] Metrics Server deployed successfully
2019/06/10 23:36:31 [INFO] cluster [c-8j7n7] provisioning: [ingress] Setting up nginx ingress controller
2019/06/10 23:36:31 [INFO] cluster [c-8j7n7] provisioning: [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
2019/06/10 23:36:31 [INFO] cluster [c-8j7n7] provisioning: [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
2019/06/10 23:36:31 [INFO] cluster [c-8j7n7] provisioning: [addons] Executing deploy job rke-ingress-controller
2019/06/10 23:36:41 [INFO] cluster [c-8j7n7] provisioning: [ingress] ingress controller nginx deployed successfully
2019/06/10 23:36:41 [INFO] cluster [c-8j7n7] provisioning: [addons] Setting up user addons
2019/06/10 23:36:41 [INFO] cluster [c-8j7n7] provisioning: [addons] no user addons defined
2019/06/10 23:36:41 [INFO] cluster [c-8j7n7] provisioning: Finished building Kubernetes cluster successfully
2019/06/10 23:36:42 [INFO] kontainerdriver rancherkubernetesengine stopped
2019/06/10 23:36:47 [INFO] kontainerdriver rancherkubernetesengine listening on address 127.0.0.1:10097
2019/06/10 23:36:47 [INFO] kontainerdriver rancherkubernetesengine stopped
2019/06/10 23:36:47 [INFO] Provisioned cluster [c-8j7n7]
I0610 23:36:47.289254 7 http.go:110] HTTP2 has been explicitly disabled
2019/06/10 23:36:47 [INFO] Creating user for principal system://c-8j7n7
2019/06/10 23:36:47 [INFO] Creating globalRoleBindings for u-ci5cs2djms
2019/06/10 23:36:47 [INFO] Creating new GlobalRoleBinding for GlobalRoleBinding grb-wg9wd
2019/06/10 23:36:47 [INFO] [mgmt-auth-grb-controller] Creating clusterRoleBinding for globalRoleBinding grb-wg9wd for user u-ci5cs2djms with role cattle-globalrole-user
2019/06/10 23:36:47 [INFO] Registering project network policy
2019/06/10 23:36:47 [INFO] registering podsecuritypolicy cluster handler for cluster c-8j7n7
2019/06/10 23:36:47 [INFO] registering podsecuritypolicy project handler for cluster c-8j7n7
2019/06/10 23:36:47 [INFO] registering podsecuritypolicy namespace handler for cluster c-8j7n7
2019/06/10 23:36:47 [INFO] registering podsecuritypolicy serviceaccount handler for cluster c-8j7n7
2019/06/10 23:36:47 [INFO] registering podsecuritypolicy template handler for cluster c-8j7n7
2019/06/10 23:36:47 [INFO] Creating token for user u-ci5cs2djms
2019/06/10 23:36:47 [INFO] [mgmt-auth-crtb-controller] Creating clusterRoleBinding for membership in cluster c-8j7n7 for subject u-ci5cs2djms
2019/06/10 23:36:47 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject u-ci5cs2djms with role cluster-owner in namespace
2019/06/10 23:36:47 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject u-ci5cs2djms with role cluster-owner in namespace
2019/06/10 23:36:47 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject u-ci5cs2djms with role cluster-owner in namespace
2019/06/10 23:36:47 [INFO] Registering monitoring for cluster "c-8j7n7"
2019/06/10 23:36:47 [INFO] Registering istio for cluster "c-8j7n7"
2019/06/10 23:36:47 [INFO] Creating CRD clusterauthtokens.cluster.cattle.io
2019/06/10 23:37:02 [ERROR] cluster.management.cattle.io "c-8j7n7" not found
panic: creating CRD store etcdserver: request timed out
goroutine 10803641 [running]:
github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd.(*Factory).BatchCreateCRDs.func1(0xc00daa9340, 0xc00daa9360, 0x2, 0x2, 0xc0044345a0, 0x6ae85c0, 0x420a300, 0xc022f849c0, 0x3abb40c, 0x4)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd/init.go:65 +0x2d9
created by github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd.(*Factory).BatchCreateCRDs
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd/init.go:50 +0xce
```
The setup had 4 other clusters.
|
1.0
|
Panic after provisioning azure cluster with cloud provider set - Version: Master
K8s: v1.13.5
Steps:
1. Provision an Azure cluster with the cloud provider set up.
Observed a panic a few minutes after provisioning the cluster
```
2019/06/10 23:35:53 [INFO] cluster [c-8j7n7] provisioning: [network] Setting up network plugin: canal
2019/06/10 23:35:53 [INFO] cluster [c-8j7n7] provisioning: [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes
2019/06/10 23:35:53 [INFO] cluster [c-8j7n7] provisioning: [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes
2019/06/10 23:35:53 [INFO] cluster [c-8j7n7] provisioning: [addons] Executing deploy job rke-network-plugin
2019/06/10 23:36:03 [INFO] cluster [c-8j7n7] provisioning: [addons] Setting up kube-dns
2019/06/10 23:36:03 [INFO] cluster [c-8j7n7] provisioning: [addons] Saving ConfigMap for addon rke-kube-dns-addon to Kubernetes
2019/06/10 23:36:03 [INFO] cluster [c-8j7n7] provisioning: [addons] Successfully saved ConfigMap for addon rke-kube-dns-addon to Kubernetes
2019/06/10 23:36:03 [INFO] cluster [c-8j7n7] provisioning: [addons] Executing deploy job rke-kube-dns-addon
W0610 23:36:15.286085 7 reflector.go:270] github.com/rancher/norman/controller/generic_controller.go:175: watch of *v3.Node ended with: too old resource version: 407655 (418924)
2019/06/10 23:36:21 [INFO] cluster [c-8j7n7] provisioning: [addons] kube-dns deployed successfully
2019/06/10 23:36:21 [INFO] cluster [c-8j7n7] provisioning: [dns] DNS provider kube-dns deployed successfully
2019/06/10 23:36:21 [INFO] cluster [c-8j7n7] provisioning: [addons] Setting up Metrics Server
2019/06/10 23:36:21 [INFO] cluster [c-8j7n7] provisioning: [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
2019/06/10 23:36:21 [INFO] cluster [c-8j7n7] provisioning: [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
2019/06/10 23:36:21 [INFO] cluster [c-8j7n7] provisioning: [addons] Executing deploy job rke-metrics-addon
2019/06/10 23:36:31 [INFO] cluster [c-8j7n7] provisioning: [addons] Metrics Server deployed successfully
2019/06/10 23:36:31 [INFO] cluster [c-8j7n7] provisioning: [ingress] Setting up nginx ingress controller
2019/06/10 23:36:31 [INFO] cluster [c-8j7n7] provisioning: [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
2019/06/10 23:36:31 [INFO] cluster [c-8j7n7] provisioning: [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
2019/06/10 23:36:31 [INFO] cluster [c-8j7n7] provisioning: [addons] Executing deploy job rke-ingress-controller
2019/06/10 23:36:41 [INFO] cluster [c-8j7n7] provisioning: [ingress] ingress controller nginx deployed successfully
2019/06/10 23:36:41 [INFO] cluster [c-8j7n7] provisioning: [addons] Setting up user addons
2019/06/10 23:36:41 [INFO] cluster [c-8j7n7] provisioning: [addons] no user addons defined
2019/06/10 23:36:41 [INFO] cluster [c-8j7n7] provisioning: Finished building Kubernetes cluster successfully
2019/06/10 23:36:42 [INFO] kontainerdriver rancherkubernetesengine stopped
2019/06/10 23:36:47 [INFO] kontainerdriver rancherkubernetesengine listening on address 127.0.0.1:10097
2019/06/10 23:36:47 [INFO] kontainerdriver rancherkubernetesengine stopped
2019/06/10 23:36:47 [INFO] Provisioned cluster [c-8j7n7]
I0610 23:36:47.289254 7 http.go:110] HTTP2 has been explicitly disabled
2019/06/10 23:36:47 [INFO] Creating user for principal system://c-8j7n7
2019/06/10 23:36:47 [INFO] Creating globalRoleBindings for u-ci5cs2djms
2019/06/10 23:36:47 [INFO] Creating new GlobalRoleBinding for GlobalRoleBinding grb-wg9wd
2019/06/10 23:36:47 [INFO] [mgmt-auth-grb-controller] Creating clusterRoleBinding for globalRoleBinding grb-wg9wd for user u-ci5cs2djms with role cattle-globalrole-user
2019/06/10 23:36:47 [INFO] Registering project network policy
2019/06/10 23:36:47 [INFO] registering podsecuritypolicy cluster handler for cluster c-8j7n7
2019/06/10 23:36:47 [INFO] registering podsecuritypolicy project handler for cluster c-8j7n7
2019/06/10 23:36:47 [INFO] registering podsecuritypolicy namespace handler for cluster c-8j7n7
2019/06/10 23:36:47 [INFO] registering podsecuritypolicy serviceaccount handler for cluster c-8j7n7
2019/06/10 23:36:47 [INFO] registering podsecuritypolicy template handler for cluster c-8j7n7
2019/06/10 23:36:47 [INFO] Creating token for user u-ci5cs2djms
2019/06/10 23:36:47 [INFO] [mgmt-auth-crtb-controller] Creating clusterRoleBinding for membership in cluster c-8j7n7 for subject u-ci5cs2djms
2019/06/10 23:36:47 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject u-ci5cs2djms with role cluster-owner in namespace
2019/06/10 23:36:47 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject u-ci5cs2djms with role cluster-owner in namespace
2019/06/10 23:36:47 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject u-ci5cs2djms with role cluster-owner in namespace
2019/06/10 23:36:47 [INFO] Registering monitoring for cluster "c-8j7n7"
2019/06/10 23:36:47 [INFO] Registering istio for cluster "c-8j7n7"
2019/06/10 23:36:47 [INFO] Creating CRD clusterauthtokens.cluster.cattle.io
2019/06/10 23:37:02 [ERROR] cluster.management.cattle.io "c-8j7n7" not found
panic: creating CRD store etcdserver: request timed out
goroutine 10803641 [running]:
github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd.(*Factory).BatchCreateCRDs.func1(0xc00daa9340, 0xc00daa9360, 0x2, 0x2, 0xc0044345a0, 0x6ae85c0, 0x420a300, 0xc022f849c0, 0x3abb40c, 0x4)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd/init.go:65 +0x2d9
created by github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd.(*Factory).BatchCreateCRDs
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd/init.go:50 +0xce
```
The setup had 4 other clusters.
|
non_process
|
panic after provisioning azure cluster with cloud provider set version master steps provision a azure cluster with cloud provider setup observed a panic few minutes after provisioning the cluster cluster provisioning setting up network plugin canal cluster provisioning saving configmap for addon rke network plugin to kubernetes cluster provisioning successfully saved configmap for addon rke network plugin to kubernetes cluster provisioning executing deploy job rke network plugin cluster provisioning setting up kube dns cluster provisioning saving configmap for addon rke kube dns addon to kubernetes cluster provisioning successfully saved configmap for addon rke kube dns addon to kubernetes cluster provisioning executing deploy job rke kube dns addon reflector go github com rancher norman controller generic controller go watch of node ended with too old resource version cluster provisioning kube dns deployed successfully cluster provisioning dns provider kube dns deployed successfully cluster provisioning setting up metrics server cluster provisioning saving configmap for addon rke metrics addon to kubernetes cluster provisioning successfully saved configmap for addon rke metrics addon to kubernetes cluster provisioning executing deploy job rke metrics addon cluster provisioning metrics server deployed successfully cluster provisioning setting up nginx ingress controller cluster provisioning saving configmap for addon rke ingress controller to kubernetes cluster provisioning successfully saved configmap for addon rke ingress controller to kubernetes cluster provisioning executing deploy job rke ingress controller cluster provisioning ingress controller nginx deployed successfully cluster provisioning setting up user addons cluster provisioning no user addons defined cluster provisioning finished building kubernetes cluster successfully kontainerdriver rancherkubernetesengine stopped kontainerdriver rancherkubernetesengine listening on address kontainerdriver 
rancherkubernetesengine stopped provisioned cluster http go has been explicitly disabled creating user for principal system c creating globalrolebindings for u creating new globalrolebinding for globalrolebinding grb creating clusterrolebinding for globalrolebinding grb for user u with role cattle globalrole user registering project network policy registering podsecuritypolicy cluster handler for cluster c registering podsecuritypolicy project handler for cluster c registering podsecuritypolicy namespace handler for cluster c registering podsecuritypolicy serviceaccount handler for cluster c registering podsecuritypolicy template handler for cluster c creating token for user u creating clusterrolebinding for membership in cluster c for subject u creating rolebinding for subject u with role cluster owner in namespace creating rolebinding for subject u with role cluster owner in namespace creating rolebinding for subject u with role cluster owner in namespace registering monitoring for cluster c registering istio for cluster c creating crd clusterauthtokens cluster cattle io cluster management cattle io c not found panic creating crd store etcdserver request timed out goroutine github com rancher rancher vendor github com rancher norman store crd factory batchcreatecrds go src github com rancher rancher vendor github com rancher norman store crd init go created by github com rancher rancher vendor github com rancher norman store crd factory batchcreatecrds go src github com rancher rancher vendor github com rancher norman store crd init go the setup had other clusters
| 0
|
21,701
| 30,195,876,494
|
IssuesEvent
|
2023-07-04 20:57:22
|
xpsi-group/xpsi
|
https://api.github.com/repos/xpsi-group/xpsi
|
opened
|
Tight layout not working with corner plots
|
postprocessing
|
The tight layout is not working with large corner plots for the python3 version of X-PSI. The reason for that is not clear to me. This means that for large corner plots vertical white spaces appear between the subplots unless rotating the x-axis tick labels by 90 degrees using `axis_tick_x_rotation` option. An example of this is the first corner plot in the Post-processing tutorial: https://xpsi-group.github.io/xpsi/Post-processing.html. This did not happen with the older GetDist and X-PSI versions used with Python2.
I have tried to fix the problem by adding `plt.tight_layout()` after calling the CornerPlotter, but that does not seem to help (it just gives a warning that a tight layout is not possible). In addition, passing `tight_layout=True` keyword argument for GetDist does not affect either. In this case, GetDist documentation (https://getdist.readthedocs.io/en/latest/plots.html) suggests trying `constrained_layout=True `argument instead. That seems to remove vertical white spaces, but unfortunately it creates horizontal white spaces between the subplots instead.
|
1.0
|
Tight layout not working with corner plots - The tight layout is not working with large corner plots for the python3 version of X-PSI. The reason for that is not clear to me. This means that for large corner plots vertical white spaces appear between the subplots unless rotating the x-axis tick labels by 90 degrees using `axis_tick_x_rotation` option. An example of this is the first corner plot in the Post-processing tutorial: https://xpsi-group.github.io/xpsi/Post-processing.html. This did not happen with the older GetDist and X-PSI versions used with Python2.
I have tried to fix the problem by adding `plt.tight_layout()` after calling the CornerPlotter, but that does not seem to help (it just gives a warning that a tight layout is not possible). In addition, passing `tight_layout=True` keyword argument for GetDist does not affect either. In this case, GetDist documentation (https://getdist.readthedocs.io/en/latest/plots.html) suggests trying `constrained_layout=True `argument instead. That seems to remove vertical white spaces, but unfortunately it creates horizontal white spaces between the subplots instead.
|
process
|
tight layout not working with corner plots the tight layout is not working with large corner plots for the version of x psi the reason for that is not clear to me this means that for large corner plots vertical white spaces appear between the subplots unless rotating the x axis tick labels by degrees using axis tick x rotation option an example of this is the first corner plot in the post processing tutorial this did not happen with the older getdist and x psi versions used with i have tried to fix the problem by adding plt tight layout after calling the cornerplotter but that does not seem to help it just gives a warning that a tight layout is not possible in addition passing tight layout true keyword argument for getdist does not affect either in this case getdist documentation suggests trying constrained layout true argument instead that seems to remove vertical white spaces but unfortunately it creates horizontal white spaces between the subplots instead
| 1
|
9,629
| 12,576,526,124
|
IssuesEvent
|
2020-06-09 08:02:49
|
kubeflow/manifests
|
https://api.github.com/repos/kubeflow/manifests
|
closed
|
Remove Ambassador annotations from Jupyter web app
|
area/jupyter kind/feature kind/process lifecycle/stale priority/p1
|
We can get rid of the Ambassador annotations on the jupyter web app
https://github.com/kubeflow/manifests/blob/67eabbfd907dbb212b2fa39baeddb3a4b7e9e743/jupyter/jupyter-web-app/base/service.yaml#L5
because we no longer use Ambassador to route traffic.
Ref: kubeflow/kubeflow#3833
|
1.0
|
Remove Ambassador annotations from Jupyter web app - We can get rid of the Ambassador annotations on the jupyter web app
https://github.com/kubeflow/manifests/blob/67eabbfd907dbb212b2fa39baeddb3a4b7e9e743/jupyter/jupyter-web-app/base/service.yaml#L5
because we no longer use Ambassador to route traffic.
Ref: kubeflow/kubeflow#3833
|
process
|
remove ambassador annotations from jupyter web app we can get rid of the ambassador annotations on the jupyter web app because we no longer use ambassador to route traffic ref kubeflow kubeflow
| 1
|
775,551
| 27,234,814,780
|
IssuesEvent
|
2023-02-21 15:36:52
|
ascheid/itsg33-pbmm-issue-gen
|
https://api.github.com/repos/ascheid/itsg33-pbmm-issue-gen
|
closed
|
CM-6 CONFIGURATION SETTINGS
|
Priority: P1
|
(A) The organization establishes and documents configuration settings for information technology products employed within the information system using [Assignment: organization-defined security configuration checklists] that reflect the most restrictive mode consistent with operational requirements.
(B) The organization implements the configuration settings.
(C) The organization identifies, documents, and approves any deviations from established configuration settings for [Assignment: organization-defined information system components] based on [Assignment: organization-defined operational requirements].
(D) The organization monitors and controls changes to the configuration settings in accordance with organizational policies and procedures.
|
1.0
|
CM-6 CONFIGURATION SETTINGS - (A) The organization establishes and documents configuration settings for information technology products employed within the information system using [Assignment: organization-defined security configuration checklists] that reflect the most restrictive mode consistent with operational requirements.
(B) The organization implements the configuration settings.
(C) The organization identifies, documents, and approves any deviations from established configuration settings for [Assignment: organization-defined information system components] based on [Assignment: organization-defined operational requirements].
(D) The organization monitors and controls changes to the configuration settings in accordance with organizational policies and procedures.
|
non_process
|
cm configuration settings a the organization establishes and documents configuration settings for information technology products employed within the information system using that reflect the most restrictive mode consistent with operational requirements b the organization implements the configuration settings c the organization identifies documents and approves any deviations from established configuration settings for based on d the organization monitors and controls changes to the configuration settings in accordance with organizational policies and procedures
| 0
|
13,561
| 16,103,845,961
|
IssuesEvent
|
2021-04-27 12:50:26
|
thesofproject/sof
|
https://api.github.com/repos/thesofproject/sof
|
opened
|
[BUG] Post Processing XRUN when playing multiple times on same topology
|
Post Processing TGL bug
|
**Describe the bug**
In any Post Processing test when trying to play 2 times or more on one topology (new streams are created, topology stays intact) SteamStall caused by XRUN occurs.
**Topology**
```
+------------------------------------+
+------+ | +---+ +-------+ +---+ +-------+ |
| Host |----->|Buf|->|GenProc|->|Buf|->|SSP Dai|---+
+------+ | +---+ +-------+ +---+ +-------+ | |
+------------------------------------+ |
|
+------------------+ |
+------+ | +---+ +-------+ | |
| Host |<-----|Buf|<-|SSP Dai|<--------------------+
+------+ | +---+ +-------+ |
+------------------+
```
**To Reproduce**
Any python test from groups:
25_00_TestGenericProcessorSimplePlb
25_02_TestGenericProcessorCompMultiCorePlb
25_06_TestGenericProcessorPplMultiCorePlb
with parameters:
--playback_iterations=2
**Reproduction Rate**
100%
**Environment**
1) Name of the platform(s) on which the bug is observed.
* Platform: TGL B0 RVP
2) Firmware branch name and commit
* Branch: main
* Hash: between eb459078f3023f6762428f38976547576eeae59f (bug free) and 2e6cafa02fca352c9f4a085d3c44925e61a19253 (bugged)
3) Python tests branch name and commit
* Branch: master
* Hash: 825533f4e0d3ff5e2425abb50e9785916d7d1044
**Logs**
[25_00_TestGenericProcessorSimplePlb_StreamStall.zip](https://github.com/thesofproject/sof/files/6384533/25_00_TestGenericProcessorSimplePlb_StreamStall.zip)
|
1.0
|
[BUG] Post Processing XRUN when playing multiple times on same topology - **Describe the bug**
In any Post Processing test when trying to play 2 times or more on one topology (new streams are created, topology stays intact) SteamStall caused by XRUN occurs.
**Topology**
```
+------------------------------------+
+------+ | +---+ +-------+ +---+ +-------+ |
| Host |----->|Buf|->|GenProc|->|Buf|->|SSP Dai|---+
+------+ | +---+ +-------+ +---+ +-------+ | |
+------------------------------------+ |
|
+------------------+ |
+------+ | +---+ +-------+ | |
| Host |<-----|Buf|<-|SSP Dai|<--------------------+
+------+ | +---+ +-------+ |
+------------------+
```
**To Reproduce**
Any python test from groups:
25_00_TestGenericProcessorSimplePlb
25_02_TestGenericProcessorCompMultiCorePlb
25_06_TestGenericProcessorPplMultiCorePlb
with parameters:
--playback_iterations=2
**Reproduction Rate**
100%
**Environment**
1) Name of the platform(s) on which the bug is observed.
* Platform: TGL B0 RVP
2) Firmware branch name and commit
* Branch: main
* Hash: between eb459078f3023f6762428f38976547576eeae59f (bug free) and 2e6cafa02fca352c9f4a085d3c44925e61a19253 (bugged)
3) Python tests branch name and commit
* Branch: master
* Hash: 825533f4e0d3ff5e2425abb50e9785916d7d1044
**Logs**
[25_00_TestGenericProcessorSimplePlb_StreamStall.zip](https://github.com/thesofproject/sof/files/6384533/25_00_TestGenericProcessorSimplePlb_StreamStall.zip)
|
process
|
post processing xrun when playing multiple times on same topology describe the bug in any post processing test when trying to play times or more on one topology new streams are created topology stays intact steamstall caused by xrun occurs topology host buf genproc buf ssp dai host buf ssp dai to reproduce any python test from groups testgenericprocessorsimpleplb testgenericprocessorcompmulticoreplb testgenericprocessorpplmulticoreplb with parameters playback iterations reproduction rate environment name of the platform s on which the bug is observed platform tgl rvp firmware branch name and commit branch main hash between bug free and bugged python tests branch name and commit branch master hash logs
| 1
|
205,885
| 16,012,852,668
|
IssuesEvent
|
2021-04-20 12:52:03
|
fluid-cloudnative/fluid
|
https://api.github.com/repos/fluid-cloudnative/fluid
|
closed
|
sdk 是否能够提供易用的查询资源状态的接口?
|
documentation features
|
**What feature you'd like to add:**
sdk 能够提供一个易用的接口,通过这个接口可以查询已有的不同资源的相关属性。例如 kubectl get ${resource} 时得到的信息。
**Why is this feature needed:**
当使用 go sdk 创建资源后,我需要知道资源是否处于可用状态。例如创建 dataset 后,我需要确定 phase 为 bound 时才能使用。但现有的 Get 接口比较抽象,希望能够针对不同资源提供易用的 Get。
|
1.0
|
sdk 是否能够提供易用的查询资源状态的接口? - **What feature you'd like to add:**
sdk 能够提供一个易用的接口,通过这个接口可以查询已有的不同资源的相关属性。例如 kubectl get ${resource} 时得到的信息。
**Why is this feature needed:**
当使用 go sdk 创建资源后,我需要知道资源是否处于可用状态。例如创建 dataset 后,我需要确定 phase 为 bound 时才能使用。但现有的 Get 接口比较抽象,希望能够针对不同资源提供易用的 Get。
|
non_process
|
sdk 是否能够提供易用的查询资源状态的接口? what feature you d like to add sdk 能够提供一个易用的接口,通过这个接口可以查询已有的不同资源的相关属性。例如 kubectl get resource 时得到的信息。 why is this feature needed 当使用 go sdk 创建资源后,我需要知道资源是否处于可用状态。例如创建 dataset 后,我需要确定 phase 为 bound 时才能使用。但现有的 get 接口比较抽象,希望能够针对不同资源提供易用的 get。
| 0
|
131,277
| 18,263,529,785
|
IssuesEvent
|
2021-10-04 04:40:02
|
phetsims/fourier-making-waves
|
https://api.github.com/repos/phetsims/fourier-making-waves
|
closed
|
Waveform changes to 'custom' when you open any amplitude keypad.
|
design:general priority:2-high
|
For phetsims/qa#711, and related to https://github.com/phetsims/fourier-making-waves/issues/200 ...
As soon as you open the keypad for any amplitude, the Waveform combo box switches to 'custom'. You don't actually have to change the value.
For example:
1. Go to the Discrete screen
2. Select 'wave packet' from Waveform combo box
3. Click in the A<sub>1</sub> NumberDisplay
4. Note that the Waveform combo box immediately changes to 'custom'
It would probably be more "polite" to only change to "custom" if the user enters a (new?) value for the amplitude. But that's a heck of a lot more work, which is why I problably thought this was good enough. Or maybe I didn't think of this case. Anway...
@arouinfar Do you think we need to change anything here? I'm OK leaving this as is, if you are. But I wanted to point this out, so that we're making a conscious decision.
|
1.0
|
Waveform changes to 'custom' when you open any amplitude keypad. - For phetsims/qa#711, and related to https://github.com/phetsims/fourier-making-waves/issues/200 ...
As soon as you open the keypad for any amplitude, the Waveform combo box switches to 'custom'. You don't actually have to change the value.
For example:
1. Go to the Discrete screen
2. Select 'wave packet' from Waveform combo box
3. Click in the A<sub>1</sub> NumberDisplay
4. Note that the Waveform combo box immediately changes to 'custom'
It would probably be more "polite" to only change to "custom" if the user enters a (new?) value for the amplitude. But that's a heck of a lot more work, which is why I problably thought this was good enough. Or maybe I didn't think of this case. Anway...
@arouinfar Do you think we need to change anything here? I'm OK leaving this as is, if you are. But I wanted to point this out, so that we're making a conscious decision.
|
non_process
|
waveform changes to custom when you open any amplitude keypad for phetsims qa and related to as soon as you open the keypad for any amplitude the waveform combo box switches to custom you don t actually have to change the value for example go to the discrete screen select wave packet from waveform combo box click in the a numberdisplay note that the waveform combo box immediately changes to custom it would probably be more polite to only change to custom if the user enters a new value for the amplitude but that s a heck of a lot more work which is why i problably thought this was good enough or maybe i didn t think of this case anway arouinfar do you think we need to change anything here i m ok leaving this as is if you are but i wanted to point this out so that we re making a conscious decision
| 0
|
7,480
| 10,571,981,776
|
IssuesEvent
|
2019-10-07 08:33:50
|
Hurence/logisland
|
https://api.github.com/repos/Hurence/logisland
|
opened
|
add computer vision module support with OpenCV
|
processor
|
the computer vision module provide users with an API for image records manipulation
and computer vision algorithm :
https://www.learnopencv.com/histogram-of-oriented-gradients/
|
1.0
|
add computer vision module support with OpenCV - the computer vision module provide users with an API for image records manipulation
and computer vision algorithm :
https://www.learnopencv.com/histogram-of-oriented-gradients/
|
process
|
add computer vision module support with opencv the computer vision module provide users with an api for image records manipulation and computer vision algorithm
| 1
|
17,334
| 12,302,821,320
|
IssuesEvent
|
2020-05-11 17:37:27
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
reopened
|
TestWrappers get built to wrong location
|
area-Infrastructure-coreclr untriaged
|
After a coreclr test build, the following directories exist (Windows, x64, Checked):
```
f:\gh\runtime2\artifacts\tests>dir /b f:\gh\runtime2\artifacts\tests\coreclr\Windows_NT.x64.Checked\TestWrappers\JIT.SIMD
JIT.SIMD.XUnitWrapper.cs
JIT.SIMD.XUnitWrapper.csproj
f:\gh\runtime2\artifacts\tests>dir /b f:\gh\runtime2\artifacts\tests\artifacts\tests\coreclr\Windows_NT.x64.Checked\TestWrappers\JIT.SIMD\JIT.SIMD.XUnitWrapper
JIT.SIMD.XUnitWrapper.AssemblyInfo.cs
JIT.SIMD.XUnitWrapper.AssemblyInfoInputs.cache
JIT.SIMD.XUnitWrapper.assets.cache
JIT.SIMD.XUnitWrapper.csproj.CopyComplete
JIT.SIMD.XUnitWrapper.csproj.FileListAbsolute.txt
JIT.SIMD.XUnitWrapper.csproj.nuget.cache
JIT.SIMD.XUnitWrapper.csproj.nuget.dgspec.json
JIT.SIMD.XUnitWrapper.csproj.nuget.g.props
JIT.SIMD.XUnitWrapper.csproj.nuget.g.targets
JIT.SIMD.XUnitWrapper.dll
project.assets.json
```
The built version of the wrappers are getting the wrong directory.
@dotnet/runtime-infrastructure
|
1.0
|
TestWrappers get built to wrong location - After a coreclr test build, the following directories exist (Windows, x64, Checked):
```
f:\gh\runtime2\artifacts\tests>dir /b f:\gh\runtime2\artifacts\tests\coreclr\Windows_NT.x64.Checked\TestWrappers\JIT.SIMD
JIT.SIMD.XUnitWrapper.cs
JIT.SIMD.XUnitWrapper.csproj
f:\gh\runtime2\artifacts\tests>dir /b f:\gh\runtime2\artifacts\tests\artifacts\tests\coreclr\Windows_NT.x64.Checked\TestWrappers\JIT.SIMD\JIT.SIMD.XUnitWrapper
JIT.SIMD.XUnitWrapper.AssemblyInfo.cs
JIT.SIMD.XUnitWrapper.AssemblyInfoInputs.cache
JIT.SIMD.XUnitWrapper.assets.cache
JIT.SIMD.XUnitWrapper.csproj.CopyComplete
JIT.SIMD.XUnitWrapper.csproj.FileListAbsolute.txt
JIT.SIMD.XUnitWrapper.csproj.nuget.cache
JIT.SIMD.XUnitWrapper.csproj.nuget.dgspec.json
JIT.SIMD.XUnitWrapper.csproj.nuget.g.props
JIT.SIMD.XUnitWrapper.csproj.nuget.g.targets
JIT.SIMD.XUnitWrapper.dll
project.assets.json
```
The built version of the wrappers are getting the wrong directory.
@dotnet/runtime-infrastructure
|
non_process
|
testwrappers get built to wrong location after a coreclr test build the following directories exist windows checked f gh artifacts tests dir b f gh artifacts tests coreclr windows nt checked testwrappers jit simd jit simd xunitwrapper cs jit simd xunitwrapper csproj f gh artifacts tests dir b f gh artifacts tests artifacts tests coreclr windows nt checked testwrappers jit simd jit simd xunitwrapper jit simd xunitwrapper assemblyinfo cs jit simd xunitwrapper assemblyinfoinputs cache jit simd xunitwrapper assets cache jit simd xunitwrapper csproj copycomplete jit simd xunitwrapper csproj filelistabsolute txt jit simd xunitwrapper csproj nuget cache jit simd xunitwrapper csproj nuget dgspec json jit simd xunitwrapper csproj nuget g props jit simd xunitwrapper csproj nuget g targets jit simd xunitwrapper dll project assets json the built version of the wrappers are getting the wrong directory dotnet runtime infrastructure
| 0
|
7,128
| 10,276,342,225
|
IssuesEvent
|
2019-08-24 16:28:06
|
xethya/framework
|
https://api.github.com/repos/xethya/framework
|
opened
|
[publish process] Publish packages as soon as they're merged into master
|
publish process
|
We could use something like this:
https://github.com/counterfactual/monorepo/blob/master/.circleci/config.yml#L116-L136
https://github.com/counterfactual/monorepo/blob/master/package.json#L24
|
1.0
|
[publish process] Publish packages as soon as they're merged into master - We could use something like this:
https://github.com/counterfactual/monorepo/blob/master/.circleci/config.yml#L116-L136
https://github.com/counterfactual/monorepo/blob/master/package.json#L24
|
process
|
publish packages as soon as they re merged into master we could use something like this
| 1
|
71,322
| 3,355,475,614
|
IssuesEvent
|
2015-11-18 16:34:05
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
reopened
|
Stair-stepping in pods going from Pending to Running
|
priority/P1 team/control-plane team/CSI
|
We'd like to get this to a more reproducable state with e2e-generated load and some consensus on what a "correct" cluster looks like. In the meantime, here's what we've been seeing so far.
Provision a cluster in AWS using terraform + ansible
- used https://github.com/Samsung-AG/kraken with terraform.tfvars
```
node_count = 100
aws_apiserver_type = "m4.2xlarge"
aws_master_type = "m4.2xlarge"
aws_etcd_type = "m4.2xlarge"
aws_special_node_type = "m3.xlarge"
aws_node_type = "m3.medium"
kubernetes_binaries_uri = "https://s3-us-west-2.amazonaws.com/sundry-automata/hyperkube/v1.1.0-alpha.1.1007%2Bcad5f03311afff-dirty"
```
- master is nginx + controller-manager + scheduler
- apiserver talks directly to etcd node, no local etcd proxy used
- special node is reserved for expensive services via label selectors
Setup a "pause" replication controller that creates gcr.io/google_containers/pause:go pods
- used https://github.com/Samsung-AG/kraken-services/blob/master/pause/controller.yaml
Manually scale the number of pause pods to 2000.
```kubectl scale --replicas=10 rc pause```
Replication controller creates all pod resources in Pending by the time the `kubectl` call has returned
Total time for all pods to go to Running is ~6m. Initial burst of >1000 pods goes from Pending to Running within the first ~2m.
Then periodic stairsteps of total number of pods in Running as the scheduler binds pods in chunks
- using prometheus to gather metrics from all kubernetes components, and etcd
- using promdash dashboard: https://github.com/Samsung-AG/kraken-services/blob/master/prometheus/build/promdash/pod-scaling.json
- no observable exhaustion of resources, just period peaks which correspond to pods moving from Pending to Running
There appears to be a ~30s interval during which the scheduler goes quiet and then kicks back to life
The thinking is... there may be some rate limiters and magic constants tuned to hold back a thundering herd. We'd like to remove those and see if we can find where that moves the raw resource bottleneck (eg: compute, network, memory, etc).
Verified with no other services running via kubectl polling
```
$ k get pods | grep -v pause
NAME READY STATUS RESTARTS AGE
$ k get services
NAME LABELS SELECTOR IP(S) PORT(S)
kubernetes component=apiserver,provider=kubernetes <none> 10.100.0.1 443/TCP
```
Scale up to 2000 pods then poll every 10s
```
#!/bin/bash
date -u;
time kubectl scale --replicas=2000 rc pause;
date -u;
while true; do
echo $(date -u) : $(kubectl get pods --selector=k8s-app=pause | grep -ic running);
sleep 10;
done
date -u
```
Sample output
```
$ ./scale-up.sh
Fri Sep 18 22:05:37 UTC 2015
scaled
real 0m5.213s
user 0m0.019s
sys 0m0.009s
Fri Sep 18 22:05:42 UTC 2015
Fri Sep 18 22:05:42 UTC 2015 : 302
Fri Sep 18 22:05:53 UTC 2015 : 1117
Fri Sep 18 22:06:04 UTC 2015 : 1118
Fri Sep 18 22:06:14 UTC 2015 : 1118
Fri Sep 18 22:06:25 UTC 2015 : 1118
Fri Sep 18 22:06:35 UTC 2015 : 1118
Fri Sep 18 22:06:46 UTC 2015 : 1118
Fri Sep 18 22:06:57 UTC 2015 : 1275
Fri Sep 18 22:07:07 UTC 2015 : 1275
Fri Sep 18 22:07:18 UTC 2015 : 1275
Fri Sep 18 22:07:29 UTC 2015 : 1275
Fri Sep 18 22:07:39 UTC 2015 : 1275
Fri Sep 18 22:07:50 UTC 2015 : 1431
Fri Sep 18 22:08:01 UTC 2015 : 1431
Fri Sep 18 22:08:11 UTC 2015 : 1431
Fri Sep 18 22:08:22 UTC 2015 : 1431
Fri Sep 18 22:08:33 UTC 2015 : 1579
Fri Sep 18 22:08:43 UTC 2015 : 1588
Fri Sep 18 22:08:54 UTC 2015 : 1588
Fri Sep 18 22:09:05 UTC 2015 : 1588
Fri Sep 18 22:09:15 UTC 2015 : 1588
Fri Sep 18 22:09:26 UTC 2015 : 1738
Fri Sep 18 22:09:39 UTC 2015 : 1738
Fri Sep 18 22:09:50 UTC 2015 : 1738
Fri Sep 18 22:10:01 UTC 2015 : 1738
Fri Sep 18 22:10:11 UTC 2015 : 1777
Fri Sep 18 22:10:22 UTC 2015 : 1846
Fri Sep 18 22:10:33 UTC 2015 : 1846
Fri Sep 18 22:10:43 UTC 2015 : 1846
Fri Sep 18 22:10:54 UTC 2015 : 1846
Fri Sep 18 22:11:05 UTC 2015 : 1938
Fri Sep 18 22:11:16 UTC 2015 : 1938
Fri Sep 18 22:11:28 UTC 2015 : 1938
Fri Sep 18 22:11:39 UTC 2015 : 1938
Fri Sep 18 22:11:49 UTC 2015 : 1946
Fri Sep 18 22:12:00 UTC 2015 : 2000
```
Sample output from promdash pod-scaling dashboard during a run with `--watch-cache=true` followed by a run with `--watch-cache=false`

|
1.0
|
Stair-stepping in pods going from Pending to Running - We'd like to get this to a more reproducable state with e2e-generated load and some consensus on what a "correct" cluster looks like. In the meantime, here's what we've been seeing so far.
Provision a cluster in AWS using terraform + ansible
- used https://github.com/Samsung-AG/kraken with terraform.tfvars
```
node_count = 100
aws_apiserver_type = "m4.2xlarge"
aws_master_type = "m4.2xlarge"
aws_etcd_type = "m4.2xlarge"
aws_special_node_type = "m3.xlarge"
aws_node_type = "m3.medium"
kubernetes_binaries_uri = "https://s3-us-west-2.amazonaws.com/sundry-automata/hyperkube/v1.1.0-alpha.1.1007%2Bcad5f03311afff-dirty"
```
- master is nginx + controller-manager + scheduler
- apiserver talks directly to etcd node, no local etcd proxy used
- special node is reserved for expensive services via label selectors
Setup a "pause" replication controller that creates gcr.io/google_containers/pause:go pods
- used https://github.com/Samsung-AG/kraken-services/blob/master/pause/controller.yaml
Manually scale the number of pause pods to 2000.
```kubectl scale --replicas=10 rc pause```
Replication controller creates all pod resources in Pending by the time the `kubectl` call has returned
Total time for all pods to go to Running is ~6m. Initial burst of >1000 pods goes from Pending to Running within the first ~2m.
Then periodic stairsteps of total number of pods in Running as the scheduler binds pods in chunks
- using prometheus to gather metrics from all kubernetes components, and etcd
- using promdash dashboard: https://github.com/Samsung-AG/kraken-services/blob/master/prometheus/build/promdash/pod-scaling.json
- no observable exhaustion of resources, just period peaks which correspond to pods moving from Pending to Running
There appears to be a ~30s interval during which the scheduler goes quiet and then kicks back to life
The thinking is... there may be some rate limiters and magic constants tuned to hold back a thundering herd. We'd like to remove those and see if we can find where that moves the raw resource bottleneck (eg: compute, network, memory, etc).
Verified with no other services running via kubectl polling
```
$ k get pods | grep -v pause
NAME READY STATUS RESTARTS AGE
$ k get services
NAME LABELS SELECTOR IP(S) PORT(S)
kubernetes component=apiserver,provider=kubernetes <none> 10.100.0.1 443/TCP
```
Scale up to 2000 pods then poll every 10s
```
#!/bin/bash
date -u;
time kubectl scale --replicas=2000 rc pause;
date -u;
while true; do
echo $(date -u) : $(kubectl get pods --selector=k8s-app=pause | grep -ic running);
sleep 10;
done
date -u
```
Sample output
```
$ ./scale-up.sh
Fri Sep 18 22:05:37 UTC 2015
scaled
real 0m5.213s
user 0m0.019s
sys 0m0.009s
Fri Sep 18 22:05:42 UTC 2015
Fri Sep 18 22:05:42 UTC 2015 : 302
Fri Sep 18 22:05:53 UTC 2015 : 1117
Fri Sep 18 22:06:04 UTC 2015 : 1118
Fri Sep 18 22:06:14 UTC 2015 : 1118
Fri Sep 18 22:06:25 UTC 2015 : 1118
Fri Sep 18 22:06:35 UTC 2015 : 1118
Fri Sep 18 22:06:46 UTC 2015 : 1118
Fri Sep 18 22:06:57 UTC 2015 : 1275
Fri Sep 18 22:07:07 UTC 2015 : 1275
Fri Sep 18 22:07:18 UTC 2015 : 1275
Fri Sep 18 22:07:29 UTC 2015 : 1275
Fri Sep 18 22:07:39 UTC 2015 : 1275
Fri Sep 18 22:07:50 UTC 2015 : 1431
Fri Sep 18 22:08:01 UTC 2015 : 1431
Fri Sep 18 22:08:11 UTC 2015 : 1431
Fri Sep 18 22:08:22 UTC 2015 : 1431
Fri Sep 18 22:08:33 UTC 2015 : 1579
Fri Sep 18 22:08:43 UTC 2015 : 1588
Fri Sep 18 22:08:54 UTC 2015 : 1588
Fri Sep 18 22:09:05 UTC 2015 : 1588
Fri Sep 18 22:09:15 UTC 2015 : 1588
Fri Sep 18 22:09:26 UTC 2015 : 1738
Fri Sep 18 22:09:39 UTC 2015 : 1738
Fri Sep 18 22:09:50 UTC 2015 : 1738
Fri Sep 18 22:10:01 UTC 2015 : 1738
Fri Sep 18 22:10:11 UTC 2015 : 1777
Fri Sep 18 22:10:22 UTC 2015 : 1846
Fri Sep 18 22:10:33 UTC 2015 : 1846
Fri Sep 18 22:10:43 UTC 2015 : 1846
Fri Sep 18 22:10:54 UTC 2015 : 1846
Fri Sep 18 22:11:05 UTC 2015 : 1938
Fri Sep 18 22:11:16 UTC 2015 : 1938
Fri Sep 18 22:11:28 UTC 2015 : 1938
Fri Sep 18 22:11:39 UTC 2015 : 1938
Fri Sep 18 22:11:49 UTC 2015 : 1946
Fri Sep 18 22:12:00 UTC 2015 : 2000
```
Sample output from promdash pod-scaling dashboard during a run with `--watch-cache=true` followed by a run with `--watch-cache=false`

|
non_process
|
stair stepping in pods going from pending to running we d like to get this to a more reproducable state with generated load and some consensus on what a correct cluster looks like in the meantime here s what we ve been seeing so far provision a cluster in aws using terraform ansible used with terraform tfvars node count aws apiserver type aws master type aws etcd type aws special node type xlarge aws node type medium kubernetes binaries uri master is nginx controller manager scheduler apiserver talks directly to etcd node no local etcd proxy used special node is reserved for expensive services via label selectors setup a pause replication controller that creates gcr io google containers pause go pods used manually scale the number of pause pods to kubectl scale replicas rc pause replication controller creates all pod resources in pending by the time the kubectl call has returned total time for all pods to go to running is initial burst of pods goes from pending to running within the first then periodic stairsteps of total number of pods in running as the scheduler binds pods in chunks using prometheus to gather metrics from all kubernetes components and etcd using promdash dashboard no observable exhaustion of resources just period peaks which correspond to pods moving from pending to running there appears to be a interval during which the scheduler goes quiet and then kicks back to life the thinking is there may be some rate limiters and magic constants tuned to hold back a thundering herd we d like to remove those and see if we can find where that moves the raw resource bottleneck eg compute network memory etc verified with no other services running via kubectl polling k get pods grep v pause name ready status restarts age k get services name labels selector ip s port s kubernetes component apiserver provider kubernetes tcp scale up to pods then poll every bin bash date u time kubectl scale replicas rc pause date u while true do echo date u kubectl get pods 
selector app pause grep ic running sleep done date u sample output scale up sh fri sep utc scaled real user sys fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc fri sep utc sample output from promdash pod scaling dashboard during a run with watch cache true followed by a run with watch cache false
| 0
|
20,727
| 27,428,050,013
|
IssuesEvent
|
2023-03-01 22:04:11
|
hashgraph/hedera-json-rpc-relay
|
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
|
closed
|
Add k6 performance tests to release process
|
enhancement P2 process
|
### Problem
We have an initial form of [k6 perf testing](https://github.com/hashgraph/hedera-json-rpc-relay/tree/main/k6).
However, we need to expand its usage and utilize it during release steps
### Solution
1. Confirm the state of k6 tests and make tickets for any improvements that need to be made
2. Run tests to get soft assessments of load support across the applications and the performance of different methods
3. Run tests in 0.18 release steps
### Alternatives
_No response_
|
1.0
|
Add k6 performance tests to release process - ### Problem
We have an initial form of [k6 perf testing](https://github.com/hashgraph/hedera-json-rpc-relay/tree/main/k6).
However, we need to expand its usage and utilize it during release steps
### Solution
1. Confirm the state of k6 tests and make tickets for any improvements that need to be made
2. Run tests to get soft assessments of load support across the applications and the performance of different methods
3. Run tests in 0.18 release steps
### Alternatives
_No response_
|
process
|
add performance tests to release process problem we have an initial form of however we need to expand it s usage and utilize it during release steps solution confirm state of tests and make tickets for any improvements need to be made run tests to get soft assessments of load support across the applications and the performance of different methods run tests in release steps alternatives no response
| 1
|
162,438
| 13,888,659,086
|
IssuesEvent
|
2020-10-19 06:41:52
|
appsmithorg/appsmith
|
https://api.github.com/repos/appsmithorg/appsmith
|
closed
|
[Docs] Steps to whitelist DB IP on AWS
|
Documentation Good First Issue hacktoberfest
|
## Documentation Link
https://docs.appsmith.com/core-concepts/connecting-to-databases
## Describe the problem
Today we direct users to whitelist our IP but we don't give them the steps to do so. Doing this on AWS is painful and a page dedicated to this would be helpful!
|
1.0
|
[Docs] Steps to whitelist DB IP on AWS - ## Documentation Link
https://docs.appsmith.com/core-concepts/connecting-to-databases
## Describe the problem
Today we direct users to whitelist our IP but we don't give them the steps to do so. Doing this on AWS is painful and a page dedicated to this would be helpful!
|
non_process
|
steps to whitelist db ip on aws documentation link describe the problem today we direct users to whitelist our ip but we don t give them the steps to do so doing this on aws is painful and a page dedicated to this would be helpful
| 0
|
14,641
| 17,772,087,982
|
IssuesEvent
|
2021-08-30 14:44:10
|
googleapis/python-security-private-ca
|
https://api.github.com/repos/googleapis/python-security-private-ca
|
closed
|
Migrate from master to main branch
|
type: process api: security-privateca
|
As part of the umbrella issue googleapis/google-cloud-python#10579, we need to switch the default branch from `master` to `main`. Also, all occurrences of `master` should be renamed to `main` (except in cases where URLs could be broken, because the migration has not happened yet).
|
1.0
|
Migrate from master to main branch - As part of the umbrella issue googleapis/google-cloud-python#10579, we need to switch the default branch from `master` to `main`. Also, all occurrences of `master` should be renamed to `main` (except in cases where URLs could be broken, because the migration has not happened yet).
|
process
|
migrate from master to main branch as part of the umbrella issue googleapis google cloud python we need to switch the default branch from master to main also all occurrences of master should be renamed to main except in cases where urls could be broken because the migration has not happened yet
| 1
|
46,910
| 10,000,602,905
|
IssuesEvent
|
2019-07-12 13:47:00
|
ebu/benchmarkstt
|
https://api.github.com/repos/ebu/benchmarkstt
|
closed
|
benchmarkstt cli for normalization too forgiving with parameters
|
awaiting-code-review bug highprio
|
```
benchmarkstt-tools normalization -i a.txt --lowe
```
seems to work while it shouldn't (it automagically uses the lowercase normalization)
|
1.0
|
benchmarkstt cli for normalization too forgiving with parameters - ```
benchmarkstt-tools normalization -i a.txt --lowe
```
seems to work while it shouldn't (it automagically uses the lowercase normalization)
|
non_process
|
benchmarkstt cli for normalization too forgiving with parameters benchmarkstt tools normalization i a txt lowe seems to work while it shouldn t it automagically uses the lowercase normalization
| 0
|
17,173
| 22,745,666,085
|
IssuesEvent
|
2022-07-07 08:57:07
|
googleapis/java-automl
|
https://api.github.com/repos/googleapis/java-automl
|
reopened
|
beta.automl.TablesPredictTest: testTablesPredict failed
|
priority: p2 type: process api: automl flakybot: issue flakybot: flaky
|
Note: #517 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: abe82997bb448c5705fedd50f91e7b4047c6ec4a
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/5b18e440-279f-4879-a3b0-5b9e3796df88), [Sponge](http://sponge2/5b18e440-279f-4879-a3b0-5b9e3796df88)
status: failed
<details><summary>Test output</summary><br><pre>com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: 502:Bad Gateway
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:69)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1041)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463)
at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427)
at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:553)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:68)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:739)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:718)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: com.google.api.gax.rpc.AsyncTaskException: Asynchronous task failed
at com.google.api.gax.rpc.ApiExceptions.callAndTranslateApiException(ApiExceptions.java:57)
at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112)
at com.google.cloud.automl.v1beta1.PredictionServiceClient.predict(PredictionServiceClient.java:335)
at beta.automl.TablesPredict.predict(TablesPredict.java:67)
at beta.automl.TablesPredictTest.testTablesPredict(TablesPredictTest.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: 502:Bad Gateway
at io.grpc.Status.asRuntimeException(Status.java:535)
... 17 more
</pre></details>
|
1.0
|
beta.automl.TablesPredictTest: testTablesPredict failed - Note: #517 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: abe82997bb448c5705fedd50f91e7b4047c6ec4a
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/5b18e440-279f-4879-a3b0-5b9e3796df88), [Sponge](http://sponge2/5b18e440-279f-4879-a3b0-5b9e3796df88)
status: failed
<details><summary>Test output</summary><br><pre>com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: 502:Bad Gateway
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:69)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1041)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463)
at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427)
at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:553)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:68)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:739)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:718)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: com.google.api.gax.rpc.AsyncTaskException: Asynchronous task failed
at com.google.api.gax.rpc.ApiExceptions.callAndTranslateApiException(ApiExceptions.java:57)
at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112)
at com.google.cloud.automl.v1beta1.PredictionServiceClient.predict(PredictionServiceClient.java:335)
at beta.automl.TablesPredict.predict(TablesPredict.java:67)
at beta.automl.TablesPredictTest.testTablesPredict(TablesPredictTest.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: 502:Bad Gateway
at io.grpc.Status.asRuntimeException(Status.java:535)
... 17 more
</pre></details>
|
process
|
beta automl tablespredicttest testtablespredict failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output com google api gax rpc unavailableexception io grpc statusruntimeexception unavailable bad gateway at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcexceptioncallable exceptiontransformingfuture onfailure grpcexceptioncallable java at com google api core apifutures onfailure apifutures java at com google common util concurrent futures callbacklistener run futures java at com google common util concurrent directexecutor execute directexecutor java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at io grpc stub clientcalls grpcfuture setexception clientcalls java at io grpc stub clientcalls unarystreamtofuture onclose clientcalls java at io grpc internal delayedclientcall delayedlistener run delayedclientcall java at io grpc internal delayedclientcall delayedlistener delayorexecute delayedclientcall java at io grpc internal delayedclientcall delayedlistener onclose delayedclientcall java at io grpc internal clientcallimpl closeobserver clientcallimpl java at io grpc internal clientcallimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runinternal clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runincontext clientcallimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java util 
concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask access scheduledthreadpoolexecutor java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask run scheduledthreadpoolexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java suppressed com google api gax rpc asynctaskexception asynchronous task failed at com google api gax rpc apiexceptions callandtranslateapiexception apiexceptions java at com google api gax rpc unarycallable call unarycallable java at com google cloud automl predictionserviceclient predict predictionserviceclient java at beta automl tablespredict predict tablespredict java at beta automl tablespredicttest testtablespredict tablespredicttest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at 
org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire execute java at org apache maven surefire executewithrerun java at org apache maven surefire executetestset java at org apache maven surefire invoke java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java caused by io grpc statusruntimeexception unavailable bad gateway at io grpc status asruntimeexception status java more
| 1
|
40,685
| 5,253,103,529
|
IssuesEvent
|
2017-02-02 08:21:00
|
cgstudiomap/cgstudiomap
|
https://api.github.com/repos/cgstudiomap/cgstudiomap
|
closed
|
En tant que Webdesigner, je dois corriger la navbar seulement pour les appareils mobiles et de petites résolution.
|
4 - Done bug design
|
- [x] changer la hauteur
- [x] Taille Texte placeholder recherche
- [x] Largeur Barre de recherche Responsive
- [x] Meilleure UX pour du TOUCH (identifier les zones d'interactions)
- [x] Revoir les espaces.
<!---
@huboard:{"order":5.548312355541528e-19,"milestone_order":3.662784219667312e-55,"custom_state":"ready"}
-->
|
1.0
|
En tant que Webdesigner, je dois corriger la navbar seulement pour les appareils mobiles et de petites résolution. - - [x] changer la hauteur
- [x] Taille Texte placeholder recherche
- [x] Largeur Barre de recherche Responsive
- [x] Meilleure UX pour du TOUCH (identifier les zones d'interactions)
- [x] Revoir les espaces.
<!---
@huboard:{"order":5.548312355541528e-19,"milestone_order":3.662784219667312e-55,"custom_state":"ready"}
-->
|
non_process
|
en tant que webdesigner je dois corriger la navbar seulement pour les appareils mobiles et de petites résolution changer la hauteur taille texte placeholder recherche largeur barre de recherche responsive meilleure ux pour du touch identifier les zones d interactions revoir les espaces huboard order milestone order custom state ready
| 0
|
2,124
| 4,964,425,463
|
IssuesEvent
|
2016-12-03 19:25:59
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
closed
|
[subtitles] [FR] Discours Terrorisme et Sécurité
|
Process: [6] Approved
|
# Video title
MÉLENCHON - UN AN APRÈS LE 13 NOVEMBRE 2015 : MA RÉPONSE FACE AU «TERRORISME»
# URL
https://www.youtube.com/watch?v=osk854gGWLA&t=443s
# Youtube subtitles language
Français
# Duration
52:10
# URL subtitles
https://www.youtube.com/timedtext_editor?tab=captions&v=osk854gGWLA&ui=hd&lang=fr&action_mde_edit_form=1&forceedit=timedtext&bl=vmp&ref=player
|
1.0
|
[subtitles] [FR] Discours Terrorisme et Sécurité - # Video title
MÉLENCHON - UN AN APRÈS LE 13 NOVEMBRE 2015 : MA RÉPONSE FACE AU «TERRORISME»
# URL
https://www.youtube.com/watch?v=osk854gGWLA&t=443s
# Youtube subtitles language
Français
# Duration
52:10
# URL subtitles
https://www.youtube.com/timedtext_editor?tab=captions&v=osk854gGWLA&ui=hd&lang=fr&action_mde_edit_form=1&forceedit=timedtext&bl=vmp&ref=player
|
process
|
discours terrorisme et sécurité video title mélenchon un an après le novembre ma réponse face au «terrorisme» url youtube subtitles language français duration url subtitles
| 1
|
101,554
| 12,693,949,566
|
IssuesEvent
|
2020-06-22 05:10:46
|
SmartResponse-Framework/SmartResponse.Framework
|
https://api.github.com/repos/SmartResponse-Framework/SmartResponse.Framework
|
closed
|
Implementing Enums in PowerShell
|
Design Topic enhancement
|
Lots of use cases for enums - for example, "List Type". In more robust languages, this would be represented by an enum - something we could still implement with .net. Something I will want to test and possibly implement...
Usage would look similar to:
```powershell
[Lr.ListType]::GeneralValue
```
|
1.0
|
Implementing Enums in PowerShell - Lots of use cases for enums - for example, "List Type". In more robust languages, this would be represented by an enum - something we could still implement with .net. Something I will want to test and possibly implement...
Usage would look similar to:
```powershell
[Lr.ListType]::GeneralValue
```
|
non_process
|
implementing enums in powershell lots of use cases for enums for example list type in more robust languages this would be represented by an enum something we could still implement with net something i will want to test and possibly implement usage would look similar to powershell generalvalue
| 0
|
2,411
| 5,198,034,153
|
IssuesEvent
|
2017-01-23 17:03:05
|
DynareTeam/dynare
|
https://api.github.com/repos/DynareTeam/dynare
|
opened
|
stop storing index for symbols in SymbolTable
|
enhancement preprocessor
|
We store the variable `size` when we can have access to this info by changing `symbol_table` into a vector and use the size method when we need to know how many symbols are present
|
1.0
|
stop storing index for symbols in SymbolTable - We store the variable `size` when we can have access to this info by changing `symbol_table` into a vector and use the size method when we need to know how many symbols are present
|
process
|
stop storing index for symbols in symboltable we store the variable size when we can have access to this info by changing symbol table into a vector and use the size method when we need to know how many symbols are present
| 1
|
189,751
| 14,521,181,217
|
IssuesEvent
|
2020-12-14 06:55:52
|
HumanBrainProject/interactive-viewer
|
https://api.github.com/repos/HumanBrainProject/interactive-viewer
|
closed
|
[Bug] After change connectivity source, if new source does not have data it breaks
|
bug needs test v2.3.0
|
Steps to reproduce:
1. Load Cytoarchitectonic maps - v1.18
2. Select region "Area STS2 (STS) - right hemisphere"
3. Expand the connectivity
4. change connectivity from "1000BRAINS study" to "Averaged_FC_JuBrain_184Regions"
Expected Behavior
Instead of a connectivity diagram, it should return a message that connectivity is not available for the current region
Actual Behavior
It keeps "1000BRAINS study" diagram on the screen
|
1.0
|
[Bug] After change connectivity source, if new source does not have data it breaks - Steps to reproduce:
1. Load Cytoarchitectonic maps - v1.18
2. Select region "Area STS2 (STS) - right hemisphere"
3. Expand the connectivity
4. change connectivity from "1000BRAINS study" to "Averaged_FC_JuBrain_184Regions"
Expected Behavior
Instead of a connectivity diagram, it should return a message that connectivity is not available for the current region
Actual Behavior
It keeps "1000BRAINS study" diagram on the screen
|
non_process
|
after change connectivity source if new source does not have data it breaks steps to reproduce load cytoarchitectonic maps select region area sts right hemisphere expand the connectivity change connectivity from study to averaged fc jubrain expected behavior instead of a connectivity diagram it should return a message that connectivity is not available for the current region actual bihevior it keeps study diagram on the screen
| 0
|
2,292
| 5,114,088,959
|
IssuesEvent
|
2017-01-06 17:18:54
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
opened
|
[subtitles] [eng] #RDLS13 : CONDITIONS DE TRAVAIL, AUCHAN, MULLIEZ, IMPÔT, SYRIE, TROPHÉE 100 000 ABONNÉS, ANNONCE FAQ
|
Language: English Process: [0] Awaiting subtitles
|
Video title
#RDLS13 : CONDITIONS DE TRAVAIL, AUCHAN, MULLIEZ, IMPÔT, SYRIE, TROPHÉE 100 000 ABONNÉS, ANNONCE FAQ
# URL
https://www.youtube.com/watch?v=hIrpyKHXry8
# Youtube subtitles language
Anglais
# Duration
26:42
# Subtitles URL
https://www.youtube.com/timedtext_editor?v=hIrpyKHXry8&tab=captions&ref=player&action_mde_edit_form=1&ui=hd&lang=en&bl=vmp
|
1.0
|
[subtitles] [eng] #RDLS13 : CONDITIONS DE TRAVAIL, AUCHAN, MULLIEZ, IMPÔT, SYRIE, TROPHÉE 100 000 ABONNÉS, ANNONCE FAQ - Video title
#RDLS13 : CONDITIONS DE TRAVAIL, AUCHAN, MULLIEZ, IMPÔT, SYRIE, TROPHÉE 100 000 ABONNÉS, ANNONCE FAQ
# URL
https://www.youtube.com/watch?v=hIrpyKHXry8
# Youtube subtitles language
Anglais
# Duration
26:42
# Subtitles URL
https://www.youtube.com/timedtext_editor?v=hIrpyKHXry8&tab=captions&ref=player&action_mde_edit_form=1&ui=hd&lang=en&bl=vmp
|
process
|
conditions de travail auchan mulliez impôt syrie trophée abonnés annonce faq video title conditions de travail auchan mulliez impôt syrie trophée abonnés annonce faq url youtube subtitles language anglais duration subtitles url
| 1
|
27,687
| 11,551,318,540
|
IssuesEvent
|
2020-02-19 01:05:05
|
doc-ai/react-native-animatable
|
https://api.github.com/repos/doc-ai/react-native-animatable
|
opened
|
WS-2019-0331 (Medium) detected in handlebars-4.1.2.tgz
|
security vulnerability
|
## WS-2019-0331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: /react-native-animatable/package.json</p>
<p>Path to vulnerable library: /tmp/git/react-native-animatable/node_modules/handlebars/package.json,/tmp/git/react-native-animatable/node_modules/handlebars/package.json,/tmp/git/react-native-animatable/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- jest-22.3.0.tgz (Root Library)
- jest-cli-22.4.4.tgz
- istanbul-api-1.3.7.tgz
- istanbul-reports-1.5.1.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Arbitrary Code Execution vulnerability found in handlebars before 4.5.2. Lookup helper fails to validate templates. Attack may submit templates that execute arbitrary JavaScript in the system.
<p>Publish Date: 2019-12-05
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.1.2","isTransitiveDependency":true,"dependencyTree":"jest:22.3.0;jest-cli:22.4.4;istanbul-api:1.3.7;istanbul-reports:1.5.1;handlebars:4.1.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handlebars - 4.5.2"}],"vulnerabilityIdentifier":"WS-2019-0331","vulnerabilityDetails":"Arbitrary Code Execution vulnerability found in handlebars before 4.5.2. Lookup helper fails to validate templates. Attack may submit templates that execute arbitrary JavaScript in the system.","vulnerabilityUrl":"https://github.com/wycats/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e","cvss2Severity":"medium","cvss2Score":"5.0","extraData":{}}</REMEDIATE> -->
|
True
|
WS-2019-0331 (Medium) detected in handlebars-4.1.2.tgz - ## WS-2019-0331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: /react-native-animatable/package.json</p>
<p>Path to vulnerable library: /tmp/git/react-native-animatable/node_modules/handlebars/package.json,/tmp/git/react-native-animatable/node_modules/handlebars/package.json,/tmp/git/react-native-animatable/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- jest-22.3.0.tgz (Root Library)
- jest-cli-22.4.4.tgz
- istanbul-api-1.3.7.tgz
- istanbul-reports-1.5.1.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Arbitrary Code Execution vulnerability found in handlebars before 4.5.2. Lookup helper fails to validate templates. Attackers may submit templates that execute arbitrary JavaScript in the system.
<p>Publish Date: 2019-12-05
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.1.2","isTransitiveDependency":true,"dependencyTree":"jest:22.3.0;jest-cli:22.4.4;istanbul-api:1.3.7;istanbul-reports:1.5.1;handlebars:4.1.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handlebars - 4.5.2"}],"vulnerabilityIdentifier":"WS-2019-0331","vulnerabilityDetails":"Arbitrary Code Execution vulnerability found in handlebars before 4.5.2. Lookup helper fails to validate templates. Attack may submit templates that execute arbitrary JavaScript in the system.","vulnerabilityUrl":"https://github.com/wycats/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e","cvss2Severity":"medium","cvss2Score":"5.0","extraData":{}}</REMEDIATE> -->
|
non_process
|
ws medium detected in handlebars tgz ws medium severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file react native animatable package json path to vulnerable library tmp git react native animatable node modules handlebars package json tmp git react native animatable node modules handlebars package json tmp git react native animatable node modules handlebars package json dependency hierarchy jest tgz root library jest cli tgz istanbul api tgz istanbul reports tgz x handlebars tgz vulnerable library vulnerability details arbitrary code execution vulnerability found in handlebars before lookup helper fails to validate templates attack may submit templates that execute arbitrary javascript in the system publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution handlebars isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails arbitrary code execution vulnerability found in handlebars before lookup helper fails to validate templates attack may submit templates that execute arbitrary javascript in the system vulnerabilityurl
| 0
|
2,647
| 5,426,844,732
|
IssuesEvent
|
2017-03-03 11:19:35
|
inasafe/inasafe
|
https://api.github.com/repos/inasafe/inasafe
|
closed
|
Add displacement rates and notes
|
Awaiting feedback Current sprint Postprocessing
|
We need to consult experts to find displacement rates for all hazards and then update notes to explain the rates that are assigned
|
1.0
|
Add displacement rates and notes - We need to consult experts to find displacement rates for all hazards and then update notes to explain the rates that are assigned
|
process
|
add displacement rates and notes we need to consult experts to find displacement rates for all hazards and then update notes to explain the rates that are assigned
| 1
|
22,401
| 31,142,290,301
|
IssuesEvent
|
2023-08-16 01:44:35
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Flaky test: Error: Different value of snapshot
|
OS: linux process: flaky test topic: flake ❄️ stage: flake stale
|
### Link to dashboard or CircleCI failure
https://app.circleci.com/pipelines/github/cypress-io/cypress/41413/workflows/cb7fdf54-b7ee-4876-90f0-d7d1f2301174/jobs/1714981
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/system-tests/test/visit_spec.js#L176
### Analysis
<img width="1002" alt="Screen Shot 2022-08-05 at 12 53 51 PM" src="https://user-images.githubusercontent.com/26726429/183150934-570be263-7559-4a92-8799-39f273b72a94.png">
### Cypress Version
10.4.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
|
1.0
|
Flaky test: Error: Different value of snapshot - ### Link to dashboard or CircleCI failure
https://app.circleci.com/pipelines/github/cypress-io/cypress/41413/workflows/cb7fdf54-b7ee-4876-90f0-d7d1f2301174/jobs/1714981
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/system-tests/test/visit_spec.js#L176
### Analysis
<img width="1002" alt="Screen Shot 2022-08-05 at 12 53 51 PM" src="https://user-images.githubusercontent.com/26726429/183150934-570be263-7559-4a92-8799-39f273b72a94.png">
### Cypress Version
10.4.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
|
process
|
flaky test error different value of snapshot link to dashboard or circleci failure link to failing test in github analysis img width alt screen shot at pm src cypress version other search for this issue number in the codebase to find the test s skipped until this issue is fixed
| 1
|
10,023
| 13,043,926,220
|
IssuesEvent
|
2020-07-29 03:04:22
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `Instr` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `Instr` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `Instr` from TiDB -
## Description
Port the scalar function `Instr` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function instr from tidb description port the scalar function instr from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
6,812
| 9,956,571,117
|
IssuesEvent
|
2019-07-05 14:16:05
|
threefoldtech/rivine
|
https://api.github.com/repos/threefoldtech/rivine
|
closed
|
authcoin: Wrong address is identified as unauthorized when the sender is not authorized
|
process_wontfix type_bug
|
```
goldchainc wallet send coins 017d8b80d279dc691f3d1af12a6419bde0738e4d3bb406371756a77b657841a0c06c206c98c8a9 300000
Could not send coins: HTTP 403 error: error after call to /wallet/coins: User Error forbidden: unauthorized address 017d8b80d279dc691f3d1af12a6419bde0738e4d3bb406371756a77b657841a0c06c206c98c8a9 cannot participate in a coin transfer
```
In the above command, the receiver was previously authorized, but the sender was not. Still the error claims the receiver is unauthorized. After authorizing the sender, the transaction succeeded
|
1.0
|
authcoin: Wrong address is identified as unauthorized when the sender is not authorized - ```
goldchainc wallet send coins 017d8b80d279dc691f3d1af12a6419bde0738e4d3bb406371756a77b657841a0c06c206c98c8a9 300000
Could not send coins: HTTP 403 error: error after call to /wallet/coins: User Error forbidden: unauthorized address 017d8b80d279dc691f3d1af12a6419bde0738e4d3bb406371756a77b657841a0c06c206c98c8a9 cannot participate in a coin transfer
```
In the above command, the receiver was previously authorized, but the sender was not. Still the error claims the receiver is unauthorized. After authorizing the sender, the transaction succeeded
|
process
|
authcoin wrong address is identified as unauthorized when the sender is not authorized goldchainc wallet send coins could not send coins http error error after call to wallet coins user error forbidden unauthorized address cannot participate in a coin transfer in the above command the receiver was previously authorized but the sender was not still the error claims the receiver is unauthorized after authorizing the sender the transaction succeeded
| 1
|
671
| 3,143,682,152
|
IssuesEvent
|
2015-09-14 08:48:08
|
arduino/Arduino
|
https://api.github.com/repos/arduino/Arduino
|
closed
|
Prototypes for methods with default arguments are not generated [imported]
|
Component: Preprocessor
|
This is [Issue 386](http://code.google.com/p/arduino/issues/detail?id=386) moved from a Google Code project.
Added by 2010-10-23T13:23:50.000Z by [andre...@gmail.com](http://code.google.com/u/101138032102755792048/).
Please review that bug for more context and additional comments, but update this bug.
Original labels: Type-Defect, Priority-Medium, Component-PreProcessor
### Original description
Steps to reproduce the issue:
STEP 1. Insert the following code into a blank sketch:
```C++
void setup() {
test(42);
test2(42);
}
void loop() {}
void test(int arg) {};
void test2(int arg = 0) {};
```
STEP 2: Click the verify button and observe the error message:
```
DefaultArgBugDemo.cpp: In function 'void setup()':
DefaultArgBugDemo:2: error: 'test2' was not declared in this scope
```
STEP 3:
Examine the generated *.cpp file and observe that a prototype has been generated for test1 but not test2:
```C++
#include "WProgram.h"
void setup();
void loop();
void test(int arg);
void setup() {
test(42);
test2(42);
}
void loop() {}
void test(int arg) {};
void test2(int arg = 0) {};
```
**What is the expected output? What do you see instead?**
You would expect a prototype to be generated for test2.
**What version of the Arduino software are you using? On what operating**
**system? Which Arduino board are you using?**
Tested in 0018 and 0021. Running on Win7 Home Premium (though the issue likely exists on all platforms)
**Please provide any additional information below.**
Of course the workaround is relatively simple (just define the prototype yourself) but it would be nice for all methods to be treated equally.
|
1.0
|
Prototypes for methods with default arguments are not generated [imported] - This is [Issue 386](http://code.google.com/p/arduino/issues/detail?id=386) moved from a Google Code project.
Added by 2010-10-23T13:23:50.000Z by [andre...@gmail.com](http://code.google.com/u/101138032102755792048/).
Please review that bug for more context and additional comments, but update this bug.
Original labels: Type-Defect, Priority-Medium, Component-PreProcessor
### Original description
Steps to reproduce the issue:
STEP 1. Insert the following code into a blank sketch:
```C++
void setup() {
test(42);
test2(42);
}
void loop() {}
void test(int arg) {};
void test2(int arg = 0) {};
```
STEP 2: Click the verify button and observe the error message:
```
DefaultArgBugDemo.cpp: In function 'void setup()':
DefaultArgBugDemo:2: error: 'test2' was not declared in this scope
```
STEP 3:
Examine the generated *.cpp file and observe that a prototype has been generated for test1 but not test2:
```C++
#include "WProgram.h"
void setup();
void loop();
void test(int arg);
void setup() {
test(42);
test2(42);
}
void loop() {}
void test(int arg) {};
void test2(int arg = 0) {};
```
**What is the expected output? What do you see instead?**
You would expect a prototype to be generated for test2.
**What version of the Arduino software are you using? On what operating**
**system? Which Arduino board are you using?**
Tested in 0018 and 0021. Running on Win7 Home Premium (though the issue likely exists on all platforms)
**Please provide any additional information below.**
Of course the workaround is relatively simple (just define the prototype yourself) but it would be nice for all methods to be treated equally.
|
process
|
prototypes for methods with default arguments are not generated this is moved from a google code project added by by please review that bug for more context and additional comments but update this bug original labels type defect priority medium component preprocessor original description steps to reproduce the issue step insert the following code into a blank sketch c void setup test void loop void test int arg void int arg step click the verify button and observe the error message defaultargbugdemo cpp in function void setup defaultargbugdemo error was not declared in this scope step examine the generated cpp file and observe that a prototype has been generated for but not c include wprogram h void setup void loop void test int arg void setup test void loop void test int arg void int arg what is the expected output what do you see instead you would expect a prototype to be generated for what version of the arduino software are you using on what operating system which arduino board are you using tested in and running on home premium though the issue likely exists on all platforms please provide any additional information below of course the workaround is relatively simple just define the prototype yourself but it would be nice for all methods to be treated equally
| 1
|
7,949
| 11,137,529,759
|
IssuesEvent
|
2019-12-20 19:36:43
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
DoS application: Track submit and update dates
|
Apply Process Artifact Needed Data Requirements Ready State Dept.
|
Who: System Admin
What: Log application data
Why: In order to supply data when requested
Acceptance Criteria:
Issue: students are submitting their applications, then making updates but not resubmitting. Tracking data will help us support applicants that have questions.
- Log the date when an application is first submitted
- Log the date when an application is updated
|
1.0
|
DoS application: Track submit and update dates - Who: System Admin
What: Log application data
Why: In order to supply data when requested
Acceptance Criteria:
Issue: students are submitting their applications, then making updates but not resubmitting. Tracking data will help us support applicants that have questions.
- Log the date when an application is first submitted
- Log the date when an application is updated
|
process
|
dos application track submit and update dates who system admin what log application data why in order to supply data when requested acceptance criteria issue students are submitted their applications then making updates but not submitting again tracking data will help us support applicants that have questions log the date when an application is first submitted log the date when an application is updated
| 1
|
49,747
| 20,905,232,178
|
IssuesEvent
|
2022-03-24 00:54:37
|
vmware/singleton
|
https://api.github.com/repos/vmware/singleton
|
opened
|
[BUG] [Lite]sourceFormat value can't accept lower-case in string-based GET/POST APIs.
|
kind/bug area/java-service priority/medium
|
**Describe the bug**
commit: 5a1f8ee27a655a7e9a60c5c7140e531565a4d970
sourceFormat value can't accept lower-case in string-based GET/POST APIs.
The issue is reproducible in all v1/v2 string-based GET/POST APIs.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'GET
[/i18n/api/v2/translation/products/{productName}/versions/{version}/locales/{locale}/components/{component}/keys/{key}]'
2. Request a key with "md" sourceFormat as below:
http://localhost:8091/i18n/api/v2/translation/products/VMwareVIP2018/versions/1.1.0/locales/en/components/markdown/keys/md.key-1?collectSource=false&pseudo=true&sourceFormat=md
3. See error
```
{
"response": {
"code": 400,
"message": "Incorrect sourceformat(only allows empty, MD, HTML, SVG)",
"serverTime": ""
},
"signature": "",
"data": null
}
```
**Expected behavior**
sourceFormat value should be case insensitive in string-based GET/POST APIs.
Can accept "String", "md", "Md", "mD" such value.
|
1.0
|
[BUG] [Lite]sourceFormat value can't accept lower-case in string-based GET/POST APIs. - **Describe the bug**
commit: 5a1f8ee27a655a7e9a60c5c7140e531565a4d970
sourceFormat value can't accept lower-case in string-based GET/POST APIs.
The issue is reproducible in all v1/v2 string-based GET/POST APIs.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'GET
[/i18n/api/v2/translation/products/{productName}/versions/{version}/locales/{locale}/components/{component}/keys/{key}]'
2. Request a key with "md" sourceFormat as below:
http://localhost:8091/i18n/api/v2/translation/products/VMwareVIP2018/versions/1.1.0/locales/en/components/markdown/keys/md.key-1?collectSource=false&pseudo=true&sourceFormat=md
3. See error
```
{
"response": {
"code": 400,
"message": "Incorrect sourceformat(only allows empty, MD, HTML, SVG)",
"serverTime": ""
},
"signature": "",
"data": null
}
```
**Expected behavior**
sourceFormat value should be case insensitive in string-based GET/POST APIs.
Can accept "String", "md", "Md", "mD" such value.
|
non_process
|
sourceformat value can t accept lower case in string based get post apis describe the bug commit sourceformat value can t accept lower case in string based get post apis the issue is reproducible in all string based get post apis to reproduce steps to reproduce the behavior go to get request a key with md sourceformat as below see error response code message incorrect sourceformat only allows empty md html svg servertime signature data null expected behavior sourceformat value should be case insensitive in string based get post apis can accept string md md md such value
| 0
|
116,614
| 17,380,519,420
|
IssuesEvent
|
2021-07-31 16:03:20
|
AlexRogalskiy/charts
|
https://api.github.com/repos/AlexRogalskiy/charts
|
closed
|
CVE-2020-11022 (Medium) detected in jquery-1.9.1.js, jquery-1.8.1.min.js - autoclosed
|
security vulnerability
|
## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.9.1.js</b>, <b>jquery-1.8.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: charts/node_modules/tinygradient/bower_components/tinycolor/index.html</p>
<p>Path to vulnerable library: charts/node_modules/tinygradient/bower_components/tinycolor/demo/jquery-1.9.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: charts/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: charts/node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/charts/commit/8eeb0a90c1dd538ae1c6136eb70230b3c3695d4c">8eeb0a90c1dd538ae1c6136eb70230b3c3695d4c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-11022 (Medium) detected in jquery-1.9.1.js, jquery-1.8.1.min.js - autoclosed - ## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.9.1.js</b>, <b>jquery-1.8.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: charts/node_modules/tinygradient/bower_components/tinycolor/index.html</p>
<p>Path to vulnerable library: charts/node_modules/tinygradient/bower_components/tinycolor/demo/jquery-1.9.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: charts/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: charts/node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/charts/commit/8eeb0a90c1dd538ae1c6136eb70230b3c3695d4c">8eeb0a90c1dd538ae1c6136eb70230b3c3695d4c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jquery js jquery min js autoclosed cve medium severity vulnerability vulnerable libraries jquery js jquery min js jquery js javascript library for dom operations library home page a href path to dependency file charts node modules tinygradient bower components tinycolor index html path to vulnerable library charts node modules tinygradient bower components tinycolor demo jquery js dependency hierarchy x jquery js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file charts node modules redeyed examples browser index html path to vulnerable library charts node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
| 0
|
179,095
| 21,514,625,266
|
IssuesEvent
|
2022-04-28 08:45:27
|
ShaikUsaf/packages_apps_Bluetooth_AOSP10_r33_CVE-2021-0329
|
https://api.github.com/repos/ShaikUsaf/packages_apps_Bluetooth_AOSP10_r33_CVE-2021-0329
|
opened
|
CVE-2020-0177 (Medium) detected in Bluetoothandroid-10.0.0_r30
|
security vulnerability
|
## CVE-2020-0177 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Bluetoothandroid-10.0.0_r30</b></p></summary>
<p>
<p>Library home page: <a href=https://android.googlesource.com/platform/packages/apps/Bluetooth>https://android.googlesource.com/platform/packages/apps/Bluetooth</a></p>
<p>Found in HEAD commit: <a href="https://github.com/ShaikUsaf/packages_apps_Bluetooth_AOSP10_r33_CVE-2021-0329/commit/b79222eebaa5faccf8838632a7540189ba870400">b79222eebaa5faccf8838632a7540189ba870400</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/src/com/android/bluetooth/pan/PanService.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In connect() of PanService.java, there is a possible permissions bypass. This could lead to local escalation of privilege to change network connection settings with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-10. Android ID: A-126206353
<p>Publish Date: 2020-06-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-0177>CVE-2020-0177</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-0177">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-0177</a></p>
<p>Release Date: 2020-06-11</p>
<p>Fix Resolution: android-10.0.0_r37</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-0177 (Medium) detected in Bluetoothandroid-10.0.0_r30 - ## CVE-2020-0177 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Bluetoothandroid-10.0.0_r30</b></p></summary>
<p>
<p>Library home page: <a href=https://android.googlesource.com/platform/packages/apps/Bluetooth>https://android.googlesource.com/platform/packages/apps/Bluetooth</a></p>
<p>Found in HEAD commit: <a href="https://github.com/ShaikUsaf/packages_apps_Bluetooth_AOSP10_r33_CVE-2021-0329/commit/b79222eebaa5faccf8838632a7540189ba870400">b79222eebaa5faccf8838632a7540189ba870400</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/src/com/android/bluetooth/pan/PanService.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In connect() of PanService.java, there is a possible permissions bypass. This could lead to local escalation of privilege to change network connection settings with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android-10Android ID: A-126206353
<p>Publish Date: 2020-06-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-0177>CVE-2020-0177</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-0177">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-0177</a></p>
<p>Release Date: 2020-06-11</p>
<p>Fix Resolution: android-10.0.0_r37</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in bluetoothandroid cve medium severity vulnerability vulnerable library bluetoothandroid library home page a href found in head commit a href found in base branch master vulnerable source files src com android bluetooth pan panservice java vulnerability details in connect of panservice java there is a possible permissions bypass this could lead to local escalation of privilege to change network connection settings with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution android step up your open source security game with whitesource
| 0
|
6,704
| 9,814,894,719
|
IssuesEvent
|
2019-06-13 11:18:39
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
SAGA Supervised Classification: add option "Get class statistics from"
|
Feature Request Processing
|
Author Name: **Luca Congedo** (Luca Congedo)
Original Redmine Issue: [8535](https://issues.qgis.org/issues/8535)
Redmine category:processing/saga
Assignee: Victor Olaya
---
Hello everybody,
it would be good (at least for the Semi-Automatic Classification Plugin) to allow for the selection of the statistics source (STATS_SRC) for the supervised classification and the selection of the table containing class statistics (STATS).
Maybe something like this:
```
ParameterSelection|STATS_SRC|Get class statistics from|[0] Training Areas;[1] Class Table
```
This way the classification process would be rapider because class statistics can be calculated only once, saved in a .txt file, and used for the following classifications.
Also, it allows for the classification of image subsets that are outside the extension of training areas.
Thank you.
Cheers!
|
1.0
|
SAGA Supervised Classification: add option "Get class statistics from" - Author Name: **Luca Congedo** (Luca Congedo)
Original Redmine Issue: [8535](https://issues.qgis.org/issues/8535)
Redmine category:processing/saga
Assignee: Victor Olaya
---
Hello everybody,
it would be good (at least for the Semi-Automatic Classification Plugin) to allow for the selection of the statistics source (STATS_SRC) for the supervised classification and the selection of the table containing class statistics (STATS).
Maybe something like this:
```
ParameterSelection|STATS_SRC|Get class statistics from|[0] Training Areas;[1] Class Table
```
This way the classification process would be rapider because class statistics can be calculated only once, saved in a .txt file, and used for the following classifications.
Also, it allows for the classification of image subsets that are outside the extension of training areas.
Thank you.
Cheers!
|
process
|
saga supervised classification add option get class statistics from author name luca congedo luca congedo original redmine issue redmine category processing saga assignee victor olaya hello everybody it would be good at least for the semi automatic classification plugin to allow for the selection of the statistics source stats src for the supervised classification and the selection of the table containing class statistics stats maybe something like this parameterselection stats src get class statistics from training areas class table this way the classification process would be rapider because class statistics can be calculated only once saved in a txt file and used for the following classifications also it allows for the classification of image subsets that are outside the extension of training areas thank you cheers
| 1
|
22,113
| 30,643,796,634
|
IssuesEvent
|
2023-07-25 01:43:37
|
bazelbuild/stardoc
|
https://api.github.com/repos/bazelbuild/stardoc
|
closed
|
Migrate stardoc source code from bazelbuild/bazel
|
type: process P2
|
I was trying to see what it would take to implement #27 but cannot find the source code. I'm assuming it's buried somewhere in [bazelbuild/bazel](https://github.com/bazelbuild/bazel). If this is the case, is there another issue tracking splitting that source code out into this repo? Also, if this is true, would it also be possible to update the [CONTRIBUTING.md](https://github.com/bazelbuild/stardoc/blob/master/CONTRIBUTING.md) doc to call this out?
|
1.0
|
Migrate stardoc source code from bazelbuild/bazel - I was trying to see what it would take to implement #27 but cannot find the source code. I'm assuming it's buried somewhere in [bazelbuild/bazel](https://github.com/bazelbuild/bazel). If this is the case, is there another issue tracking splitting that source code out into this repo? Also, if this is true, would it also be possible to update the [CONTRIBUTING.md](https://github.com/bazelbuild/stardoc/blob/master/CONTRIBUTING.md) doc to call this out?
|
process
|
migrate stardoc source code from bazelbuild bazel i was trying to see what it would take to implement but cannot find the source code i m assuming it s buried somewhere in if this is the case is there another issue tracking splitting that source code out into this repo also if this is true would it also be possible to update the doc to call this out
| 1
|
5,348
| 2,574,581,319
|
IssuesEvent
|
2015-02-11 17:44:39
|
Nsomnia/ColdWarSubSim
|
https://api.github.com/repos/Nsomnia/ColdWarSubSim
|
opened
|
Model the FWD Compartment (Torpedo room)
|
3D Modelling Related help wanted Medium Priority
|
Model the forward compartment, focusing on anything torpedo related.
|
1.0
|
Model the FWD Compartment (Torpedo room) - Model the forward compartment, focusing on anything torpedo related.
|
non_process
|
model the fwd compartment torpedo room model the forward compartment focusing on anything torpedo related
| 0
|
13,295
| 15,769,253,877
|
IssuesEvent
|
2021-03-31 18:07:43
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Add PR review/merging process to contributing.md
|
process: contributing stage: ready for work
|
Document a standardized process for reviewing and merging PRs
|
1.0
|
Add PR review/merging process to contributing.md - Document a standardized process for reviewing and merging PRs
|
process
|
add pr review merging process to contributing md document a standardized process for reviewing and merging prs
| 1
|
332,266
| 29,193,539,016
|
IssuesEvent
|
2023-05-19 23:19:18
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix manipulation.test_top_k
|
Sub Task Ivy API Experimental Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5026791776/jobs/9015486160" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5026791776/jobs/9015486160" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5026791776/jobs/9015486160" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5026791776/jobs/9015486160" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|paddle|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>Not found</summary>
Not found
</details>
|
1.0
|
Fix manipulation.test_top_k - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5026791776/jobs/9015486160" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5026791776/jobs/9015486160" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5026791776/jobs/9015486160" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5026791776/jobs/9015486160" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|paddle|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>Not found</summary>
Not found
</details>
|
non_process
|
fix manipulation test top k tensorflow img src torch img src numpy img src jax img src paddle img src not found not found
| 0
|
21,936
| 30,446,798,266
|
IssuesEvent
|
2023-07-15 19:28:28
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
pyutils 0.0.1b6 has 2 GuardDog issues
|
guarddog typosquatting silent-process-execution
|
https://pypi.org/project/pyutils
https://inspector.pypi.io/project/pyutils
```{
"dependency": "pyutils",
"version": "0.0.1b6",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: python-utils, pytils",
"silent-process-execution": [
{
"location": "pyutils-0.0.1b6/src/pyutils/exec_utils.py:204",
"code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpv3mij_db/pyutils"
}
}```
|
1.0
|
pyutils 0.0.1b6 has 2 GuardDog issues - https://pypi.org/project/pyutils
https://inspector.pypi.io/project/pyutils
```{
"dependency": "pyutils",
"version": "0.0.1b6",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: python-utils, pytils",
"silent-process-execution": [
{
"location": "pyutils-0.0.1b6/src/pyutils/exec_utils.py:204",
"code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpv3mij_db/pyutils"
}
}```
|
process
|
pyutils has guarddog issues dependency pyutils version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt python utils pytils silent process execution location pyutils src pyutils exec utils py code subproc subprocess popen n args n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp db pyutils
| 1
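The "silent-process-execution" finding in the GuardDog row above refers to spawning a child process with all three standard streams redirected to the null device. As a hedged illustration (this is a minimal sketch of the flagged pattern, not pyutils' actual `exec_utils` code, and the helper name `run_silently` is hypothetical):

```python
import subprocess
import sys

def run_silently(args):
    """Run an external command, discarding stdin/stdout/stderr.

    This mirrors the pattern GuardDog reports as silent-process-execution:
    the child produces no visible output. The pattern is benign in many
    utility libraries but is also a common trait of malicious packages,
    which is why the scanner flags it for review.
    """
    proc = subprocess.Popen(
        args,
        stdin=subprocess.DEVNULL,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return proc.wait()  # exit status of the child process

# The child prints nothing to the console even though it calls print().
rc = run_silently([sys.executable, "-c", "print('hidden')"])
```

Note that the finding by itself is only a heuristic: whether the redirection is malicious depends on what binary is executed and why.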
|
15,117
| 18,851,605,253
|
IssuesEvent
|
2021-11-11 21:42:13
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
closed
|
[Contributor docs] Document development and testing processes for google-cloud-firestore
|
type: process api: firestore
|
Enumerate topics and write an initial document for contributors to google-cloud-firestore. At a high level, this should include at least:
* How to run local unit tests
* How to set up and run integration/acceptance/samples tests, including remote project setup, fixtures, and connections to other services
* How to write tests for new features
* Other things we check during CI (e.g. rubocop, yard tests, etc.)
* What is expected when opening a pull request (e.g. conventional commits, CLA)
* Anything else you can think of
|
1.0
|
[Contributor docs] Document development and testing processes for google-cloud-firestore - Enumerate topics and write an initial document for contributors to google-cloud-firestore. At a high level, this should include at least:
* How to run local unit tests
* How to set up and run integration/acceptance/samples tests, including remote project setup, fixtures, and connections to other services
* How to write tests for new features
* Other things we check during CI (e.g. rubocop, yard tests, etc.)
* What is expected when opening a pull request (e.g. conventional commits, CLA)
* Anything else you can think of
|
process
|
document development and testing processes for google cloud firestore enumerate topics and write an initial document for contributors to google cloud firestore at a high level this should include at least how to run local unit tests how to set up and run integration acceptance samples tests including remote project setup fixtures and connections to other services how to write tests for new features other things we check during ci e g rubocop yard tests etc what is expected when opening a pull request e g conventional commits cla anything else you can think of
| 1
|
13,249
| 15,721,714,759
|
IssuesEvent
|
2021-03-29 03:52:56
|
q191201771/lal
|
https://api.github.com/repos/q191201771/lal
|
closed
|
Crash during RTSP-to-RTMP load test
|
#Bug *In process *Waiting reply
|
### Test environment
- OS: ubuntu 20.04
- Memory: 16G
- CPU: 4.01 GHz quad-core Intel Core i7
- Network: 127.0.0.1 local loopback
- Test client: srs-bench
- Version under test: lal_v0.19.1_linux, downloaded from the releases page. Default parameters.
### Steps to reproduce
- RTSP publishing over either UDP or TCP reproduces it
`ffmpeg -re -stream_loop -1 -i "bbb_sunflower_1080p_30fps_normal.mp4" -an -vcodec copy -rtsp_transport tcp -f rtsp rtsp://localhost:5544/live/livestream`
- Load-test 500 streams with srs-bench and wait until all of them are loaded. Then press Ctrl+C in the srs-bench console to stop the test, and the crash occurs. It does not occur with 100 streams; with 1000 streams it sometimes crashes while they are still loading. The crash is always the same exception. I don't know the code well, but it looks like a concurrent map access problem.
`sb_rtmp_load -c 1000 -r rtmp://127.0.0.1:1935/live/livestream`
Error log:
`fatal error: concurrent map iteration and map write 2021/03/18 12:12:13.671875 DEBUG [GROUP1] [RTMPPUBSUB75] del rtmp SubSession from group. - group.go:668`
[lal.log](https://github.com/q191201771/lal/files/6161267/lal.log)
|
1.0
|
Crash during RTSP-to-RTMP load test - ### Test environment
- OS: ubuntu 20.04
- Memory: 16G
- CPU: 4.01 GHz quad-core Intel Core i7
- Network: 127.0.0.1 local loopback
- Test client: srs-bench
- Version under test: lal_v0.19.1_linux, downloaded from the releases page. Default parameters.
### Steps to reproduce
- RTSP publishing over either UDP or TCP reproduces it
`ffmpeg -re -stream_loop -1 -i "bbb_sunflower_1080p_30fps_normal.mp4" -an -vcodec copy -rtsp_transport tcp -f rtsp rtsp://localhost:5544/live/livestream`
- Load-test 500 streams with srs-bench and wait until all of them are loaded. Then press Ctrl+C in the srs-bench console to stop the test, and the crash occurs. It does not occur with 100 streams; with 1000 streams it sometimes crashes while they are still loading. The crash is always the same exception. I don't know the code well, but it looks like a concurrent map access problem.
`sb_rtmp_load -c 1000 -r rtmp://127.0.0.1:1935/live/livestream`
Error log:
`fatal error: concurrent map iteration and map write 2021/03/18 12:12:13.671875 DEBUG [GROUP1] [RTMPPUBSUB75] del rtmp SubSession from group. - group.go:668`
[lal.log](https://github.com/q191201771/lal/files/6161267/lal.log)
|
process
|
crash during rtsp to rtmp load test test environment os ubuntu memory cpu ghz quad core intel core network test client srs bench version under test lal linux downloaded from releases default parameters steps to reproduce rtsp publishing over udp or tcp ffmpeg re stream loop i bbb sunflower normal an vcodec copy rtsp transport tcp f rtsp rtsp localhost live livestream load test streams with srs bench and wait until all are loaded then ctrl c in the srs bench console to stop the test and the crash occurs it does not occur with streams sometimes crashes while loading always the same exception looks like a concurrent map problem sb rtmp load c r rtmp live livestream error log fatal error concurrent map iteration and map write debug del rtmp subsession from group group go
| 1
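The `fatal error: concurrent map iteration and map write` in the lal row above is the Go runtime aborting on an unsynchronized map. As a hedged analogue (an illustrative sketch, not lal's actual Go code), CPython detects the same class of mistake deterministically: mutating a dict while iterating over it raises `RuntimeError`.

```python
# Analogue of Go's "concurrent map iteration and map write": CPython
# refuses to continue iterating a dict whose size changed mid-iteration,
# much as the Go runtime kills the process on an unsynchronized map.
def iterate_and_write(sessions):
    for key in sessions:
        # Adding a key while iterating changes the dict's size,
        # which CPython flags on the next iteration step.
        sessions[key + "_copy"] = sessions[key]

try:
    iterate_and_write({"RTMPPUBSUB75": 1})
except RuntimeError as exc:
    message = str(exc)  # reports that the dictionary changed size during iteration
```

In Go the usual fix is to guard the map with a `sync.RWMutex` or to collect keys first and mutate after the loop; the same collect-then-mutate approach resolves the Python version.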
|