| Unnamed: 0<br>int64<br>0<br>832k | id<br>float64<br>2.49B<br>32.1B | type<br>stringclasses<br>1 value | created_at<br>stringlengths<br>19<br>19 | repo<br>stringlengths<br>5<br>112 | repo_url<br>stringlengths<br>34<br>141 | action<br>stringclasses<br>3 values | title<br>stringlengths<br>1<br>757 | labels<br>stringlengths<br>4<br>664 | body<br>stringlengths<br>3<br>261k | index<br>stringclasses<br>10 values | text_combine<br>stringlengths<br>96<br>261k | label<br>stringclasses<br>2 values | text<br>stringlengths<br>96<br>232k | binary_label<br>int64<br>0<br>1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
94,767
| 3,932,136,687
|
IssuesEvent
|
2016-04-25 14:50:55
|
musiqpad/mqp-server
|
https://api.github.com/repos/musiqpad/mqp-server
|
closed
|
Ask for notification permission after toggled, not on event
|
Approved Bug High priority
|
The current way that mqp works is this: after you toggle one of the desktop notifications on musiqpad, it then asks you for permission to send a notification after the event in question has been fired. An alternative to this would be, on toggle of a setting, first checking if that permission is already granted, if not, prompt the user for permission.
Asking only after the event has been fired, which is what currently happens, doesn't seem like the best approach.
|
1.0
|
Ask for notification permission after toggled, not on event - The current way that mqp works is this: after you toggle one of the desktop notifications on musiqpad, it then asks you for permission to send a notification after the event in question has been fired. An alternative to this would be, on toggle of a setting, first checking if that permission is already granted, if not, prompt the user for permission.
Asking only after the event has been fired, which is what currently happens, doesn't seem like the best approach.
|
non_defect
|
ask for notification permission after toggled not on event the current way that mqp works is this after you toggle one of the desktop notifications on musiqpad it then asks you for permission to send a notification after the event in question has been fired an alternative to this would be on toggle of a setting first checking if that permission is already granted if not prompt the user for permission it doesn t seem the best to ask after the event has been fired which is what seems to be the case
| 0
|
211,984
| 23,856,907,272
|
IssuesEvent
|
2022-09-07 01:15:31
|
artkamote/wdio-test
|
https://api.github.com/repos/artkamote/wdio-test
|
opened
|
WS-2021-0638 (High) detected in mocha-9.1.3.tgz
|
security vulnerability
|
## WS-2021-0638 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mocha-9.1.3.tgz</b></summary>
<p>simple, flexible, fun test framework</p>
<p>Library home page: <a href="https://registry.npmjs.org/mocha/-/mocha-9.1.3.tgz">https://registry.npmjs.org/mocha/-/mocha-9.1.3.tgz</a></p>
<p>Path to dependency file: /qa-tech-challenge/package.json</p>
<p>Path to vulnerable library: /qa-tech-challenge/node_modules/mocha/package.json</p>
<p>
Dependency Hierarchy:
- mocha-framework-7.16.1.tgz (Root Library)
- :x: **mocha-9.1.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
There is a Regular Expression Denial of Service (ReDoS) vulnerability in mocha.
It allows an attacker to cause a denial of service when stripping a crafted invalid function definition from strings.
<p>Publish Date: 2021-09-18
<p>URL: <a href=https://github.com/mochajs/mocha/commit/61b4b9209c2c64b32c8d48b1761c3b9384d411ea>WS-2021-0638</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/1d8a3d95-d199-4129-a6ad-8eafe5e77b9e/">https://huntr.dev/bounties/1d8a3d95-d199-4129-a6ad-8eafe5e77b9e/</a></p>
<p>Release Date: 2021-09-18</p>
<p>Fix Resolution: https://github.com/mochajs/mocha/commit/61b4b9209c2c64b32c8d48b1761c3b9384d411ea</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2021-0638 (High) detected in mocha-9.1.3.tgz - ## WS-2021-0638 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mocha-9.1.3.tgz</b></summary>
<p>simple, flexible, fun test framework</p>
<p>Library home page: <a href="https://registry.npmjs.org/mocha/-/mocha-9.1.3.tgz">https://registry.npmjs.org/mocha/-/mocha-9.1.3.tgz</a></p>
<p>Path to dependency file: /qa-tech-challenge/package.json</p>
<p>Path to vulnerable library: /qa-tech-challenge/node_modules/mocha/package.json</p>
<p>
Dependency Hierarchy:
- mocha-framework-7.16.1.tgz (Root Library)
- :x: **mocha-9.1.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
There is a Regular Expression Denial of Service (ReDoS) vulnerability in mocha.
It allows an attacker to cause a denial of service when stripping a crafted invalid function definition from strings.
<p>Publish Date: 2021-09-18
<p>URL: <a href=https://github.com/mochajs/mocha/commit/61b4b9209c2c64b32c8d48b1761c3b9384d411ea>WS-2021-0638</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/1d8a3d95-d199-4129-a6ad-8eafe5e77b9e/">https://huntr.dev/bounties/1d8a3d95-d199-4129-a6ad-8eafe5e77b9e/</a></p>
<p>Release Date: 2021-09-18</p>
<p>Fix Resolution: https://github.com/mochajs/mocha/commit/61b4b9209c2c64b32c8d48b1761c3b9384d411ea</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
ws high detected in mocha tgz ws high severity vulnerability vulnerable library mocha tgz simple flexible fun test framework library home page a href path to dependency file qa tech challenge package json path to vulnerable library qa tech challenge node modules mocha package json dependency hierarchy mocha framework tgz root library x mocha tgz vulnerable library found in base branch main vulnerability details there is regular expression denial of service redos vulnerability in mocha it allows cause a denial of service when stripping crafted invalid function definition from strs publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
735,437
| 25,397,573,433
|
IssuesEvent
|
2022-11-22 09:46:09
|
kubernetes/website
|
https://api.github.com/repos/kubernetes/website
|
closed
|
Add examples for creating an object to new-style API reference
|
kind/feature priority/backlog lifecycle/rotten language/en triage/accepted
|
A few examples to use API methods will be helpful. Documentation is missing the examples.
Example page: https://k8s.io/docs/reference/kubernetes-api/cluster-resources/node-v1/
|
1.0
|
Add examples for creating an object to new-style API reference - A few examples to use API methods will be helpful. Documentation is missing the examples.
Example page: https://k8s.io/docs/reference/kubernetes-api/cluster-resources/node-v1/
|
non_defect
|
add examples for creating an object to new style api reference a few examples to use api methods will be helpful documentation is missing the examples example page
| 0
|
501,905
| 14,536,325,067
|
IssuesEvent
|
2020-12-15 07:25:33
|
magento/magento2
|
https://api.github.com/repos/magento/magento2
|
closed
|
200 Response Code When Exception is Thrown During Bootstrap
|
Component: DB Fixed in 2.4.x Issue: Confirmed Priority: P3 Progress: PR in progress Reproduced on 2.4.x Severity: S3
|
<!---
Please review our guidelines before adding a new issue: https://github.com/magento/magento2/wiki/Issue-reporting-guidelines
Fields marked with (*) are required. Please don't remove the template.
-->
### Preconditions (*)
<!---
Provide the exact Magento version (example: 2.4.0) and any important information on the environment where bug is reproducible.
-->
1. Version 2.4.1
2. PHP 7.4
### Steps to reproduce (*)
<!---
Important: Provide a set of clear steps to reproduce this bug. We can not provide support without clear instructions on how to reproduce.
-->
1. Change your DB host to something wrong.
### Expected result (*)
<!--- Tell us what do you expect to happen. -->
1. An error message is displayed and a 500 response code is returned.
### Actual result (*)
<!--- Tell us what happened instead. Include error messages and issues. -->
1. An error message is displayed and the http response code is 200 OK.
Very annoying for probes who think the pod is OK & Running...
### Additional info from Engcom
full env.php (Removed crypt key) :
```
<?php
return [
'backend' => [
'frontName' => 'admin',
],
'queue' => [
'consumers_wait_for_messages' => 1,
],
'crypt' => [
'key' => '****',
],
'db' => [
'table_prefix' => '',
'connection' => [
'default' => [
'host' => 'nonexisting',
'dbname' => 'whatever',
'username' => 'root',
'password' => 'root',
'model' => 'mysql4',
'engine' => 'innodb',
'initStatements' => 'SET NAMES utf8;',
'active' => '1',
'driver_options' => [
1014 => false,
],
],
],
],
'resource' => [
'default_setup' => [
'connection' => 'default',
],
],
'x-frame-options' => 'SAMEORIGIN',
'MAGE_MODE' => 'production',
'session' => [
'save' => 'files',
],
'cache' => [
'frontend' => [
'default' => [
'id_prefix' => '707_',
],
'page_cache' => [
'id_prefix' => '707_',
],
],
'allow_parallel_generation' => false,
],
'lock' => [
'provider' => 'db',
'config' => [
'prefix' => '',
],
],
'cache_types' => [
'config' => 0,
'layout' => 0,
'block_html' => 0,
'collections' => 0,
'reflection' => 0,
'db_ddl' => 0,
'compiled_config' => 1,
'eav' => 0,
'customer_notification' => 0,
'config_integration' => 0,
'config_integration_api' => 0,
'full_page' => 0,
'config_webservice' => 0,
'translate' => 0,
'vertex' => 0,
],
'downloadable_domains' => [
],
'install' => [
'date' => 'Fri, 09 Oct 2020 12:13:31 +0000',
],
];
```
I don't have any extra modules enabled.

|
1.0
|
200 Response Code When Exception is Thrown During Bootstrap - <!---
Please review our guidelines before adding a new issue: https://github.com/magento/magento2/wiki/Issue-reporting-guidelines
Fields marked with (*) are required. Please don't remove the template.
-->
### Preconditions (*)
<!---
Provide the exact Magento version (example: 2.4.0) and any important information on the environment where bug is reproducible.
-->
1. Version 2.4.1
2. PHP 7.4
### Steps to reproduce (*)
<!---
Important: Provide a set of clear steps to reproduce this bug. We can not provide support without clear instructions on how to reproduce.
-->
1. Change your DB host to something wrong.
### Expected result (*)
<!--- Tell us what do you expect to happen. -->
1. An error message is displayed and a 500 response code is returned.
### Actual result (*)
<!--- Tell us what happened instead. Include error messages and issues. -->
1. An error message is displayed and the http response code is 200 OK.
Very annoying for probes who think the pod is OK & Running...
### Additional info from Engcom
full env.php (Removed crypt key) :
```
<?php
return [
'backend' => [
'frontName' => 'admin',
],
'queue' => [
'consumers_wait_for_messages' => 1,
],
'crypt' => [
'key' => '****',
],
'db' => [
'table_prefix' => '',
'connection' => [
'default' => [
'host' => 'nonexisting',
'dbname' => 'whatever',
'username' => 'root',
'password' => 'root',
'model' => 'mysql4',
'engine' => 'innodb',
'initStatements' => 'SET NAMES utf8;',
'active' => '1',
'driver_options' => [
1014 => false,
],
],
],
],
'resource' => [
'default_setup' => [
'connection' => 'default',
],
],
'x-frame-options' => 'SAMEORIGIN',
'MAGE_MODE' => 'production',
'session' => [
'save' => 'files',
],
'cache' => [
'frontend' => [
'default' => [
'id_prefix' => '707_',
],
'page_cache' => [
'id_prefix' => '707_',
],
],
'allow_parallel_generation' => false,
],
'lock' => [
'provider' => 'db',
'config' => [
'prefix' => '',
],
],
'cache_types' => [
'config' => 0,
'layout' => 0,
'block_html' => 0,
'collections' => 0,
'reflection' => 0,
'db_ddl' => 0,
'compiled_config' => 1,
'eav' => 0,
'customer_notification' => 0,
'config_integration' => 0,
'config_integration_api' => 0,
'full_page' => 0,
'config_webservice' => 0,
'translate' => 0,
'vertex' => 0,
],
'downloadable_domains' => [
],
'install' => [
'date' => 'Fri, 09 Oct 2020 12:13:31 +0000',
],
];
```
I don't have any extra modules enabled.

|
non_defect
|
response code when exception is thrown during bootstrap please review our guidelines before adding a new issue fields marked with are required please don t remove the template preconditions provide the exact magento version example and any important information on the environment where bug is reproducible version php steps to reproduce important provide a set of clear steps to reproduce this bug we can not provide support without clear instructions on how to reproduce change your db host to something wrong expected result an error message is displayed and a response code is returned actual result an error message is displayed and the http response code is ok very annoying for probes who think the pod is ok running additiona info from engcom full env php removed crypt key php return backend frontname admin queue consumers wait for messages crypt key db table prefix connection default host nonexisting dbname whatever username root password root model engine innodb initstatements set names active driver options false resource default setup connection default x frame options sameorigin mage mode production session save files cache frontend default id prefix page cache id prefix allow parallel generation false lock provider db config prefix cache types config layout block html collections reflection db ddl compiled config eav customer notification config integration config integration api full page config webservice translate vertex downloadable domains install date fri oct i havn t any extra module enabled
| 0
|
616,658
| 19,309,189,620
|
IssuesEvent
|
2021-12-13 14:39:53
|
oncokb/oncokb
|
https://api.github.com/repos/oncokb/oncokb
|
opened
|
null consequence on variant 17:g.59926628T>C
|
bug high priority
|
The variant is supposed to have `intro_variant` consequence.
|
1.0
|
null consequence on variant 17:g.59926628T>C - The variant is supposed to have `intro_variant` consequence.
|
non_defect
|
null consequence on variant g c the variant is supposed to have intro variant consequence
| 0
|
72,376
| 24,085,070,806
|
IssuesEvent
|
2022-09-19 10:10:09
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
opened
|
[🐛 Bug]: Unable to find an exact match for CDP version 105, so returning the closest version found: 104
|
I-defect needs-triaging
|
### What happened?
**Selenium version:** 4.4.0
**Actual result:** Unable to find an exact match for CDP version 105, so returning the closest version found: 104

### How can we reproduce the issue?
```shell
WebDriver driver = new ChromeDriver();
openBrowser(driver,"http://192.168.15.237/bagisto-demo/public/");
String[] cartProducts = {"Men's Polo T-shirt","Sunglasses"}; // products for add to cart.
addToCart(driver,cartProducts); // add-Product-to-cart
public static void addToCart(WebDriver driver, String[] cartProducts) {
int j=0;
List<WebElement> products = driver.findElements(By.cssSelector("div.card-body"));
for (int i=1;i<products.size();i++) {
List itemForAddToCart = Arrays.asList(cartProducts);
if (itemForAddToCart.contains(products)) {
j++;
driver.findElement(By.xpath("//button[@class='btn btn-add-to-cart.small-padding']")).click();
}
if(j == cartProducts.length) {
break;
}
}
}
```
### Relevant log output
```shell
Sep 19, 2022 3:20:04 PM org.openqa.selenium.remote.ProtocolHandshake createSession
INFO: Detected upstream dialect: W3C
Sep 19, 2022 3:20:05 PM org.openqa.selenium.devtools.CdpVersionFinder findNearestMatch
WARNING: Unable to find an exact match for CDP version 105, so returning the closest version found: 104
Sep 19, 2022 3:20:05 PM org.openqa.selenium.devtools.CdpVersionFinder findNearestMatch
INFO: Found CDP implementation for version 105 of 104
```
### Operating System
Ubuntu 64-bit
### Selenium version
java: 18.0.2.1, selenium: 4.4.0
### What are the browser(s) and version(s) where you see this issue?
ChromeDriver, v.105.0.5195.125
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver, v.105.0.5195.125
### Are you using Selenium Grid?
no
|
1.0
|
[🐛 Bug]: Unable to find an exact match for CDP version 105, so returning the closest version found: 104 - ### What happened?
**Selenium version:** 4.4.0
**Actual result:** Unable to find an exact match for CDP version 105, so returning the closest version found: 104

### How can we reproduce the issue?
```shell
WebDriver driver = new ChromeDriver();
openBrowser(driver,"http://192.168.15.237/bagisto-demo/public/");
String[] cartProducts = {"Men's Polo T-shirt","Sunglasses"}; // products for add to cart.
addToCart(driver,cartProducts); // add-Product-to-cart
public static void addToCart(WebDriver driver, String[] cartProducts) {
int j=0;
List<WebElement> products = driver.findElements(By.cssSelector("div.card-body"));
for (int i=1;i<products.size();i++) {
List itemForAddToCart = Arrays.asList(cartProducts);
if (itemForAddToCart.contains(products)) {
j++;
driver.findElement(By.xpath("//button[@class='btn btn-add-to-cart.small-padding']")).click();
}
if(j == cartProducts.length) {
break;
}
}
}
```
### Relevant log output
```shell
Sep 19, 2022 3:20:04 PM org.openqa.selenium.remote.ProtocolHandshake createSession
INFO: Detected upstream dialect: W3C
Sep 19, 2022 3:20:05 PM org.openqa.selenium.devtools.CdpVersionFinder findNearestMatch
WARNING: Unable to find an exact match for CDP version 105, so returning the closest version found: 104
Sep 19, 2022 3:20:05 PM org.openqa.selenium.devtools.CdpVersionFinder findNearestMatch
INFO: Found CDP implementation for version 105 of 104
```
### Operating System
Ubuntu 64-bit
### Selenium version
java: 18.0.2.1, selenium: 4.4.0
### What are the browser(s) and version(s) where you see this issue?
ChromeDriver, v.105.0.5195.125
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver, v.105.0.5195.125
### Are you using Selenium Grid?
no
|
defect
|
unable to find an exact match for cdp version so returning the closest version found what happened selenium version actual result unable to find an exact match for cdp version so returning the closest version found how can we reproduce the issue shell webdriver driver new chromedriver openbrowser driver string cartproducts men s polo t shirt sunglasses products for add to cart addtocart driver cartproducts add product to cart public static void addtocart webdriver driver string cartproducts int j list products driver findelements by cssselector div card body for int i i products size i list itemforaddtocart arrays aslist cartproducts if itemforaddtocart contains products j driver findelement by xpath button click if j cartproducts length break relevant log output shell sep pm org openqa selenium remote protocolhandshake createsession info detected upstream dialect sep pm org openqa selenium devtools cdpversionfinder findnearestmatch warning unable to find an exact match for cdp version so returning the closest version found sep pm org openqa selenium devtools cdpversionfinder findnearestmatch info found cdp implementation for version of operating system ubunt selenium version java selenium what are the browser s and version s where you see this issue chromedriver v what are the browser driver s and version s where you see this issue chromedriver v are you using selenium grid no
| 1
|
352,802
| 25,082,880,040
|
IssuesEvent
|
2022-11-07 20:56:45
|
hashicorp/terraform-provider-aws
|
https://api.github.com/repos/hashicorp/terraform-provider-aws
|
closed
|
[Docs]: Fix Broken/Cluttered RDS Tutorial Link
|
documentation needs-triage
|
### Documentation Link
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance
### Description
The documentation has the following paragraph referring users to the tutorial section:
> Hands-on: Try the [Manage AWS RDS Instances](https://learn.hashicorp.com/tutorials/terraform/aws-rds?in=terraform/modules&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorial on HashiCorp Learn.
For me this results in a dead/broken link and the page telling me "We couldn't find the page you're looking for.", but if I remove all the clutter from the URL and enter just the base `https://learn.hashicorp.com/tutorials/terraform/aws-rds` I successfully land at the desired tutorial page.
I suggest you remove the following part from the URL:
`?in=terraform/modules&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS`
### References
_No response_
### Would you like to implement a fix?
_No response_
|
1.0
|
[Docs]: Fix Broken/Cluttered RDS Tutorial Link - ### Documentation Link
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance
### Description
The documentation has the following paragraph referring users to the tutorial section:
> Hands-on: Try the [Manage AWS RDS Instances](https://learn.hashicorp.com/tutorials/terraform/aws-rds?in=terraform/modules&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorial on HashiCorp Learn.
For me this results in a dead/broken link and the page telling me "We couldn't find the page you're looking for.", but if I remove all the clutter from the URL and enter just the base `https://learn.hashicorp.com/tutorials/terraform/aws-rds` I successfully land at the desired tutorial page.
I suggest you remove the following part from the URL:
`?in=terraform/modules&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS`
### References
_No response_
### Would you like to implement a fix?
_No response_
|
non_defect
|
fix broken cluttered rds tutorial link documentation link description the documentation has the following paragraph referring users to the tutorial section hands on try the tutorial on hashicorp learn for me this results in a dead broken link and the page telling me we couldn t find the page you re looking for but if i remove all the clutter from the url and enter just the base i successfully land at the desired tutorial page i suggest you remove the following part from the url in terraform modules utm source website utm medium web io utm offer article page utm content docs references no response would you like to implement a fix no response
| 0
|
168,435
| 6,375,307,568
|
IssuesEvent
|
2017-08-02 02:23:10
|
minio/minio-go
|
https://api.github.com/repos/minio/minio-go
|
closed
|
S3 server at wasabi.com doesn't support streaming uploads from minio
|
priority: medium triage working as intended
|
Hi,
The S3 server at wasabi.com seems to not support streaming uploads from minio.
The upload succeeds, but, when downloading the file, it has an actually downloaded content like
```
1df;chunk-signature=0fe0ec35__________________(redacted)____________________b4bd5f30
my content here...
0;chunk-signature=eac700d9__________________(redacted)____________________b6091939
```
It would probably work fine if I used minio's non-streaming API instead.
But, is there anything minio can do? Somehow detect whether streaming is supported by this server, or, throw an error if the data is not streamed successfully?
Regards
mappu
CC @wasabi-tech @jcflowers
|
1.0
|
S3 server at wasabi.com doesn't support streaming uploads from minio - Hi,
The S3 server at wasabi.com seems to not support streaming uploads from minio.
The upload succeeds, but, when downloading the file, it has an actually downloaded content like
```
1df;chunk-signature=0fe0ec35__________________(redacted)____________________b4bd5f30
my content here...
0;chunk-signature=eac700d9__________________(redacted)____________________b6091939
```
It would probably work fine if I used minio's non-streaming API instead.
But, is there anything minio can do? Somehow detect whether streaming is supported by this server, or, throw an error if the data is not streamed successfully?
Regards
mappu
CC @wasabi-tech @jcflowers
|
non_defect
|
server at wasabi com doesn t support streaming uploads from minio hi the server at wasabi com seems to not support streaming uploads from minio the upload succeeds but when downloading the file it has an actually downloaded content like chunk signature redacted my content here chunk signature redacted it probably would work fine if i used minio s non streaming api instead but is there anything minio can do somehow detect whether streaming is supported by this server or throw an error if the data is not streamed successfully regards mappu cc wasabi tech jcflowers
| 0
|
250,775
| 21,335,646,687
|
IssuesEvent
|
2022-04-18 14:16:51
|
hoppscotch/hoppscotch
|
https://api.github.com/repos/hoppscotch/hoppscotch
|
reopened
|
[bug]: synchronize history message keeps showing up
|
bug need testing
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current behavior
Every time I log in to hop and use the REST tool, the sync history message keeps popping up. In the video I show the case.
https://user-images.githubusercontent.com/56084970/162647659-9a2995da-34aa-418c-ba9b-83c4207ca991.mp4
### Steps to reproduce
1. Open hoppscotch
2. Go to tool REST
3. See error
### Environment
Production
### Version
Cloud
|
1.0
|
[bug]: synchronize history message keeps showing up - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current behavior
Every time I log in to hop and use the REST tool, the sync history message keeps popping up. In the video I show the case.
https://user-images.githubusercontent.com/56084970/162647659-9a2995da-34aa-418c-ba9b-83c4207ca991.mp4
### Steps to reproduce
1. Open hoppscotch
2. Go to tool REST
3. See error
### Environment
Production
### Version
Cloud
|
non_defect
|
synchronize history message keeps showing up is there an existing issue for this i have searched the existing issues current behavior every time i log in to hop and use the rest tool the sync history message keeps popping up in the video i show the case steps to reproduce open hoppscotch go to tool rest see error environment production version cloud
| 0
|
55,715
| 14,654,132,422
|
IssuesEvent
|
2020-12-28 07:55:51
|
SAP/fundamental-ngx
|
https://api.github.com/repos/SAP/fundamental-ngx
|
closed
|
Bug: (Core) Toolbar – Doesn't work correct with a lot of items and shouldOverflow option
|
Defect Hunting bug core denoland
|
#### Is this a bug, enhancement, or feature request?
Bug
#### Briefly describe your proposal.
Toolbar with `shouldOverflow` doesn't refresh the view after changes to `toolbar-item`.
Only a window resize triggers a refresh.
Also, when we have a lot of clamped items, we don't have scroll in the popover or on the page in general.
Please see screenshot

#### If this is a bug, please provide steps for reproducing it.
Add/remove items.
Please see example on [stackblitz](https://stackblitz.com/edit/toolbar-shouldoverflow-issue).
#### Please provide relevant source code if applicable.
https://stackblitz.com/edit/toolbar-shouldoverflow-issue
#### Is there anything else we should know?
Also, it would be great if the developer had access to refresh the toolbar state.
|
1.0
|
Bug: (Core) Toolbar – Doesn't work correct with a lot of items and shouldOverflow option - #### Is this a bug, enhancement, or feature request?
Bug
#### Briefly describe your proposal.
Toolbar with `shouldOverflow` doesn't refresh the view after changes to `toolbar-item`.
Only a window resize triggers a refresh.
Also, when we have a lot of clamped items, we don't have scroll in the popover or on the page in general.
Please see screenshot

#### If this is a bug, please provide steps for reproducing it.
Add/remove items.
Please see example on [stackblitz](https://stackblitz.com/edit/toolbar-shouldoverflow-issue).
#### Please provide relevant source code if applicable.
https://stackblitz.com/edit/toolbar-shouldoverflow-issue
#### Is there anything else we should know?
Also, it would be great if the developer had access to refresh the toolbar state.
|
defect
|
bug core toolbar – doesn t work correct with a lot of items and shouldoverflow option is this a bug enhancement or feature request bug briefly describe your proposal toolbar with shouldoverflow doesn t refresh view after changes toolbar item only resize window trigged also when we have a lot of clamped items we don t have scroll in popover or on the page in general please see screenshot if this is a bug please provide steps for reproducing it add remove items please see example on please provide relevant source code if applicable is there anything else we should know also it would be great if developer have access to refresh toolbar state
| 1
|
68,630
| 21,770,097,537
|
IssuesEvent
|
2022-05-13 08:13:34
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Emoji is clipped at the top in the composer
|
T-Defect X-Regression S-Minor A-Composer O-Frequent
|
### Steps to reproduce
<img width="289" alt="Screen Shot 2022-05-13 at 10 12 05" src="https://user-images.githubusercontent.com/769871/168240568-1386b0fc-8593-4306-bead-c543f82d7b06.png">
### Outcome
#### What did you expect?
For the emoji not to be clipped
#### What happened instead?
See the screenshot
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
Emoji is clipped at the top in the composer - ### Steps to reproduce
<img width="289" alt="Screen Shot 2022-05-13 at 10 12 05" src="https://user-images.githubusercontent.com/769871/168240568-1386b0fc-8593-4306-bead-c543f82d7b06.png">
### Outcome
#### What did you expect?
For the emoji not to be clipped
#### What happened instead?
See the screenshot
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
emoji is clipped at the top in the composer steps to reproduce img width alt screen shot at src outcome what did you expect for the emoji not to be clipped what happened instead see the screenshot operating system no response browser information no response url for webapp no response application version no response homeserver no response will you send logs no
| 1
|
18,442
| 3,061,294,903
|
IssuesEvent
|
2015-08-15 11:33:41
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
Record.getValue(Field) returns wrong value if ambiguous column names are contained in the record, and the schema name is not present in the argument
|
C: Functionality P: Medium R: Fixed T: Defect
|
When fetching data from the database that contains ambiguous column names, e.g.
```sql
select b.id, a.id
from t_author a
join t_book b
on a.id = b.author_id
order by b.id, a.id
```
... resulting in
```
+----+----+
| id| id|
+----+----+
| 1| 1|
| 2| 1|
| 3| 2|
| 4| 2|
+----+----+
```
Then the `Record.getValue(Field)` or `Result.getValue(Field)` method doesn't work correctly, if the schema name isn't present in both:
- The record's field
- The argument's field
E.g., since #4283, the schema name is fetched from those JDBC drivers that support it, but not from PostgreSQL. When getting the value via `record.getValue(field(name("public", "t_author", "id")))` in PostgreSQL, the value is wrong (that of `t_book.id`).
Conversely, when using generated tables / columns, the schema is always present. But when getting the value via `record.getValue(field(name("t_author", "id")))`, the value is again wrong.
----
See also: #4283
|
1.0
|
Record.getValue(Field) returns wrong value if ambiguous column names are contained in the record, and the schema name is not present in the argument - When fetching data from the database that contains ambiguous column names, e.g.
```sql
select b.id, a.id
from t_author a
join t_book b
on a.id = b.author_id
order by b.id, a.id
```
... resulting in
```
+----+----+
| id| id|
+----+----+
| 1| 1|
| 2| 1|
| 3| 2|
| 4| 2|
+----+----+
```
Then the `Record.getValue(Field)` or `Result.getValue(Field)` method doesn't work correctly, if the schema name isn't present in both:
- The record's field
- The argument's field
E.g., since #4283, the schema name is fetched from those JDBC drivers that support it, but not from PostgreSQL. When getting the value via `record.getValue(field(name("public", "t_author", "id")))` in PostgreSQL, the value is wrong (that of `t_book.id`).
Conversely, when using generated tables / columns, the schema is always present. But when getting the value via `record.getValue(field(name("t_author", "id")))`, the value is again wrong.
----
See also: #4283
|
defect
|
record getvalue field returns wrong value if ambiguous column names are contained in the record and the schema name is not present in the argument when fetching data from the database that contains ambiguous column names e g sql select b id a id from t author a join t book b on a id b author id order by b id a id resulting in id id then the record getvalue field or result getvalue field method doesn t work correctly if the schema name isn t present in both the record s field the argument s field e g since the schema name is fetched from those jdbc drivers that support it but not from postgresql when getting the value via record getvalue field name public t author id in postgresql the value is wrong that of t book id conversely when using generated tables columns the schema is always present but when getting the value via record getvalue field name t author id the value is again wrong see also
| 1
|
103,570
| 8,921,998,208
|
IssuesEvent
|
2019-01-21 11:40:30
|
pando-project/jerryscript
|
https://api.github.com/repos/pando-project/jerryscript
|
closed
|
test-api.c has an out-of-bounds write (buffer overflow)
|
bug test
|
Reproducing steps:
1. I use my Stensal SDK (https://stensal.com)
2. build jerryscript with stensal-c
3. Run ./build/tests/unit-test-api
This is what I got:
ok 148343051 148341491 0xfff0e4a8 2
ok construct 148343083 148343163 0xfff11bec 1
ok 148343251 148341491 0xfff0e4b4 0
ok object free callback
DTS_MSG: Stensal DTS detected a fatal program error!
DTS_MSG: Continuing the execution will cause unexpected behaviors, abort!
DTS_MSG: OOB Write:writing 1 bytes at 0xfff11570 will corrupt the adjacent data.
DTS_MSG: Diagnostic information:
-
- The object to-be-written (start:0xfff1156c, size:4 bytes) is allocated at
- file:/home/sbuilder/workspace/jerryscript/tests/unit-core/test-api.c::881, 10
- 0xfff1156c 0xfff1156f
- +------------------------+
- |the object to-be-written|......
- +------------------------+
- ^~~~~~~~~~
- the write starts at 0xfff11570 that is right after the object end.
- Stack trace (most recent call first):
-[1] file:/home/sbuilder/workspace/jerryscript/tests/unit-core/test-api.c::884, 5
-[2] file:/home/nwang/acore/musl/src/env/__libc_start_main.c::180, 11
|
1.0
|
test-api.c has an out-of-bounds write (buffer overflow) - Reproducing steps:
1. I use my Stensal SDK (https://stensal.com)
2. build jerryscript with stensal-c
3. Run ./build/tests/unit-test-api
This is what I got:
ok 148343051 148341491 0xfff0e4a8 2
ok construct 148343083 148343163 0xfff11bec 1
ok 148343251 148341491 0xfff0e4b4 0
ok object free callback
DTS_MSG: Stensal DTS detected a fatal program error!
DTS_MSG: Continuing the execution will cause unexpected behaviors, abort!
DTS_MSG: OOB Write:writing 1 bytes at 0xfff11570 will corrupt the adjacent data.
DTS_MSG: Diagnostic information:
-
- The object to-be-written (start:0xfff1156c, size:4 bytes) is allocated at
- file:/home/sbuilder/workspace/jerryscript/tests/unit-core/test-api.c::881, 10
- 0xfff1156c 0xfff1156f
- +------------------------+
- |the object to-be-written|......
- +------------------------+
- ^~~~~~~~~~
- the write starts at 0xfff11570 that is right after the object end.
- Stack trace (most recent call first):
-[1] file:/home/sbuilder/workspace/jerryscript/tests/unit-core/test-api.c::884, 5
-[2] file:/home/nwang/acore/musl/src/env/__libc_start_main.c::180, 11
|
non_defect
|
test api c has an out of bounds write buffer overflow reproducing steps i use my stensal sdk build jerryscript with stensal c run build tests unit test api this is what i got ok ok construct ok ok object free callback dts msg stensal dts detected a fatal program error dts msg continuing the execution will cause unexpected behaviors abort dts msg oob write writing bytes at will corrupt the adjacent data dts msg diagnostic information the object to be written start size bytes is allocated at file home sbuilder workspace jerryscript tests unit core test api c the object to be written the write starts at that is right after the object end stack trace most recent call first file home sbuilder workspace jerryscript tests unit core test api c file home nwang acore musl src env libc start main c
| 0
|
418,011
| 28,113,261,643
|
IssuesEvent
|
2023-03-31 08:51:28
|
jaredoong/ped
|
https://api.github.com/repos/jaredoong/ped
|
opened
|
Incorrect link and description in Quick start section of user guide
|
severity.Low type.DocumentationBug
|

Would be good to update the link and name of the program to ensure that a totally new user would be able to find the program.
<!--session: 1680252471869-a4ff2ce6-21c8-4fa9-ad69-6bc92f657b80-->
<!--Version: Web v3.4.7-->
|
1.0
|
Incorrect link and description in Quick start section of user guide - 
Would be good to update the link and name of the program to ensure that a totally new user would be able to find the program.
<!--session: 1680252471869-a4ff2ce6-21c8-4fa9-ad69-6bc92f657b80-->
<!--Version: Web v3.4.7-->
|
non_defect
|
incorrect link and description in quick start section of user guide would be good to update the link and name of the program to ensure that a totally new user would be able to find the program
| 0
|
55,549
| 14,538,872,908
|
IssuesEvent
|
2020-12-15 11:04:13
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
opened
|
Intel QAT compression fails with ZFS 2.0.0
|
Status: Triage Needed Type: Defect
|
System information:
Distribution Centos 8.2
Stock kernel:
```
[root@dellqat zfs_latest]# uname -a
Linux dellqat 4.18.0-193.19.1.el8_2.x86_64 #1 SMP Mon Sep 14 14:37:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
[root@dellqat zfs_latest]# modinfo zfs | grep -iw version
version: 2.0.0-1
[root@dellqat zfs_latest]# modinfo spl | grep -iw version
version: 2.0.0-1
```
ZFS 2.0.0 fails to use Intel QAT compression.
QAT and ZFS compiled from sources:
QAT:
qat1.7.l.4.11.0-00001 (latest from intel)
```
./configure --enable-icp-trace --enable-icp-debug --enable-icp-log-syslog --enable-kapi
make
make install
```
ZFS:
[root@dellqat zfs_latest]# git status
On branch zfs-2.0-release
Your branch is up to date with 'origin/zfs-2.0-release'.
nothing to commit, working tree clean
```
export ICP_ROOT=/opt/A3C/qat1.7.l.4.11.0-00001
./configure --with-qat=/opt/A3C/qat1.7.l.4.11.0-00001
make
make install
ldconfig
```
(tried also various reboots...)
```
[root@dellqat ~]# lsmod | grep qat
qat_api 634880 2 zfs
qat_dh895xcc 20480 0
intel_qat 249856 3 qat_api,usdm_drv,qat_dh895xcc
uio 20480 1 intel_qat
[root@dellqat ~]# lsmod | grep zfs
zfs 4481024 1
zunicode 335872 1 zfs
zzstd 507904 1 zfs
qat_api 634880 2 zfs
zlua 176128 1 zfs
zcommon 94208 1 zfs
znvpair 90112 2 zfs,zcommon
zavl 16384 1 zfs
icp 323584 1 zfs
spl 110592 6 zfs,icp,zzstd,znvpair,zcommon,zavl
```
`zpool create -f -m /mnt/test test /dev/sdb /dev/sdc /dev/sdd
`
```
[root@dellqat ~]# zfs set compression=gzip test
[root@dellqat ~]# zfs get all test
NAME PROPERTY VALUE SOURCE
test type filesystem -
test creation mar dic 15 10:58 2020 -
test used 333K -
test available 2.63T -
test referenced 24K -
test compressratio 1.00x -
test mounted yes -
test quota none default
test reservation none default
test recordsize 128K default
test mountpoint /mnt/test local
test sharenfs off default
test checksum on default
test compression gzip local
test atime on default
test devices on default
test exec on default
test setuid on default
test readonly off default
test zoned off default
test snapdir hidden default
test aclmode discard default
test aclinherit restricted default
test createtxg 1 -
test canmount on default
test xattr on default
test copies 1 default
test version 5 -
test utf8only off -
test normalization none -
test casesensitivity sensitive -
test vscan off default
test nbmand off default
test sharesmb off default
test refquota none default
test refreservation none default
test guid 3091558646933016362 -
test primarycache all default
test secondarycache all default
test usedbysnapshots 0B -
test usedbydataset 24K -
test usedbychildren 309K -
test usedbyrefreservation 0B -
test logbias latency default
test objsetid 54 -
test dedup off default
test mlslabel none default
test sync standard default
test dnodesize legacy default
test refcompressratio 1.00x -
test written 24K -
test logicalused 115K -
test logicalreferenced 12K -
test volmode default default
test filesystem_limit none default
test snapshot_limit none default
test filesystem_count none default
test snapshot_count none default
test snapdev hidden default
test acltype off default
test context none default
test fscontext none default
test defcontext none default
test rootcontext none default
test relatime off default
test redundant_metadata all default
test overlay on default
test encryption off default
test keylocation none default
test keyformat none default
test pbkdf2iters 0 default
test special_small_blocks 0 default
[root@dellqat /]# cat /proc/spl/kstat/zfs/qat
20 1 0x01 17 4624 14088835193 61490940492455
name type data
comp_requests 4 0
comp_total_in_bytes 4 0
comp_total_out_bytes 4 0
decomp_requests 4 0
decomp_total_in_bytes 4 0
decomp_total_out_bytes 4 0
dc_fails 4 0
encrypt_requests 4 0
encrypt_total_in_bytes 4 0
encrypt_total_out_bytes 4 0
decrypt_requests 4 0
decrypt_total_in_bytes 4 0
decrypt_total_out_bytes 4 0
crypt_fails 4 0
cksum_requests 4 0
cksum_total_in_bytes 4 0
cksum_fails 4 0
```
Tested with various methods: dd, standard cp & mv, iozone
Please note the strange:
test compressratio 1.00x -
Seems compression is not there at all.
|
1.0
|
Intel QAT compression fails with ZFS 2.0.0 - System information:
Distribution Centos 8.2
Stock kernel:
```
[root@dellqat zfs_latest]# uname -a
Linux dellqat 4.18.0-193.19.1.el8_2.x86_64 #1 SMP Mon Sep 14 14:37:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
[root@dellqat zfs_latest]# modinfo zfs | grep -iw version
version: 2.0.0-1
[root@dellqat zfs_latest]# modinfo spl | grep -iw version
version: 2.0.0-1
```
ZFS 2.0.0 fails to use Intel QAT compression.
QAT and ZFS compiled from sources:
QAT:
qat1.7.l.4.11.0-00001 (latest from intel)
```
./configure --enable-icp-trace --enable-icp-debug --enable-icp-log-syslog --enable-kapi
make
make install
```
ZFS:
[root@dellqat zfs_latest]# git status
On branch zfs-2.0-release
Your branch is up to date with 'origin/zfs-2.0-release'.
nothing to commit, working tree clean
```
export ICP_ROOT=/opt/A3C/qat1.7.l.4.11.0-00001
./configure --with-qat=/opt/A3C/qat1.7.l.4.11.0-00001
make
make install
ldconfig
```
(tried also various reboots...)
```
[root@dellqat ~]# lsmod | grep qat
qat_api 634880 2 zfs
qat_dh895xcc 20480 0
intel_qat 249856 3 qat_api,usdm_drv,qat_dh895xcc
uio 20480 1 intel_qat
[root@dellqat ~]# lsmod | grep zfs
zfs 4481024 1
zunicode 335872 1 zfs
zzstd 507904 1 zfs
qat_api 634880 2 zfs
zlua 176128 1 zfs
zcommon 94208 1 zfs
znvpair 90112 2 zfs,zcommon
zavl 16384 1 zfs
icp 323584 1 zfs
spl 110592 6 zfs,icp,zzstd,znvpair,zcommon,zavl
```
`zpool create -f -m /mnt/test test /dev/sdb /dev/sdc /dev/sdd
`
```
[root@dellqat ~]# zfs set compression=gzip test
[root@dellqat ~]# zfs get all test
NAME PROPERTY VALUE SOURCE
test type filesystem -
test creation mar dic 15 10:58 2020 -
test used 333K -
test available 2.63T -
test referenced 24K -
test compressratio 1.00x -
test mounted yes -
test quota none default
test reservation none default
test recordsize 128K default
test mountpoint /mnt/test local
test sharenfs off default
test checksum on default
test compression gzip local
test atime on default
test devices on default
test exec on default
test setuid on default
test readonly off default
test zoned off default
test snapdir hidden default
test aclmode discard default
test aclinherit restricted default
test createtxg 1 -
test canmount on default
test xattr on default
test copies 1 default
test version 5 -
test utf8only off -
test normalization none -
test casesensitivity sensitive -
test vscan off default
test nbmand off default
test sharesmb off default
test refquota none default
test refreservation none default
test guid 3091558646933016362 -
test primarycache all default
test secondarycache all default
test usedbysnapshots 0B -
test usedbydataset 24K -
test usedbychildren 309K -
test usedbyrefreservation 0B -
test logbias latency default
test objsetid 54 -
test dedup off default
test mlslabel none default
test sync standard default
test dnodesize legacy default
test refcompressratio 1.00x -
test written 24K -
test logicalused 115K -
test logicalreferenced 12K -
test volmode default default
test filesystem_limit none default
test snapshot_limit none default
test filesystem_count none default
test snapshot_count none default
test snapdev hidden default
test acltype off default
test context none default
test fscontext none default
test defcontext none default
test rootcontext none default
test relatime off default
test redundant_metadata all default
test overlay on default
test encryption off default
test keylocation none default
test keyformat none default
test pbkdf2iters 0 default
test special_small_blocks 0 default
[root@dellqat /]# cat /proc/spl/kstat/zfs/qat
20 1 0x01 17 4624 14088835193 61490940492455
name type data
comp_requests 4 0
comp_total_in_bytes 4 0
comp_total_out_bytes 4 0
decomp_requests 4 0
decomp_total_in_bytes 4 0
decomp_total_out_bytes 4 0
dc_fails 4 0
encrypt_requests 4 0
encrypt_total_in_bytes 4 0
encrypt_total_out_bytes 4 0
decrypt_requests 4 0
decrypt_total_in_bytes 4 0
decrypt_total_out_bytes 4 0
crypt_fails 4 0
cksum_requests 4 0
cksum_total_in_bytes 4 0
cksum_fails 4 0
```
Tested with various methods: dd, standard cp & mv, iozone
Please note the strange:
test compressratio 1.00x -
Seems compression is not there at all.
|
defect
|
intel qat compression fails with zfs system information distribution centos stock kernel uname a linux dellqat smp mon sep utc gnu linux modinfo zfs grep iw version version modinfo spl grep iw version version zfs fails to use intel qat compression qat and zfs compiled from sources qat l latest from intel configure enable icp trace enable icp debug enable icp log syslog enable kapi make make install zfs git status on branch zfs release your branch is up to date with origin zfs release nothing to commit working tree clean export icp root opt l configure with qat opt l make make install ldconfig tried also various reboots lsmod grep qat qat api zfs qat intel qat qat api usdm drv qat uio intel qat lsmod grep zfs zfs zunicode zfs zzstd zfs qat api zfs zlua zfs zcommon zfs znvpair zfs zcommon zavl zfs icp zfs spl zfs icp zzstd znvpair zcommon zavl zpool create f m mnt test test dev sdb dev sdc dev sdd zfs set compression gzip test zfs get all test name property value source test type filesystem test creation mar dic test used test available test referenced test compressratio test mounted yes test quota none default test reservation none default test recordsize default test mountpoint mnt test local test sharenfs off default test checksum on default test compression gzip local test atime on default test devices on default test exec on default test setuid on default test readonly off default test zoned off default test snapdir hidden default test aclmode discard default test aclinherit restricted default test createtxg test canmount on default test xattr on default test copies default test version test off test normalization none test casesensitivity sensitive test vscan off default test nbmand off default test sharesmb off default test refquota none default test refreservation none default test guid test primarycache all default test secondarycache all default test usedbysnapshots test usedbydataset test usedbychildren test usedbyrefreservation test logbias latency default test objsetid test dedup off default test mlslabel none default test sync standard default test dnodesize legacy default test refcompressratio test written test logicalused test logicalreferenced test volmode default default test filesystem limit none default test snapshot limit none default test filesystem count none default test snapshot count none default test snapdev hidden default test acltype off default test context none default test fscontext none default test defcontext none default test rootcontext none default test relatime off default test redundant metadata all default test overlay on default test encryption off default test keylocation none default test keyformat none default test default test special small blocks default cat proc spl kstat zfs qat name type data comp requests comp total in bytes comp total out bytes decomp requests decomp total in bytes decomp total out bytes dc fails encrypt requests encrypt total in bytes encrypt total out bytes decrypt requests decrypt total in bytes decrypt total out bytes crypt fails cksum requests cksum total in bytes cksum fails tested with various methods dd standard cp mv iozone please note the strange test compressratio seems compression is not there at all
| 1
|
25,803
| 4,461,179,900
|
IssuesEvent
|
2016-08-24 03:46:49
|
vug/freqazoid
|
https://api.github.com/repos/vug/freqazoid
|
closed
|
DFT function optimization
|
auto-migrated Priority-Medium Type-Defect
|
```
I used your DFT code in my stud project as a starting point and I optimized it
a bit for execution speed. I attached a patch for the DFT.java file in case you
are interested in using it.
```
Original issue reported on code.google.com by `kirm...@gmail.com` on 14 Jan 2012 at 4:04
Attachments:
* [dft_enhancement.patch](https://storage.googleapis.com/google-code-attachments/freqazoid/issue-13/comment-0/dft_enhancement.patch)
|
1.0
|
DFT function optimization - ```
I used your DFT code in my stud project as a starting point and I optimized it
a bit for execution speed. I attached a patch for the DFT.java file in case you
are interested in using it.
```
Original issue reported on code.google.com by `kirm...@gmail.com` on 14 Jan 2012 at 4:04
Attachments:
* [dft_enhancement.patch](https://storage.googleapis.com/google-code-attachments/freqazoid/issue-13/comment-0/dft_enhancement.patch)
|
defect
|
dft function optimization i used your dft code in my stud project as a starting point and i optimized it a bit for execution speed i attached a patch for the dft java file in case you are interested in using it original issue reported on code google com by kirm gmail com on jan at attachments
| 1
|
63,369
| 17,616,296,544
|
IssuesEvent
|
2021-08-18 10:07:04
|
hazelcast/hazelcast-python-client
|
https://api.github.com/repos/hazelcast/hazelcast-python-client
|
closed
|
Scaling down Hazelcast cluster causes Python client to lose connection
|
Type: Defect Priority: High
|
## Description
Scaling down Hazelcast cluster causes Python client to lose connection even with `redo_operation` set to `True`.
## Steps to reproduce
Create Hazelcast cluster with 2 members.
Run client program that connects and issues periodical requests, such as:
```
import hazelcast
import logging
import random
if __name__ == "__main__":
logging.basicConfig()
logging.getLogger().setLevel(logging.INFO)
config = hazelcast.ClientConfig()
config.network_config.redo_operation = True
client = hazelcast.HazelcastClient(config)
my_map = client.get_map("map").blocking()
my_map.put("key", "value")
if my_map.get("key") == "value":
print("Connection Successful!")
print("Now, `map` will be filled with random entries.");
while True:
random_key = random.randint(1, 100000)
my_map.put("key" + str(random_key), "value" + str(random_key))
my_map.get("key" + str(random.randint(1,100000)))
if random_key % 10 == 0:
print("Map size:" + str(my_map.size()))
else:
raise Exception("Connection failed, check your configuration.")
client.shutdown()
```
Now scale the cluster up to 4 members. Connection will not be lost. Scale down to 2 members
again, an error will occur and the program will exit.
```
WARNING: [3.12.1] [guglielmo-net-04] [hz.client_0] Connection closed by server
WARNING:HazelcastClient.Connection[0](33.11.106.108:30023):[guglielmo-net-04] [hz.client_0] Connection closed by server
Nov 13, 2019 10:40:18 AM HazelcastClient.LifecycleService
INFO: [3.12.1] [guglielmo-net-04] [hz.client_0] (20190319 - 3b38a46) HazelcastClient is DISCONNECTED
INFO:HazelcastClient.LifecycleService:[guglielmo-net-04] [hz.client_0] (20190319 - 3b38a46) HazelcastClient is DISCONNECTED
Nov 13, 2019 10:40:18 AM HazelcastClient.Connection[1](33.11.118.111:30023)
WARNING: [3.12.1] [guglielmo-net-04] [hz.client_0] Connection closed by server
WARNING:HazelcastClient.Connection[1](33.11.118.111:30023):[guglielmo-net-04] [hz.client_0] Connection closed by server
Nov 13, 2019 10:40:18 AM HazelcastClient.ClusterService
WARNING: [3.12.1] [guglielmo-net-04] [hz.client_0] Connection closed to owner node. Trying to reconnect.
WARNING:HazelcastClient.ClusterService:[guglielmo-net-04] [hz.client_0] Connection closed to owner node. Trying to reconnect.
Nov 13, 2019 10:40:19 AM HazelcastClient.ClusterService
INFO: [3.12.1] [guglielmo-net-04] [hz.client_0] Connecting to Address(host=100.96.6.2, port=30023)
INFO:HazelcastClient.ClusterService:[guglielmo-net-04] [hz.client_0] Connecting to Address(host=100.96.6.2, port=30023)
Nov 13, 2019 10:41:39 AM HazelcastClient.Connection[2](33.11.101.6:30023)
WARNING: [3.12.1] [guglielmo-net-04] [hz.client_0] Connection closed by server
WARNING:HazelcastClient.Connection[2](33.11.101.6:30023):[guglielmo-net-04] [hz.client_0] Connection closed by server
Nov 13, 2019 10:41:39 AM HazelcastClient.Connection[3](33.11.120.0:30023)
WARNING: [3.12.1] [guglielmo-net-04] [hz.client_0] Connection closed by server
WARNING:HazelcastClient.Connection[3](33.11.120.0:30023):[guglielmo-net-04] [hz.client_0] Connection closed by server
Traceback (most recent call last):
File "client.py", line 38, in <module>
my_map.put("key" + str(random_key), "value" + str(random_key))
File "/home/ubuntu/.local/lib/python2.7/site-packages/hazelcast/future.py", line 274, in f
return result.result()
File "/home/ubuntu/.local/lib/python2.7/site-packages/hazelcast/future.py", line 61, in result
six.reraise(self._exception.__class__, self._exception, self._traceback)
File "/home/ubuntu/.local/lib/python2.7/site-packages/hazelcast/future.py", line 145, in callback
future.set_result(continuation_func(f, *args))
File "/home/ubuntu/.local/lib/python2.7/site-packages/hazelcast/proxy/base.py", line 12, in default_response_handler
response = future.result()
File "/home/ubuntu/.local/lib/python2.7/site-packages/hazelcast/future.py", line 61, in result
six.reraise(self._exception.__class__, self._exception, self._traceback)
File "<string>", line 3, in reraise
hazelcast.exception.TimeoutError: Request timed out after 120 seconds.
Exception in thread hazelcast-reactor (most likely raised during interpreter shutdown):
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
File "/usr/lib/python2.7/threading.py", line 754, in run
File "/home/ubuntu/.local/lib/python2.7/site-packages/hazelcast/reactor.py", line 47, in _loop
<type 'exceptions.AttributeError'>: 'NoneType' object has no attribute 'error'
```
|
1.0
|
Scaling down Hazelcast cluster causes Python client to lose connection - ## Description
Scaling down Hazelcast cluster causes Python client to lose connection even with `redo_operation` set to `True`.
## Steps to reproduce
Create Hazelcast cluster with 2 members.
Run client program that connects and issues periodical requests, such as:
```
import hazelcast
import logging
import random
if __name__ == "__main__":
logging.basicConfig()
logging.getLogger().setLevel(logging.INFO)
config = hazelcast.ClientConfig()
config.network_config.redo_operation = True
client = hazelcast.HazelcastClient(config)
my_map = client.get_map("map").blocking()
my_map.put("key", "value")
if my_map.get("key") == "value":
print("Connection Successful!")
print("Now, `map` will be filled with random entries.");
while True:
random_key = random.randint(1, 100000)
my_map.put("key" + str(random_key), "value" + str(random_key))
my_map.get("key" + str(random.randint(1,100000)))
if random_key % 10 == 0:
print("Map size:" + str(my_map.size()))
else:
raise Exception("Connection failed, check your configuration.")
client.shutdown()
```
Now scale the cluster up to 4 members. Connection will not be lost. Scale down to 2 members
again, an error will occur and the program will exit.
```
WARNING: [3.12.1] [guglielmo-net-04] [hz.client_0] Connection closed by server
WARNING:HazelcastClient.Connection[0](33.11.106.108:30023):[guglielmo-net-04] [hz.client_0] Connection closed by server
Nov 13, 2019 10:40:18 AM HazelcastClient.LifecycleService
INFO: [3.12.1] [guglielmo-net-04] [hz.client_0] (20190319 - 3b38a46) HazelcastClient is DISCONNECTED
INFO:HazelcastClient.LifecycleService:[guglielmo-net-04] [hz.client_0] (20190319 - 3b38a46) HazelcastClient is DISCONNECTED
Nov 13, 2019 10:40:18 AM HazelcastClient.Connection[1](33.11.118.111:30023)
WARNING: [3.12.1] [guglielmo-net-04] [hz.client_0] Connection closed by server
WARNING:HazelcastClient.Connection[1](33.11.118.111:30023):[guglielmo-net-04] [hz.client_0] Connection closed by server
Nov 13, 2019 10:40:18 AM HazelcastClient.ClusterService
WARNING: [3.12.1] [guglielmo-net-04] [hz.client_0] Connection closed to owner node. Trying to reconnect.
WARNING:HazelcastClient.ClusterService:[guglielmo-net-04] [hz.client_0] Connection closed to owner node. Trying to reconnect.
Nov 13, 2019 10:40:19 AM HazelcastClient.ClusterService
INFO: [3.12.1] [guglielmo-net-04] [hz.client_0] Connecting to Address(host=100.96.6.2, port=30023)
INFO:HazelcastClient.ClusterService:[guglielmo-net-04] [hz.client_0] Connecting to Address(host=100.96.6.2, port=30023)
Nov 13, 2019 10:41:39 AM HazelcastClient.Connection[2](33.11.101.6:30023)
WARNING: [3.12.1] [guglielmo-net-04] [hz.client_0] Connection closed by server
WARNING:HazelcastClient.Connection[2](33.11.101.6:30023):[guglielmo-net-04] [hz.client_0] Connection closed by server
Nov 13, 2019 10:41:39 AM HazelcastClient.Connection[3](33.11.120.0:30023)
WARNING: [3.12.1] [guglielmo-net-04] [hz.client_0] Connection closed by server
WARNING:HazelcastClient.Connection[3](33.11.120.0:30023):[guglielmo-net-04] [hz.client_0] Connection closed by server
Traceback (most recent call last):
File "client.py", line 38, in <module>
my_map.put("key" + str(random_key), "value" + str(random_key))
File "/home/ubuntu/.local/lib/python2.7/site-packages/hazelcast/future.py", line 274, in f
return result.result()
File "/home/ubuntu/.local/lib/python2.7/site-packages/hazelcast/future.py", line 61, in result
six.reraise(self._exception.__class__, self._exception, self._traceback)
File "/home/ubuntu/.local/lib/python2.7/site-packages/hazelcast/future.py", line 145, in callback
future.set_result(continuation_func(f, *args))
File "/home/ubuntu/.local/lib/python2.7/site-packages/hazelcast/proxy/base.py", line 12, in default_response_handler
response = future.result()
File "/home/ubuntu/.local/lib/python2.7/site-packages/hazelcast/future.py", line 61, in result
six.reraise(self._exception.__class__, self._exception, self._traceback)
File "<string>", line 3, in reraise
hazelcast.exception.TimeoutError: Request timed out after 120 seconds.
Exception in thread hazelcast-reactor (most likely raised during interpreter shutdown):
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
File "/usr/lib/python2.7/threading.py", line 754, in run
File "/home/ubuntu/.local/lib/python2.7/site-packages/hazelcast/reactor.py", line 47, in _loop
<type 'exceptions.AttributeError'>: 'NoneType' object has no attribute 'error'
```
|
defect
|
scaling down hazelcast cluster causes python client to lose connection description scaling down hazelcast cluster causes python client to lose connection even with redo operation set to true steps to reproduce create hazelcast cluster with members run client program that connects and issues periodical requests such as import hazelcast import logging import random if name main logging basicconfig logging getlogger setlevel logging info config hazelcast clientconfig config network config redo operation true client hazelcast hazelcastclient config my map client get map map blocking my map put key value if my map get key value print connection successful print now map will be filled with random entries while true random key random randint my map put key str random key value str random key my map get key str random randint if random key print map size str my map size else raise exception connection failed check your configuration client shutdown now scale the cluster up to members connection will not be lost scale down to members again an error will occur and the program will exit warning connection closed by server warning hazelcastclient connection connection closed by server nov am hazelcastclient lifecycleservice info hazelcastclient is disconnected info hazelcastclient lifecycleservice hazelcastclient is disconnected nov am hazelcastclient connection warning connection closed by server warning hazelcastclient connection connection closed by server nov am hazelcastclient clusterservice warning connection closed to owner node trying to reconnect warning hazelcastclient clusterservice connection closed to owner node trying to reconnect nov am hazelcastclient clusterservice info connecting to address host port info hazelcastclient clusterservice connecting to address host port nov am hazelcastclient connection warning connection closed by server warning hazelcastclient connection connection closed by server nov am hazelcastclient connection warning connection closed by server warning hazelcastclient connection connection closed by server traceback most recent call last file client py line in my map put key str random key value str random key file home ubuntu local lib site packages hazelcast future py line in f return result result file home ubuntu local lib site packages hazelcast future py line in result six reraise self exception class self exception self traceback file home ubuntu local lib site packages hazelcast future py line in callback future set result continuation func f args file home ubuntu local lib site packages hazelcast proxy base py line in default response handler response future result file home ubuntu local lib site packages hazelcast future py line in result six reraise self exception class self exception self traceback file line in reraise hazelcast exception timeouterror request timed out after seconds exception in thread hazelcast reactor most likely raised during interpreter shutdown traceback most recent call last file usr lib threading py line in bootstrap inner file usr lib threading py line in run file home ubuntu local lib site packages hazelcast reactor py line in loop nonetype object has no attribute error
| 1
|
40,873
| 10,208,719,902
|
IssuesEvent
|
2019-08-14 10:50:32
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
`com.mysql.jdbc.Driver'. This is deprecated.
|
T: Defect
|
### Expected behavior and actual behavior:
[INFO] --- jooq-codegen-maven:3.11.11:generate (jooq) @ sw-jooq ---
[INFO] Database : Inferring driver com.mysql.jdbc.Driver from URL jdbc:mysql://114.11.......
Loading class **`com.mysql.jdbc.Driver'. This is deprecated.** The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
### Steps to reproduce the problem (if possible, create an MCVE: https://github.com/jOOQ/jOOQ-mcve):
### Versions:
- jOOQ:3.11.11
- Java:8.0
- Database (include vendor):mysql
- OS:window
- JDBC Driver (include name if inofficial driver):
|
1.0
|
`com.mysql.jdbc.Driver'. This is deprecated. - ### Expected behavior and actual behavior:
[INFO] --- jooq-codegen-maven:3.11.11:generate (jooq) @ sw-jooq ---
[INFO] Database : Inferring driver com.mysql.jdbc.Driver from URL jdbc:mysql://114.11.......
Loading class **`com.mysql.jdbc.Driver'. This is deprecated.** The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
### Steps to reproduce the problem (if possible, create an MCVE: https://github.com/jOOQ/jOOQ-mcve):
### Versions:
- jOOQ:3.11.11
- Java:8.0
- Database (include vendor):mysql
- OS:window
- JDBC Driver (include name if inofficial driver):
|
defect
|
com mysql jdbc driver this is deprecated expected behavior and actual behavior jooq codegen maven generate jooq sw jooq database inferring driver com mysql jdbc driver from url jdbc mysql loading class com mysql jdbc driver this is deprecated the new driver class is com mysql cj jdbc driver the driver is automatically registered via the spi and manual loading of the driver class is generally unnecessary steps to reproduce the problem if possible create an mcve versions jooq java database include vendor mysql os window jdbc driver include name if inofficial driver
| 1
|
68,079
| 21,472,940,631
|
IssuesEvent
|
2022-04-26 11:11:53
|
matrix-org/synapse
|
https://api.github.com/repos/matrix-org/synapse
|
closed
|
`/_synapse/admin/v1/delete_group` admin api fails with 500 error
|
A-Communities S-Tolerable T-Defect
|
<!--
**THIS IS NOT A SUPPORT CHANNEL!**
**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**,
please ask in **#synapse:matrix.org** (using a matrix.org account if necessary)
If you want to report a security issue, please see https://matrix.org/security-disclosure-policy/
This is a bug report template. By following the instructions below and
filling out the sections with your information, you will help the us to get all
the necessary data to fix your issue.
You can also preview your report before submitting it. You may remove sections
that aren't relevant to your particular case.
Text between <!-- and --> marks will be invisible in the report.
-->
### Description
Trying to remove user from group sometimes fails, adjacently https://github.com/matrix-org/synapse/blob/master/docs/admin_api/delete_group.md fails too because of same rootcause. Troublesome users seems to be existing on matrix.org server rather than our own hackab.fi one.
ADD: seems at least one other user in one group is from jkl.hacklab.fi so this might be generally federating the removal issue)
Groups that had members only from hacklab.fi got removed just fine with the api.
### Steps to reproduce
- try removing user from matrix.org HS from community on own HS
and/or
- try using admin api delete group which fails on removing such user and fails itself
```
curl -w "\n\nResponse code: %{response_code}\n\n" -s \
-X POST -H "Authorization: Bearer <New_format_is_nicely_short>" \
-H "Content-Type: application/json" \
'http://localhost:8008/_synapse/admin/v1/delete_group/+community:hacklab.fi'
{"errcode":"M_UNKNOWN","error":"Internal server error"}
Response code: 500
2021-05-18 17:41:04,696 - synapse.http.matrixfederationclient - 618 - WARNING - POST-793486 - {POST-O-569346} [matrix.org] Request failed: POST matrix://matrix.org/_matrix/federation/v1/groups/local/%2Bcommunity%3Ahacklab.fi/users/%40luminix%3Amatrix.org/remove: HttpResponseException('400: Bad Request')
```
### Version information
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
<!-- Was this issue identified on matrix.org or another homeserver? -->
- **Homeserver**: hacklab.fi
<!--
What version of Synapse is running?
You can find the Synapse version with this command:
$ curl http://localhost:8008/_synapse/admin/v1/server_version
(You may need to replace `localhost:8008` if Synapse is not configured to
listen on that port.)
-->
- **Version**: 1.34.0
- **Install method**:
matrix.org provided debian repo w/ apt
|
1.0
|
`/_synapse/admin/v1/delete_group` admin api fails with 500 error - <!--
**THIS IS NOT A SUPPORT CHANNEL!**
**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**,
please ask in **#synapse:matrix.org** (using a matrix.org account if necessary)
If you want to report a security issue, please see https://matrix.org/security-disclosure-policy/
This is a bug report template. By following the instructions below and
filling out the sections with your information, you will help the us to get all
the necessary data to fix your issue.
You can also preview your report before submitting it. You may remove sections
that aren't relevant to your particular case.
Text between <!-- and --> marks will be invisible in the report.
-->
### Description
Trying to remove user from group sometimes fails, adjacently https://github.com/matrix-org/synapse/blob/master/docs/admin_api/delete_group.md fails too because of same rootcause. Troublesome users seems to be existing on matrix.org server rather than our own hackab.fi one.
ADD: seems at least one other user in one group is from jkl.hacklab.fi so this might be generally federating the removal issue)
Groups that had members only from hacklab.fi got removed just fine with the api.
### Steps to reproduce
- try removing user from matrix.org HS from community on own HS
and/or
- try using admin api delete group which fails on removing such user and fails itself
```
curl -w "\n\nResponse code: %{response_code}\n\n" -s \
-X POST -H "Authorization: Bearer <New_format_is_nicely_short>" \
-H "Content-Type: application/json" \
'http://localhost:8008/_synapse/admin/v1/delete_group/+community:hacklab.fi'
{"errcode":"M_UNKNOWN","error":"Internal server error"}
Response code: 500
2021-05-18 17:41:04,696 - synapse.http.matrixfederationclient - 618 - WARNING - POST-793486 - {POST-O-569346} [matrix.org] Request failed: POST matrix://matrix.org/_matrix/federation/v1/groups/local/%2Bcommunity%3Ahacklab.fi/users/%40luminix%3Amatrix.org/remove: HttpResponseException('400: Bad Request')
```
### Version information
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
<!-- Was this issue identified on matrix.org or another homeserver? -->
- **Homeserver**: hacklab.fi
<!--
What version of Synapse is running?
You can find the Synapse version with this command:
$ curl http://localhost:8008/_synapse/admin/v1/server_version
(You may need to replace `localhost:8008` if Synapse is not configured to
listen on that port.)
-->
- **Version**: 1.34.0
- **Install method**:
matrix.org provided debian repo w/ apt
|
defect
|
synapse admin delete group admin api fails with error this is not a support channel if you have support questions about running or configuring your own home server please ask in synapse matrix org using a matrix org account if necessary if you want to report a security issue please see this is a bug report template by following the instructions below and filling out the sections with your information you will help the us to get all the necessary data to fix your issue you can also preview your report before submitting it you may remove sections that aren t relevant to your particular case text between marks will be invisible in the report description trying to remove user from group sometimes fails adjacently fails too because of same rootcause troublesome users seems to be existing on matrix org server rather than our own hackab fi one add seems at least one other user in one group is from jkl hacklab fi so this might be generally federating the removal issue groups that had members only from hacklab fi got removed just fine with the api steps to reproduce try removing user from matrix org hs from community on own hs and or try using admin api delete group which fails on removing such user and fails itself curl w n nresponse code response code n n s x post h authorization bearer h content type application json errcode m unknown error internal server error response code synapse http matrixfederationclient warning post post o request failed post matrix matrix org matrix federation groups local fi users org remove httpresponseexception bad request version information homeserver hacklab fi what version of synapse is running you can find the synapse version with this command curl you may need to replace localhost if synapse is not configured to listen on that port version install method matrix org provided debian repo w apt
| 1
|
49,722
| 13,187,257,111
|
IssuesEvent
|
2020-08-13 02:50:38
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
Documentation builder has no Geant4 (Trac #1934)
|
Incomplete Migration Migrated from Trac defect infrastructure
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1934">https://code.icecube.wisc.edu/ticket/1934</a>, reported by jgonzalez and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:15:18",
"description": "I improved g4-tankresponse's doxygen (to comply with ticket #1309). This has never made it to the documentation page (http://software.icecube.wisc.edu/documentation/doxygen) because Geant4 does not seem to be installed in the builder.",
"reporter": "jgonzalez",
"cc": "",
"resolution": "worksforme",
"_ts": "1550067318169976",
"component": "infrastructure",
"summary": "Documentation builder has no Geant4",
"priority": "minor",
"keywords": "",
"time": "2017-01-19T15:13:16",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
Documentation builder has no Geant4 (Trac #1934) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1934">https://code.icecube.wisc.edu/ticket/1934</a>, reported by jgonzalez and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:15:18",
"description": "I improved g4-tankresponse's doxygen (to comply with ticket #1309). This has never made it to the documentation page (http://software.icecube.wisc.edu/documentation/doxygen) because Geant4 does not seem to be installed in the builder.",
"reporter": "jgonzalez",
"cc": "",
"resolution": "worksforme",
"_ts": "1550067318169976",
"component": "infrastructure",
"summary": "Documentation builder has no Geant4",
"priority": "minor",
"keywords": "",
"time": "2017-01-19T15:13:16",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
defect
|
documentation builder has no trac migrated from json status closed changetime description i improved tankresponse s doxygen to comply with ticket this has never made it to the documentation page because does not seem to be installed in the builder reporter jgonzalez cc resolution worksforme ts component infrastructure summary documentation builder has no priority minor keywords time milestone owner nega type defect
| 1
|
43,410
| 7,044,838,830
|
IssuesEvent
|
2018-01-01 10:17:24
|
cbovar/ConvNetSharp
|
https://api.github.com/repos/cbovar/ConvNetSharp
|
closed
|
Cuda / CuDNN does not work if project is created with NuGet packages
|
documentation
|
I created an empty solution, added a new project and added the NuGet-Packages.
Then I added the example code from: https://github.com/cbovar/ConvNetSharp
It works fine with CPU, but as soon as I try to run it GPU it does not find the DLL. If I switch the build to x64 it seems to find the dll, but an exception is thrown:
> System.NotImplementedException occurred
HResult=0x80004001
Message=The method or operation is not implemented.
Source=ConvNetSharp.Volume
StackTrace:
at ConvNetSharp.Volume.Single.VolumeBuilder.SameAs(VolumeStorage`1 example, Shape shape)
at ConvNetSharp.Core.Layers.LayerBase`1.DoForward(Volume`1 input, Boolean isTraining)
at ConvNetSharp.Core.Net`1.Forward(Volume`1 input, Boolean isTraining)
at XXX.Program.Main(String[] args) in C:\Users\XXX\documents\visual studio 2017\Projects\XXX\XXX\Program.cs:line 41
on the line:
`var prob = net.Forward(x);`
|
1.0
|
Cuda / CuDNN does not work if project is created with NuGet packages - I created an empty solution, added a new project and added the NuGet-Packages.
Then I added the example code from: https://github.com/cbovar/ConvNetSharp
It works fine with CPU, but as soon as I try to run it GPU it does not find the DLL. If I switch the build to x64 it seems to find the dll, but an exception is thrown:
> System.NotImplementedException occurred
HResult=0x80004001
Message=The method or operation is not implemented.
Source=ConvNetSharp.Volume
StackTrace:
at ConvNetSharp.Volume.Single.VolumeBuilder.SameAs(VolumeStorage`1 example, Shape shape)
at ConvNetSharp.Core.Layers.LayerBase`1.DoForward(Volume`1 input, Boolean isTraining)
at ConvNetSharp.Core.Net`1.Forward(Volume`1 input, Boolean isTraining)
at XXX.Program.Main(String[] args) in C:\Users\XXX\documents\visual studio 2017\Projects\XXX\XXX\Program.cs:line 41
on the line:
`var prob = net.Forward(x);`
|
non_defect
|
cuda cudnn does not work if project is created with nuget packages i created an empty solution added a new project and added the nuget packages then i added the example code from it works fine with cpu but as soon as i try to run it gpu it does not find the dll if i switch the build to it seems to find the dll but an exception is thrown system notimplementedexception occurred hresult message the method or operation is not implemented source convnetsharp volume stacktrace at convnetsharp volume single volumebuilder sameas volumestorage example shape shape at convnetsharp core layers layerbase doforward volume input boolean istraining at convnetsharp core net forward volume input boolean istraining at xxx program main string args in c users xxx documents visual studio projects xxx xxx program cs line on the line var prob net forward x
| 0
|
17,320
| 2,998,878,125
|
IssuesEvent
|
2015-07-23 16:08:08
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
AWSJoiner returns 401
|
Team: Integration Type: Defect
|
The AWSJoiner doesn't work correctly for most regions. us-east-1 works fine, but the others I have tried fail.
```
import com.hazelcast.core.Hazelcast;
public class Member {
public static void main(String[] args) {
Hazelcast.newHazelcastInstance();
}
}
<hazelcast xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.hazelcast.com/schema/config
http://www.hazelcast.com/schema/config/hazelcast-config-3.2.xsd"
xmlns="http://www.hazelcast.com/schema/config">
<network>
<join>
<aws enabled="true">
<access-key>xxxx</access-key>
<secret-key>yyyyy</secret-key>
<region>eu-west-1</region>
<security-group-name>peter</security-group-name>
<tag-key>type</tag-key>
<tag-value>hz-nodes</tag-value>
</aws>
</join>
</network>
</hazelcast>
```
I get exceptions like this:
```
Jul 23, 2015 7:16:08 AM com.hazelcast.cluster.impl.TcpIpJoinerOverAWS
WARNING: [192.168.122.1]:5701 [dev] [3.6-SNAPSHOT] Server returned HTTP response code: 401 for URL: https://ec2.amazonaws.com/?Action=DescribeInstances&Version=2014-06-15&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=XXXXXXXX%2F20150723%2Feu-west-1%2Fec2%2Faws4_request&X-Amz-Date=20150723T041607Z&X-Amz-Expires=30&X-Amz-Signature=YYYYYYYYYYYYYYY&X-Amz-SignedHeaders=host
java.io.IOException: Server returned HTTP response code: 401 for URL: https://ec2.amazonaws.com/?Action=DescribeInstances&Version=2014-06-15&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=XXXXXXXX%2F20150723%2Feu-west-1%2Fec2%2Faws4_request&X-Amz-Date=20150723T041607Z&X-Amz-Expires=30&X-Amz-Signature=YYYYYYYYYYYYYYYY&X-Amz-SignedHeaders=host
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1626)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
at com.hazelcast.aws.impl.DescribeInstances.callService(DescribeInstances.java:81)
at com.hazelcast.aws.impl.DescribeInstances.execute(DescribeInstances.java:70)
at com.hazelcast.aws.AWSClient.getPrivateIpAddresses(AWSClient.java:45)
at com.hazelcast.cluster.impl.TcpIpJoinerOverAWS.getMembers(TcpIpJoinerOverAWS.java:46)
at com.hazelcast.cluster.impl.TcpIpJoiner.getPossibleAddresses(TcpIpJoiner.java:396)
at com.hazelcast.cluster.impl.TcpIpJoiner.joinViaPossibleMembers(TcpIpJoiner.java:126)
at com.hazelcast.cluster.impl.TcpIpJoiner.doJoin(TcpIpJoiner.java:86)
at com.hazelcast.cluster.impl.AbstractJoiner.join(AbstractJoiner.java:94)
at com.hazelcast.instance.Node.join(Node.java:534)
at com.hazelcast.instance.Node.start(Node.java:343)
at com.hazelcast.instance.HazelcastInstanceImpl.<init>(HazelcastInstanceImpl.java:132)
at com.hazelcast.instance.HazelcastInstanceFactory.constructHazelcastInstance(HazelcastInstanceFactory.java:152)
at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:135)
at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:111)
at com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:87)
at Member.main(Member.java:5)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
```
|
1.0
|
AWSJoiner returns 401 - The AWSJoiner doesn't work correctly for most regions. us-east-1 works fine, but the others I have tried fail.
```
import com.hazelcast.core.Hazelcast;
public class Member {
public static void main(String[] args) {
Hazelcast.newHazelcastInstance();
}
}
<hazelcast xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.hazelcast.com/schema/config
http://www.hazelcast.com/schema/config/hazelcast-config-3.2.xsd"
xmlns="http://www.hazelcast.com/schema/config">
<network>
<join>
<aws enabled="true">
<access-key>xxxx</access-key>
<secret-key>yyyyy</secret-key>
<region>eu-west-1</region>
<security-group-name>peter</security-group-name>
<tag-key>type</tag-key>
<tag-value>hz-nodes</tag-value>
</aws>
</join>
</network>
</hazelcast>
```
I get exceptions like this:
```
Jul 23, 2015 7:16:08 AM com.hazelcast.cluster.impl.TcpIpJoinerOverAWS
WARNING: [192.168.122.1]:5701 [dev] [3.6-SNAPSHOT] Server returned HTTP response code: 401 for URL: https://ec2.amazonaws.com/?Action=DescribeInstances&Version=2014-06-15&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=XXXXXXXX%2F20150723%2Feu-west-1%2Fec2%2Faws4_request&X-Amz-Date=20150723T041607Z&X-Amz-Expires=30&X-Amz-Signature=YYYYYYYYYYYYYYY&X-Amz-SignedHeaders=host
java.io.IOException: Server returned HTTP response code: 401 for URL: https://ec2.amazonaws.com/?Action=DescribeInstances&Version=2014-06-15&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=XXXXXXXX%2F20150723%2Feu-west-1%2Fec2%2Faws4_request&X-Amz-Date=20150723T041607Z&X-Amz-Expires=30&X-Amz-Signature=YYYYYYYYYYYYYYYY&X-Amz-SignedHeaders=host
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1626)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
at com.hazelcast.aws.impl.DescribeInstances.callService(DescribeInstances.java:81)
at com.hazelcast.aws.impl.DescribeInstances.execute(DescribeInstances.java:70)
at com.hazelcast.aws.AWSClient.getPrivateIpAddresses(AWSClient.java:45)
at com.hazelcast.cluster.impl.TcpIpJoinerOverAWS.getMembers(TcpIpJoinerOverAWS.java:46)
at com.hazelcast.cluster.impl.TcpIpJoiner.getPossibleAddresses(TcpIpJoiner.java:396)
at com.hazelcast.cluster.impl.TcpIpJoiner.joinViaPossibleMembers(TcpIpJoiner.java:126)
at com.hazelcast.cluster.impl.TcpIpJoiner.doJoin(TcpIpJoiner.java:86)
at com.hazelcast.cluster.impl.AbstractJoiner.join(AbstractJoiner.java:94)
at com.hazelcast.instance.Node.join(Node.java:534)
at com.hazelcast.instance.Node.start(Node.java:343)
at com.hazelcast.instance.HazelcastInstanceImpl.<init>(HazelcastInstanceImpl.java:132)
at com.hazelcast.instance.HazelcastInstanceFactory.constructHazelcastInstance(HazelcastInstanceFactory.java:152)
at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:135)
at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:111)
at com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:87)
at Member.main(Member.java:5)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
```
|
defect
|
awsjoiner returns the awsjoiner doesn t work correctly for most regions us east works fine but the others i have tried fail import com hazelcast core hazelcast public class member public static void main string args hazelcast newhazelcastinstance hazelcast xmlns xsi xsi schemalocation xmlns xxxx yyyyy eu west peter type hz nodes i get exceptions like this jul am com hazelcast cluster impl tcpipjoineroveraws warning server returned http response code for url java io ioexception server returned http response code for url at sun net at sun net at com hazelcast aws impl describeinstances callservice describeinstances java at com hazelcast aws impl describeinstances execute describeinstances java at com hazelcast aws awsclient getprivateipaddresses awsclient java at com hazelcast cluster impl tcpipjoineroveraws getmembers tcpipjoineroveraws java at com hazelcast cluster impl tcpipjoiner getpossibleaddresses tcpipjoiner java at com hazelcast cluster impl tcpipjoiner joinviapossiblemembers tcpipjoiner java at com hazelcast cluster impl tcpipjoiner dojoin tcpipjoiner java at com hazelcast cluster impl abstractjoiner join abstractjoiner java at com hazelcast instance node join node java at com hazelcast instance node start node java at com hazelcast instance hazelcastinstanceimpl hazelcastinstanceimpl java at com hazelcast instance hazelcastinstancefactory constructhazelcastinstance hazelcastinstancefactory java at com hazelcast instance hazelcastinstancefactory newhazelcastinstance hazelcastinstancefactory java at com hazelcast instance hazelcastinstancefactory newhazelcastinstance hazelcastinstancefactory java at com hazelcast core hazelcast newhazelcastinstance hazelcast java at member main member java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com intellij rt execution application appmain main appmain java
| 1
|
4,648
| 2,610,137,099
|
IssuesEvent
|
2015-02-26 18:43:15
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
closed
|
Different results on network game
|
auto-migrated Priority-Medium Type-Defect
|
```
Hello. I'm cannot say how to reproduce this, but...
Today we play Hedgewars network game (we doing this everyday :)), and at the
end of game (last turn), two computers show different results of the turn - on
the one computer last hedge was died due to barrels explosion, but on another
computer this hedge was flying above barrel, and still alive. And finally,
network disconnection indicator appear (but network was online, and no any
disconnects happens (this is 1Gb LAN direct connection, about 3 meters between
computers)).
If I can help with this, logs, debug, etc - just tell me.
Thanks for your game.
Play on:
Ubuntu 10.10 amd64, updated, Hedgewars 0.9.13-1
Gentoo ~amd64 updated, Hedgewars 0.9.13
```
-----
Original issue reported on code.google.com by `k0l0b0k.void@gmail.com` on 24 Sep 2010 at 7:07
|
1.0
|
Different results on network game - ```
Hello. I'm cannot say how to reproduce this, but...
Today we play Hedgewars network game (we doing this everyday :)), and at the
end of game (last turn), two computers show different results of the turn - on
the one computer last hedge was died due to barrels explosion, but on another
computer this hedge was flying above barrel, and still alive. And finally,
network disconnection indicator appear (but network was online, and no any
disconnects happens (this is 1Gb LAN direct connection, about 3 meters between
computers)).
If I can help with this, logs, debug, etc - just tell me.
Thanks for your game.
Play on:
Ubuntu 10.10 amd64, updated, Hedgewars 0.9.13-1
Gentoo ~amd64 updated, Hedgewars 0.9.13
```
-----
Original issue reported on code.google.com by `k0l0b0k.void@gmail.com` on 24 Sep 2010 at 7:07
|
defect
|
different results on network game hello i m cannot say how to reproduce this but today we play hedgewars network game we doing this everyday and at the end of game last turn two computers show different results of the turn on the one computer last hedge was died due to barrels explosion but on another computer this hedge was flying above barrel and still alive and finally network disconnection indicator appear but network was online and no any disconnects happens this is lan direct connection about meters between computers if i can help with this logs debug etc just tell me thanks for your game play on ubuntu updated hedgewars gentoo updated hedgewars original issue reported on code google com by void gmail com on sep at
| 1
|
10,698
| 2,622,180,818
|
IssuesEvent
|
2015-03-04 00:18:45
|
byzhang/leveldb
|
https://api.github.com/repos/byzhang/leveldb
|
opened
|
cppcheck - warning (leveldb-1.10.0)
|
auto-migrated Priority-Medium Type-Defect
|
```
[db\db_bench.cc:284]: (warning) Member variable 'SharedState::total' is not
initialized in the constructor.
[db\db_bench.cc:284]: (warning) Member variable 'SharedState::num_initialized'
is not initialized in the constructor.
[db\db_bench.cc:284]: (warning) Member variable 'SharedState::num_done' is not
initialized in the constructor.
[db\db_bench.cc:284]: (warning) Member variable 'SharedState::start' is not
initialized in the constructor.
[db\db_bench.cc:294]: (warning) Member variable 'ThreadState::shared' is not
initialized in the constructor.
[db\db_impl.cc:46]: (warning) Member variable 'Writer::batch' is not
initialized in the constructor.
[db\db_impl.cc:46]: (warning) Member variable 'Writer::sync' is not initialized
in the constructor.
[db\db_impl.cc:46]: (warning) Member variable 'Writer::done' is not initialized
in the constructor.
[db\version_set.cc:170]: (warning) Member variable
'LevelFileNumIterator::value_buf_' is not initialized in the constructor.
[doc\bench\db_bench_sqlite3.cc:315]: (warning) Member variable
'Benchmark::start_' is not initialized in the constructor.
[doc\bench\db_bench_sqlite3.cc:315]: (warning) Member variable
'Benchmark::last_op_finish_' is not initialized in the constructor.
[doc\bench\db_bench_sqlite3.cc:315]: (warning) Member variable
'Benchmark::done_' is not initialized in the constructor.
[doc\bench\db_bench_sqlite3.cc:315]: (warning) Member variable
'Benchmark::next_report_' is not initialized in the constructor.
[doc\bench\db_bench_tree_db.cc:291]: (warning) Member variable
'Benchmark::db_num_' is not initialized in the constructor.
[doc\bench\db_bench_tree_db.cc:291]: (warning) Member variable
'Benchmark::start_' is not initialized in the constructor.
[doc\bench\db_bench_tree_db.cc:291]: (warning) Member variable
'Benchmark::last_op_finish_' is not initialized in the constructor.
[doc\bench\db_bench_tree_db.cc:291]: (warning) Member variable
'Benchmark::done_' is not initialized in the constructor.
[doc\bench\db_bench_tree_db.cc:291]: (warning) Member variable
'Benchmark::next_report_' is not initialized in the constructor.
[util\cache.cc:170]: (warning) Member variable 'LRUCache::capacity_' is not
initialized in the constructor.
[util\env_posix.cc:267]: (portability) The extra qualification
'PosixMmapFile::' is unnecessary and is considered an error by many compilers.
```
Original issue reported on code.google.com by `Pavel.Pimenov@gmail.com` on 22 May 2013 at 8:06
|
1.0
|
cppcheck - warning (leveldb-1.10.0) - ```
[db\db_bench.cc:284]: (warning) Member variable 'SharedState::total' is not
initialized in the constructor.
[db\db_bench.cc:284]: (warning) Member variable 'SharedState::num_initialized'
is not initialized in the constructor.
[db\db_bench.cc:284]: (warning) Member variable 'SharedState::num_done' is not
initialized in the constructor.
[db\db_bench.cc:284]: (warning) Member variable 'SharedState::start' is not
initialized in the constructor.
[db\db_bench.cc:294]: (warning) Member variable 'ThreadState::shared' is not
initialized in the constructor.
[db\db_impl.cc:46]: (warning) Member variable 'Writer::batch' is not
initialized in the constructor.
[db\db_impl.cc:46]: (warning) Member variable 'Writer::sync' is not initialized
in the constructor.
[db\db_impl.cc:46]: (warning) Member variable 'Writer::done' is not initialized
in the constructor.
[db\version_set.cc:170]: (warning) Member variable
'LevelFileNumIterator::value_buf_' is not initialized in the constructor.
[doc\bench\db_bench_sqlite3.cc:315]: (warning) Member variable
'Benchmark::start_' is not initialized in the constructor.
[doc\bench\db_bench_sqlite3.cc:315]: (warning) Member variable
'Benchmark::last_op_finish_' is not initialized in the constructor.
[doc\bench\db_bench_sqlite3.cc:315]: (warning) Member variable
'Benchmark::done_' is not initialized in the constructor.
[doc\bench\db_bench_sqlite3.cc:315]: (warning) Member variable
'Benchmark::next_report_' is not initialized in the constructor.
[doc\bench\db_bench_tree_db.cc:291]: (warning) Member variable
'Benchmark::db_num_' is not initialized in the constructor.
[doc\bench\db_bench_tree_db.cc:291]: (warning) Member variable
'Benchmark::start_' is not initialized in the constructor.
[doc\bench\db_bench_tree_db.cc:291]: (warning) Member variable
'Benchmark::last_op_finish_' is not initialized in the constructor.
[doc\bench\db_bench_tree_db.cc:291]: (warning) Member variable
'Benchmark::done_' is not initialized in the constructor.
[doc\bench\db_bench_tree_db.cc:291]: (warning) Member variable
'Benchmark::next_report_' is not initialized in the constructor.
[util\cache.cc:170]: (warning) Member variable 'LRUCache::capacity_' is not
initialized in the constructor.
[util\env_posix.cc:267]: (portability) The extra qualification
'PosixMmapFile::' is unnecessary and is considered an error by many compilers.
```
Original issue reported on code.google.com by `Pavel.Pimenov@gmail.com` on 22 May 2013 at 8:06
|
defect
|
cppcheck warning leveldb warning member variable sharedstate total is not initialized in the constructor warning member variable sharedstate num initialized is not initialized in the constructor warning member variable sharedstate num done is not initialized in the constructor warning member variable sharedstate start is not initialized in the constructor warning member variable threadstate shared is not initialized in the constructor warning member variable writer batch is not initialized in the constructor warning member variable writer sync is not initialized in the constructor warning member variable writer done is not initialized in the constructor warning member variable levelfilenumiterator value buf is not initialized in the constructor warning member variable benchmark start is not initialized in the constructor warning member variable benchmark last op finish is not initialized in the constructor warning member variable benchmark done is not initialized in the constructor warning member variable benchmark next report is not initialized in the constructor warning member variable benchmark db num is not initialized in the constructor warning member variable benchmark start is not initialized in the constructor warning member variable benchmark last op finish is not initialized in the constructor warning member variable benchmark done is not initialized in the constructor warning member variable benchmark next report is not initialized in the constructor warning member variable lrucache capacity is not initialized in the constructor portability the extra qualification posixmmapfile is unnecessary and is considered an error by many compilers original issue reported on code google com by pavel pimenov gmail com on may at
| 1
|
51,875
| 12,822,214,006
|
IssuesEvent
|
2020-07-06 09:26:03
|
siodb/siodb
|
https://api.github.com/repos/siodb/siodb
|
closed
|
SQL Dump doesn't compile on CentOS 7
|
component:build status:resolved type:bug
|
**Issue**
```
================================================================================
Build Settings:
Distro: CentOS 7.8.2003
Debug build: 0
Build unit tests: 0
CC=gcc
CXX=g++
LD=g++
CFLAGS=-pthread -g3 -fPIC -std=gnu11 -Wall -Wextra -Werror -Wpedantic -Wno-unused-value -fmax-errors=5 -isystem /opt/siodb/dep/openssl-1.1.1g/include -isystem /opt/siodb/dep/xxHash-0.7.2/include -I/siodbbuild/siodb/common/lib -I/siodbbuild/siodb/build/release/generated-src/siodb-generated/common/lib -D_GNU_SOURCE -O3 -fno-omit-frame-pointer
CXXFLAGS=-pthread -g3 -fPIC -std=gnu++17 -Wall -Wextra -Werror -Wpedantic -Wno-unused-value -fmax-errors=5 -isystem /opt/siodb/dep/antlr4-runtime-4.8/include/antlr4-runtime -I/usr/include/boost169 -isystem /opt/siodb/dep/date-20190911/include -I/usr/local/include/gtest-gmock-1.8.1 -I/usr/local/include/oatpp-1.1.0 -isystem /opt/siodb/dep/openssl-1.1.1g/include -isystem /opt/siodb/dep/protobuf-3.11.4/include -isystem /opt/siodb/dep/utf8cpp-3.1/include -isystem /opt/siodb/dep/xxHash-0.7.2/include -I/siodbbuild/siodb/common/lib -I/siodbbuild/siodb/build/release/generated-src/siodb-generated/common/lib -D_GNU_SOURCE -DBOOST_ALL_DYN_LINK -DBOOST_FILESYSTEM_NO_DEPRECATED -O3 -fno-omit-frame-pointer
LDFLAGS=-L/opt/siodb/dep/antlr4-runtime-4.8/lib -Wl,-rpath -Wl,/opt/siodb/dep/antlr4-runtime-4.8/lib -L/usr/lib64/boost169 -L/opt/siodb/dep/date-20190911/lib -Wl,-rpath -Wl,/opt/siodb/dep/date-20190911/lib -L/opt/siodb/dep/openssl-1.1.1g/lib -Wl,-rpath -Wl,/opt/siodb/dep/openssl-1.1.1g/lib -L/opt/siodb/dep/protobuf-3.11.4/lib -Wl,-rpath -Wl,/opt/siodb/dep/protobuf-3.11.4/lib -L/opt/siodb/dep/utf8cpp-3.1/lib -Wl,-rpath -Wl,/opt/siodb/dep/utf8cpp-3.1/lib -L/opt/siodb/dep/xxHash-0.7.2/lib -Wl,-rpath -Wl,/opt/siodb/dep/xxHash-0.7.2/lib -pthread -g3 -rdynamic
================================================================================
CXX /siodbbuild/siodb/build/release/obj/siocli/lib/SqlDump.o
/siodbbuild/siodb/siocli/lib/SqlDump.cpp: In function 'std::vector<siodb::siocli::{anonymous}::ColumnConstaint> siodb::siocli::{anonymous}::receiveColumnConstraintsList(siodb::io::IoBase&, siodb::protobuf::CustomProtobufInputStream&, const string&, int64_t)':
/siodbbuild/siodb/siocli/lib/SqlDump.cpp:45:8: error: 'constraint' may be used uninitialized in this function [-Werror=maybe-uninitialized]
struct ColumnConstaint {
^~~~~~~~~~~~~~~
/siodbbuild/siodb/siocli/lib/SqlDump.cpp:365:25: note: 'constraint' was declared here
ColumnConstaint constraint;
^~~~~~~~~~
cc1plus: all warnings being treated as errors
make[2]: *** [/siodbbuild/siodb/mk//Main.mk:368: /siodbbuild/siodb/build/release/obj/siocli/lib/SqlDump.o] Error 1
make[2]: Leaving directory '/siodbbuild/siodb/siocli/lib'
make[1]: *** [Makefile:11: all] Error 2
make[1]: Leaving directory '/siodbbuild/siodb/siocli'
make: *** [Makefile:57: release-no-ut] Error 2
```
|
1.0
|
SQL Dump doesn't compile on CentOS 7 - **Issue**
```
================================================================================
Build Settings:
Distro: CentOS 7.8.2003
Debug build: 0
Build unit tests: 0
CC=gcc
CXX=g++
LD=g++
CFLAGS=-pthread -g3 -fPIC -std=gnu11 -Wall -Wextra -Werror -Wpedantic -Wno-unused-value -fmax-errors=5 -isystem /opt/siodb/dep/openssl-1.1.1g/include -isystem /opt/siodb/dep/xxHash-0.7.2/include -I/siodbbuild/siodb/common/lib -I/siodbbuild/siodb/build/release/generated-src/siodb-generated/common/lib -D_GNU_SOURCE -O3 -fno-omit-frame-pointer
CXXFLAGS=-pthread -g3 -fPIC -std=gnu++17 -Wall -Wextra -Werror -Wpedantic -Wno-unused-value -fmax-errors=5 -isystem /opt/siodb/dep/antlr4-runtime-4.8/include/antlr4-runtime -I/usr/include/boost169 -isystem /opt/siodb/dep/date-20190911/include -I/usr/local/include/gtest-gmock-1.8.1 -I/usr/local/include/oatpp-1.1.0 -isystem /opt/siodb/dep/openssl-1.1.1g/include -isystem /opt/siodb/dep/protobuf-3.11.4/include -isystem /opt/siodb/dep/utf8cpp-3.1/include -isystem /opt/siodb/dep/xxHash-0.7.2/include -I/siodbbuild/siodb/common/lib -I/siodbbuild/siodb/build/release/generated-src/siodb-generated/common/lib -D_GNU_SOURCE -DBOOST_ALL_DYN_LINK -DBOOST_FILESYSTEM_NO_DEPRECATED -O3 -fno-omit-frame-pointer
LDFLAGS=-L/opt/siodb/dep/antlr4-runtime-4.8/lib -Wl,-rpath -Wl,/opt/siodb/dep/antlr4-runtime-4.8/lib -L/usr/lib64/boost169 -L/opt/siodb/dep/date-20190911/lib -Wl,-rpath -Wl,/opt/siodb/dep/date-20190911/lib -L/opt/siodb/dep/openssl-1.1.1g/lib -Wl,-rpath -Wl,/opt/siodb/dep/openssl-1.1.1g/lib -L/opt/siodb/dep/protobuf-3.11.4/lib -Wl,-rpath -Wl,/opt/siodb/dep/protobuf-3.11.4/lib -L/opt/siodb/dep/utf8cpp-3.1/lib -Wl,-rpath -Wl,/opt/siodb/dep/utf8cpp-3.1/lib -L/opt/siodb/dep/xxHash-0.7.2/lib -Wl,-rpath -Wl,/opt/siodb/dep/xxHash-0.7.2/lib -pthread -g3 -rdynamic
================================================================================
CXX /siodbbuild/siodb/build/release/obj/siocli/lib/SqlDump.o
/siodbbuild/siodb/siocli/lib/SqlDump.cpp: In function 'std::vector<siodb::siocli::{anonymous}::ColumnConstaint> siodb::siocli::{anonymous}::receiveColumnConstraintsList(siodb::io::IoBase&, siodb::protobuf::CustomProtobufInputStream&, const string&, int64_t)':
/siodbbuild/siodb/siocli/lib/SqlDump.cpp:45:8: error: 'constraint' may be used uninitialized in this function [-Werror=maybe-uninitialized]
struct ColumnConstaint {
^~~~~~~~~~~~~~~
/siodbbuild/siodb/siocli/lib/SqlDump.cpp:365:25: note: 'constraint' was declared here
ColumnConstaint constraint;
^~~~~~~~~~
cc1plus: all warnings being treated as errors
make[2]: *** [/siodbbuild/siodb/mk//Main.mk:368: /siodbbuild/siodb/build/release/obj/siocli/lib/SqlDump.o] Error 1
make[2]: Leaving directory '/siodbbuild/siodb/siocli/lib'
make[1]: *** [Makefile:11: all] Error 2
make[1]: Leaving directory '/siodbbuild/siodb/siocli'
make: *** [Makefile:57: release-no-ut] Error 2
```
|
non_defect
|
sql dump doesn t compile on centos issue build settings distro centos debug build build unit tests cc gcc cxx g ld g cflags pthread fpic std wall wextra werror wpedantic wno unused value fmax errors isystem opt siodb dep openssl include isystem opt siodb dep xxhash include i siodbbuild siodb common lib i siodbbuild siodb build release generated src siodb generated common lib d gnu source fno omit frame pointer cxxflags pthread fpic std gnu wall wextra werror wpedantic wno unused value fmax errors isystem opt siodb dep runtime include runtime i usr include isystem opt siodb dep date include i usr local include gtest gmock i usr local include oatpp isystem opt siodb dep openssl include isystem opt siodb dep protobuf include isystem opt siodb dep include isystem opt siodb dep xxhash include i siodbbuild siodb common lib i siodbbuild siodb build release generated src siodb generated common lib d gnu source dboost all dyn link dboost filesystem no deprecated fno omit frame pointer ldflags l opt siodb dep runtime lib wl rpath wl opt siodb dep runtime lib l usr l opt siodb dep date lib wl rpath wl opt siodb dep date lib l opt siodb dep openssl lib wl rpath wl opt siodb dep openssl lib l opt siodb dep protobuf lib wl rpath wl opt siodb dep protobuf lib l opt siodb dep lib wl rpath wl opt siodb dep lib l opt siodb dep xxhash lib wl rpath wl opt siodb dep xxhash lib pthread rdynamic cxx siodbbuild siodb build release obj siocli lib sqldump o siodbbuild siodb siocli lib sqldump cpp in function std vector siodb siocli anonymous receivecolumnconstraintslist siodb io iobase siodb protobuf customprotobufinputstream const string t siodbbuild siodb siocli lib sqldump cpp error constraint may be used uninitialized in this function struct columnconstaint siodbbuild siodb siocli lib sqldump cpp note constraint was declared here columnconstaint constraint all warnings being treated as errors make error make leaving directory siodbbuild siodb siocli lib make error make leaving directory siodbbuild siodb siocli make error
| 0
|
308,513
| 9,440,163,906
|
IssuesEvent
|
2019-04-14 15:55:22
|
HabitRPG/habitica
|
https://api.github.com/repos/HabitRPG/habitica
|
closed
|
cancelled quests should have message in party chat
|
priority: minor section: Party Page status: issue: in progress
|
When the quest owner or party leader cancels a quest that's still in the invitation phase, there's no indication for other party members that that's what happened and it can lead to confusion (there was a recent bug report from someone who understandably thought that something had gone wrong with the quest).
There should be a message posted to party chat to say, for example, "Alys cancelled the party quest Vice, Part 1: Free Yourself of the Dragon's Influence."
A similar message already exists for the case where a running quest is aborted so similar code could be used for this issue:
```
$ git grep "the party quest"
test/api/v3/integration/quests/POST-groups_groupid_quests_abort.test.js:130: expect(Group.prototype.sendChat).to.be.calledWithMatch(/aborted the party quest Wail of the Whale.`/);
website/server/controllers/api-v3/quests.js:425: const newChatMessage = group.sendChat(`\`${user.profile.name} aborted the party quest ${questName}.\``);
/code/habitica 2048 $
```
Also, as part of the fix for this issue, please change the comment shown below to remove ` (see questCancel for BEFORE)` because the function it refers to is now called `cancelQuest` and it's likely that we'll forget to update the comment if we rename it again :)
https://github.com/HabitRPG/habitica/blob/a2261e35919a0396d2e74ea70f209b68ebc004c6/website/server/controllers/api-v3/quests.js#L408
|
1.0
|
cancelled quests should have message in party chat - When the quest owner or party leader cancels a quest that's still in the invitation phase, there's no indication for other party members that that's what happened and it can lead to confusion (there was a recent bug report from someone who understandably thought that something had gone wrong with the quest).
There should be a message posted to party chat to say, for example, "Alys cancelled the party quest Vice, Part 1: Free Yourself of the Dragon's Influence."
A similar message already exists for the case where a running quest is aborted so similar code could be used for this issue:
```
$ git grep "the party quest"
test/api/v3/integration/quests/POST-groups_groupid_quests_abort.test.js:130: expect(Group.prototype.sendChat).to.be.calledWithMatch(/aborted the party quest Wail of the Whale.`/);
website/server/controllers/api-v3/quests.js:425: const newChatMessage = group.sendChat(`\`${user.profile.name} aborted the party quest ${questName}.\``);
/code/habitica 2048 $
```
Also, as part of the fix for this issue, please change the comment shown below to remove ` (see questCancel for BEFORE)` because the function it refers to is now called `cancelQuest` and it's likely that we'll forget to update the comment if we rename it again :)
https://github.com/HabitRPG/habitica/blob/a2261e35919a0396d2e74ea70f209b68ebc004c6/website/server/controllers/api-v3/quests.js#L408
|
non_defect
|
cancelled quests should have message in party chat when the quest owner or party leader cancels a quest that s still in the invitation phase there s no indication for other party members that that s what happened and it can lead to confusion there was a recent bug report from someone who understandably thought that something had gone wrong with the quest there should be a message posted to party chat to say for example alys cancelled the party quest vice part free yourself of the dragon s influence a similar message already exists for the case where a running quest is aborted so similar code could be used for this issue git grep the party quest test api integration quests post groups groupid quests abort test js expect group prototype sendchat to be calledwithmatch aborted the party quest wail of the whale website server controllers api quests js const newchatmessage group sendchat user profile name aborted the party quest questname code habitica also as part of the fix for this issue please change the comment shown below to remove see questcancel for before because the function it refers to is now called cancelquest and it s likely that we ll forget to update the comment if we rename it again
| 0
|
81,287
| 30,783,077,724
|
IssuesEvent
|
2023-07-31 11:25:07
|
vector-im/element-x-ios
|
https://api.github.com/repos/vector-im/element-x-ios
|
closed
|
Avatar for a DM use the default instead of the avatar of that user
|
A-Sliding Sync T-Defect X-Needs-Backend S-Minor O-Frequent A-Avatar A-DMs Z-Schedule
|
### Steps to reproduce
1. Start a DM with another user, note the room avatar
2. Tap on the room title to open the room details screen, note the avatar
3. Go back to the room timeline, note the avatar
### Outcome
#### What did you expect?
When in a DM the avatar should be that of the other user.
#### What happened instead?
The avatar is the default, in this case an "A" on a green background.
#### Screenshot of room avatar in the room timeline view

#### Screenshot of room avatar in the room details view

### Your phone model
iPhone 12 mini
### Operating system version
iOS 16.3.1
### Application version
Element 1.1.0 (214)
### Homeserver
matrix.org
### Will you send logs?
No
|
1.0
|
Avatar for a DM use the default instead of the avatar of that user - ### Steps to reproduce
1. Start a DM with another user, note the room avatar
2. Tap on the room title to open the room details screen, note the avatar
3. Go back to the room timeline, note the avatar
### Outcome
#### What did you expect?
When in a DM the avatar should be that of the other user.
#### What happened instead?
The avatar is the default, in this case an "A" on a green background.
#### Screenshot of room avatar in the room timeline view

#### Screenshot of room avatar in the room details view

### Your phone model
iPhone 12 mini
### Operating system version
iOS 16.3.1
### Application version
Element 1.1.0 (214)
### Homeserver
matrix.org
### Will you send logs?
No
|
defect
|
avatar for a dm use the default instead of the avatar of that user steps to reproduce start a dm with another user note the room avatar tap on the room title to open the room details screen note the avatar go back to the room timeline note the avatar outcome what did you expect when in a dm the avatar should be that of the other user what happened instead the avatar is the default in this case an a on a green background screenshot of room avatar in the room timeline view screenshot of room avatar in the room details view your phone model iphone mini operating system version ios application version element homeserver matrix org will you send logs no
| 1
|
35,006
| 7,520,958,471
|
IssuesEvent
|
2018-04-12 15:45:58
|
bridgedotnet/CLI
|
https://api.github.com/repos/bridgedotnet/CLI
|
closed
|
Console default foreground color is not preserved
|
defect help wanted
|
If I run bridge.exe from powershell, bridge.exe can permanently change the console color to gray:

|
1.0
|
Console default foreground color is not preserved - If I run bridge.exe from powershell, bridge.exe can permanently change the console color to gray:

|
defect
|
console default foreground color is not preserved if i run bridge exe from powershell bridge exe can permanently change the console color to gray
| 1
|
24,379
| 3,969,159,914
|
IssuesEvent
|
2016-05-03 22:17:54
|
zaproxy/zaproxy
|
https://api.github.com/repos/zaproxy/zaproxy
|
closed
|
RexEx API Issues
|
InsufficientEvidence Priority-Medium Type-Defect
|
```
It appears that whenever I make an API call to regex search an active ZAP session I
receive a "Bad View" error message. I don't think I am doing a bad regex as I am keeping
it simple and the error message is Bad View, which suggests that the API (urlsByUrlRegex)
is not being called or received properly. I am using Python bindings v2 0.0.7 and my
ZAP version is 2.2.2.
```
Original issue reported on code.google.com by `silverbackventuresllc` on 2014-02-15 18:32:57
<hr>
* *Attachment: zap-proxy.png<br>*
|
1.0
|
RexEx API Issues - ```
It appears that whenever I make an API call to regex search an active ZAP session I
receive a "Bad View" error message. I don't think I am doing a bad regex as I am keeping
it simple and the error message is Bad View, which suggests that the API (urlsByUrlRegex)
is not being called or received properly. I am using Python bindings v2 0.0.7 and my
ZAP version is 2.2.2.
```
Original issue reported on code.google.com by `silverbackventuresllc` on 2014-02-15 18:32:57
<hr>
* *Attachment: zap-proxy.png<br>*
|
defect
|
rexex api issues it appears that whenever i make an api call to regex search an active zap session i receive a bad view error message i don t think i am doing a bad regex as i am keeping it simple and the error message is bad view which suggests that the api urlsbyurlregex is not being called or received properly i am using python bindings and my zap version is original issue reported on code google com by silverbackventuresllc on attachment zap proxy png
| 1
|
57,545
| 15,836,501,134
|
IssuesEvent
|
2021-04-06 19:25:56
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
closed
|
zpool list fails weirdly for empty-name pools
|
Status: Triage Needed Type: Defect
|
### System information
Type | Version/Name
--- | ---
Distribution Name | Debian
Distribution Version | mostly current sid (save for kernel) as of Sun, 04 Apr 2021 19:50:42 +0200
Linux Kernel | Linux babtop 5.10.0-3-amd64 #1 SMP Debian 5.10.13-1 (2021-02-06) x86_64 GNU/Linux
Architecture | see above
ZFS Version | 2.0.3-1 kernel, 2.0.3-5 userspace
### Describe the problem you're observing
Compare these two:
```
nabijaczleweli@babtop:/tmp$ /sbin/zpool list "a"
cannot open 'a': no such pool
nabijaczleweli@babtop:/tmp$ /sbin/zpool list ""
interval cannot be zero
usage:
list [-gHLpPv] [-o property[,...]] [-T d|u] [pool] ...
[interval [count]]
the following properties are supported:
PROPERTY EDIT VALUES
allocated NO <size>
capacity NO <size>
checkpoint NO <size>
dedupratio NO <1.00x or higher if deduped>
expandsize NO <size>
fragmentation NO <percent>
free NO <size>
freeing NO <size>
guid NO <guid>
health NO <state>
leaked NO <size>
load_guid NO <load_guid>
size NO <size>
altroot YES <path>
ashift YES <ashift, 9-16, or 0=default>
autoexpand YES on | off
autoreplace YES on | off
autotrim YES on | off
bootfs YES <filesystem>
cachefile YES <file> | none
comment YES <comment-string>
delegation YES on | off
failmode YES wait | continue | panic
listsnapshots YES on | off
multihost YES on | off
readonly YES on | off
version YES <version>
feature@... YES disabled | enabled | active
The feature@ properties must be appended with a feature name.
See zpool-features(5).
```
Compare this to how zpool-get handles this:
```
nabijaczleweli@babtop:/tmp$ /sbin/zpool get bootfs ""
cannot open '': name must begin with a letter
nabijaczleweli@babtop:/tmp$ /sbin/zpool get bootfs "a"
cannot open 'a': no such pool
```
### Describe how to reproduce the problem
See above.
|
1.0
|
zpool list fails weirdly for empty-name pools - ### System information
Type | Version/Name
--- | ---
Distribution Name | Debian
Distribution Version | mostly current sid (save for kernel) as of Sun, 04 Apr 2021 19:50:42 +0200
Linux Kernel | Linux babtop 5.10.0-3-amd64 #1 SMP Debian 5.10.13-1 (2021-02-06) x86_64 GNU/Linux
Architecture | see above
ZFS Version | 2.0.3-1 kernel, 2.0.3-5 userspace
### Describe the problem you're observing
Compare these two:
```
nabijaczleweli@babtop:/tmp$ /sbin/zpool list "a"
cannot open 'a': no such pool
nabijaczleweli@babtop:/tmp$ /sbin/zpool list ""
interval cannot be zero
usage:
list [-gHLpPv] [-o property[,...]] [-T d|u] [pool] ...
[interval [count]]
the following properties are supported:
PROPERTY EDIT VALUES
allocated NO <size>
capacity NO <size>
checkpoint NO <size>
dedupratio NO <1.00x or higher if deduped>
expandsize NO <size>
fragmentation NO <percent>
free NO <size>
freeing NO <size>
guid NO <guid>
health NO <state>
leaked NO <size>
load_guid NO <load_guid>
size NO <size>
altroot YES <path>
ashift YES <ashift, 9-16, or 0=default>
autoexpand YES on | off
autoreplace YES on | off
autotrim YES on | off
bootfs YES <filesystem>
cachefile YES <file> | none
comment YES <comment-string>
delegation YES on | off
failmode YES wait | continue | panic
listsnapshots YES on | off
multihost YES on | off
readonly YES on | off
version YES <version>
feature@... YES disabled | enabled | active
The feature@ properties must be appended with a feature name.
See zpool-features(5).
```
Compare this to how zpool-get handles this:
```
nabijaczleweli@babtop:/tmp$ /sbin/zpool get bootfs ""
cannot open '': name must begin with a letter
nabijaczleweli@babtop:/tmp$ /sbin/zpool get bootfs "a"
cannot open 'a': no such pool
```
### Describe how to reproduce the problem
See above.
|
defect
|
zpool list fails weirdly for empty name pools system information type version name distribution name debian distribution version mostly current sid save for kernel as of sun apr linux kernel linux babtop smp debian gnu linux architecture see above zfs version kernel userspace describe the problem you re observing compare these two nabijaczleweli babtop tmp sbin zpool list a cannot open a no such pool nabijaczleweli babtop tmp sbin zpool list interval cannot be zero usage list the following properties are supported property edit values allocated no capacity no checkpoint no dedupratio no expandsize no fragmentation no free no freeing no guid no health no leaked no load guid no size no altroot yes ashift yes autoexpand yes on off autoreplace yes on off autotrim yes on off bootfs yes cachefile yes none comment yes delegation yes on off failmode yes wait continue panic listsnapshots yes on off multihost yes on off readonly yes on off version yes feature yes disabled enabled active the feature properties must be appended with a feature name see zpool features compare this to how zpool get handles this nabijaczleweli babtop tmp sbin zpool get bootfs cannot open name must begin with a letter nabijaczleweli babtop tmp sbin zpool get bootfs a cannot open a no such pool describe how to reproduce the problem see above
| 1
|
751,046
| 26,228,676,758
|
IssuesEvent
|
2023-01-04 21:22:31
|
octokit/webhooks.js
|
https://api.github.com/repos/octokit/webhooks.js
|
closed
|
Deprecate `onUnhandledRequest` middleware option
|
Status: Up for grabs Type: Maintenance Priority: Normal
|
Followup to #740, we should deprecate the `onUnhandledRequest` middleware option, and release that PR that is currently sitting in the `beta` branch
|
1.0
|
Deprecate `onUnhandledRequest` middleware option - Followup to #740, we should deprecate the `onUnhandledRequest` middleware option, and release that PR that is currently sitting in the `beta` branch
|
non_defect
|
deprecate onunhandledrequest middleware option followup to we should deprecate the onunhandledrequest middleware option and release that pr that is currently sitting in the beta branch
| 0
|
581,634
| 17,313,572,668
|
IssuesEvent
|
2021-07-27 00:38:42
|
bcgov/entity
|
https://api.github.com/repos/bcgov/entity
|
closed
|
UI Design - Namex transaction history enhancements (post MVP)
|
ENTITY Priority2 UX
|
### UI Design Mockup
* Condensed
* Condensed with Guides
* Scott Approved! ✅
* Tracey Approved! ✅
https://projects.invisionapp.com/share/XC11DSLDDZU6
### Business Logic
* Described in the comments on #7805
|
1.0
|
UI Design - Namex transaction history enhancements (post MVP) - ### UI Design Mockup
* Condensed
* Condensed with Guides
* Scott Approved! ✅
* Tracey Approved! ✅
https://projects.invisionapp.com/share/XC11DSLDDZU6
### Business Logic
* Described in the comments on #7805
|
non_defect
|
ui design namex transaction history enhancements post mvp ui design mockup condensed condensed with guides scott approved ✅ tracey approved ✅ business logic described in the comments on
| 0
|
14,687
| 2,831,388,453
|
IssuesEvent
|
2015-05-24 15:53:22
|
nobodyguy/dslrdashboard
|
https://api.github.com/repos/nobodyguy/dslrdashboard
|
closed
|
airpad momo9 running 4.0 will not find camera
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
latest version from play store 4.0 ics
Please provide any additional information below.
i am running the wireless router with openwrt that works fin on phone and other
tablet (kindle fire) (hacked) but cannot seem to open camera from airpad 7p
running 4.0 ics opencv was downloaded when dslrdashboard opened but does not
seem to work.. what am i doing wrong??
```
Original issue reported on code.google.com by `Keith.e....@gmail.com` on 15 Apr 2014 at 7:29
|
1.0
|
airpad momo9 running 4.0 will not find camera - ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
latest version from play store 4.0 ics
Please provide any additional information below.
i am running the wireless router with openwrt that works fin on phone and other
tablet (kindle fire) (hacked) but cannot seem to open camera from airpad 7p
running 4.0 ics opencv was downloaded when dslrdashboard opened but does not
seem to work.. what am i doing wrong??
```
Original issue reported on code.google.com by `Keith.e....@gmail.com` on 15 Apr 2014 at 7:29
|
defect
|
airpad running will not find camera what steps will reproduce the problem what is the expected output what do you see instead what version of the product are you using on what operating system latest version from play store ics please provide any additional information below i am running the wireless router with openwrt that works fin on phone and other tablet kindle fire hacked but cannot seem to open camera from airpad running ics opencv was downloaded when dslrdashboard opened but does not seem to work what am i doing wrong original issue reported on code google com by keith e gmail com on apr at
| 1
|
29,395
| 5,680,756,986
|
IssuesEvent
|
2017-04-13 02:46:04
|
beefproject/beef
|
https://api.github.com/repos/beefproject/beef
|
closed
|
beef cannot detect any details abount hooked browser
|
Defect
|
#### Environment
What version/revision of BeEF are you using?
version: '0.4.7.0-alpha'
On what version of Ruby?
ruby 2.3.3p222 (2016-11-21) [x86_64-linux-gnu]
On what browser?
All
On what operating system?
Windows, Linux
#### Configuration
# BeEF Configuration file
beef:
version: '0.4.7.0-alpha'
# More verbose messages (server-side)
debug: false
# More verbose messages (client-side)
client_debug: false
# Used for generating secure tokens
crypto_default_value_length: 80
# Interface / IP restrictions
restrictions:
# subnet of IP addresses that can hook to the framework
permitted_hooking_subnet: "0.0.0.0/0"
# subnet of IP addresses that can connect to the admin UI
#permitted_ui_subnet: "127.0.0.1/32"
permitted_ui_subnet: "0.0.0.0/0"
# HTTP server
http:
debug: true #Thin::Logging.debug, very verbose. Prints also full exception stack trace.
host: "0.0.0.0"
port: "5555"
# Decrease this setting to 1,000 (ms) if you want more responsiveness
# when sending modules and retrieving results.
# NOTE: A poll timeout of less than 5,000 (ms) might impact performance
# when hooking lots of browsers (50+).
# Enabling WebSockets is generally better (beef.websocket.enable)
xhr_poll_timeout: 1000
# Reverse Proxy / NAT
# If BeEF is running behind a reverse proxy or NAT
# set the public hostname and port here
#public: "" # public hostname/IP address
#public_port: "" # experimental
# DNS
dns_host: "8.8.8.8"
dns_port: 53
# Web Admin user interface URI
web_ui_basepath: "/xxxxxui"
# Hook
hook_file: "/fhgfhgfdh.js"
hook_session_name: "fdsafdsa"
session_cookie_name: "hgffhfdh"
# Allow one or multiple origins to access the RESTful API using CORS
# For multiple origins use: "http://browserhacker.com, http://domain2.com"
restful_api:
allow_cors: false
cors_allowed_domains: "http://browserhacker.com"
# Prefer WebSockets over XHR-polling when possible.
websocket:
enable: true
port: 61985 # WS: good success rate through proxies
# Use encrypted 'WebSocketSecure'
# NOTE: works only on HTTPS domains and with HTTPS support enabled in BeEF
secure: true
secure_port: 61986 # WSSecure
ws_poll_timeout: 1000 # poll BeEF every second
# Imitate a specified web server (default root page, 404 default error page, 'Server' HTTP response header)
web_server_imitation:
enable: true
type: "apache" # Supported: apache, iis, nginx
hook_404: false # inject BeEF hook in HTTP 404 responses
hook_root: false # inject BeEF hook in the server home page
# Experimental HTTPS support for the hook / admin / all other Thin managed web services
https:
enable: false
# In production environments, be sure to use a valid certificate signed for the value
# used in beef.http.dns_host (the domain name of the server where you run BeEF)
key: "beef_key.pem"
cert: "beef_cert.pem"
database:
# For information on using other databases please read the
# README.databases file
# supported DBs: sqlite, mysql, postgres
# NOTE: you must change the Gemfile adding a gem require line like:
# gem "dm-postgres-adapter"
# or
# gem "dm-mysql-adapter"
# if you want to switch drivers from sqlite to postgres (or mysql).
# Finally, run a 'bundle install' command and start BeEF.
driver: "sqlite"
# db_file is only used for sqlite
db_file: "db/beef.db"
# db connection information is only used for mysql/postgres
db_host: "localhost"
db_port: 3306
db_name: "beef"
db_user: "beef"
db_passwd: "beef"
db_encoding: "UTF-8"
# Credentials to authenticate in BeEF.
# Used by both the RESTful API and the Admin_UI extension
credentials:
user: "beef"
passwd: "beef"
# Autorun Rule Engine
autorun:
# this is used when rule chain_mode type is nested-forward, needed as command results are checked via setInterval
# to ensure that we can wait for async command results. The timeout is needed to prevent infinite loops or eventually
# continue execution regardless of results.
# If you're chaining multiple async modules, and you expect them to complete in more than 5 seconds, increase the timeout.
result_poll_interval: 300
result_poll_timeout: 5000
# If the modules doesn't return status/results and timeout exceeded, continue anyway with the chain.
# This is useful to call modules (nested-forward chain mode) that are not returning their status/results.
continue_after_timeout: true
# Enables DNS lookups on zombie IP addresses
dns_hostname_lookup: true
# IP Geolocation
# NOTE: requires MaxMind database:
# curl -O http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
# gunzip GeoLiteCity.dat.gz && mkdir /opt/GeoIP && mv GeoLiteCity.dat /opt/GeoIP
geoip:
enable: true
database: '/opt/GeoIP/GeoLiteCity.dat'
# Integration with PhishingFrenzy
# If enabled BeEF will try to get the UID parameter value from the hooked URI, as this is used by PhishingFrenzy
# to uniquely identify the victims. In this way you can easily associate phishing emails with hooked browser.
integration:
phishing_frenzy:
enable: false
# You may override default extension configuration parameters here
extension:
requester:
enable: true
proxy:
enable: true
key: "beef_key.pem"
cert: "beef_cert.pem"
metasploit:
enable: false
social_engineering:
enable: true
evasion:
enable: false
console:
shell:
enable: false
ipec:
enable: true
# this is still experimental..
# Disable it in kali because it doesn't work with the current
# version of ruby-rubydns (older version is required by beef-xss)
dns:
enable: false
# this is still experimental..
dns_rebinding:
enable: false
#### Summary
I'm able to hook browsers, they appear in beef UI. But beef cannot gather any information about hooked browser, only IP.
#### Expected Behaviour
Beef should gather information about hooked browser, as stated in wiki: browser, OS, plugins, etc.
#### Actual Behaviour
beef cannot gather any information about hooked browser, only IP.
#### Steps to Reproduce
Usual beef setup.
|
1.0
|
beef cannot detect any details abount hooked browser - #### Environment
What version/revision of BeEF are you using?
version: '0.4.7.0-alpha'
On what version of Ruby?
ruby 2.3.3p222 (2016-11-21) [x86_64-linux-gnu]
On what browser?
All
On what operating system?
Windows, Linux
#### Configuration
# BeEF Configuration file
beef:
version: '0.4.7.0-alpha'
# More verbose messages (server-side)
debug: false
# More verbose messages (client-side)
client_debug: false
# Used for generating secure tokens
crypto_default_value_length: 80
# Interface / IP restrictions
restrictions:
# subnet of IP addresses that can hook to the framework
permitted_hooking_subnet: "0.0.0.0/0"
# subnet of IP addresses that can connect to the admin UI
#permitted_ui_subnet: "127.0.0.1/32"
permitted_ui_subnet: "0.0.0.0/0"
# HTTP server
http:
debug: true #Thin::Logging.debug, very verbose. Prints also full exception stack trace.
host: "0.0.0.0"
port: "5555"
# Decrease this setting to 1,000 (ms) if you want more responsiveness
# when sending modules and retrieving results.
# NOTE: A poll timeout of less than 5,000 (ms) might impact performance
# when hooking lots of browsers (50+).
# Enabling WebSockets is generally better (beef.websocket.enable)
xhr_poll_timeout: 1000
# Reverse Proxy / NAT
# If BeEF is running behind a reverse proxy or NAT
# set the public hostname and port here
#public: "" # public hostname/IP address
#public_port: "" # experimental
# DNS
dns_host: "8.8.8.8"
dns_port: 53
# Web Admin user interface URI
web_ui_basepath: "/xxxxxui"
# Hook
hook_file: "/fhgfhgfdh.js"
hook_session_name: "fdsafdsa"
session_cookie_name: "hgffhfdh"
# Allow one or multiple origins to access the RESTful API using CORS
# For multiple origins use: "http://browserhacker.com, http://domain2.com"
restful_api:
allow_cors: false
cors_allowed_domains: "http://browserhacker.com"
# Prefer WebSockets over XHR-polling when possible.
websocket:
enable: true
port: 61985 # WS: good success rate through proxies
# Use encrypted 'WebSocketSecure'
# NOTE: works only on HTTPS domains and with HTTPS support enabled in BeEF
secure: true
secure_port: 61986 # WSSecure
ws_poll_timeout: 1000 # poll BeEF every second
# Imitate a specified web server (default root page, 404 default error page, 'Server' HTTP response header)
web_server_imitation:
enable: true
type: "apache" # Supported: apache, iis, nginx
hook_404: false # inject BeEF hook in HTTP 404 responses
hook_root: false # inject BeEF hook in the server home page
# Experimental HTTPS support for the hook / admin / all other Thin managed web services
https:
enable: false
# In production environments, be sure to use a valid certificate signed for the value
# used in beef.http.dns_host (the domain name of the server where you run BeEF)
key: "beef_key.pem"
cert: "beef_cert.pem"
database:
# For information on using other databases please read the
# README.databases file
# supported DBs: sqlite, mysql, postgres
# NOTE: you must change the Gemfile adding a gem require line like:
# gem "dm-postgres-adapter"
# or
# gem "dm-mysql-adapter"
# if you want to switch drivers from sqlite to postgres (or mysql).
# Finally, run a 'bundle install' command and start BeEF.
driver: "sqlite"
# db_file is only used for sqlite
db_file: "db/beef.db"
# db connection information is only used for mysql/postgres
db_host: "localhost"
db_port: 3306
db_name: "beef"
db_user: "beef"
db_passwd: "beef"
db_encoding: "UTF-8"
# Credentials to authenticate in BeEF.
# Used by both the RESTful API and the Admin_UI extension
credentials:
user: "beef"
passwd: "beef"
# Autorun Rule Engine
autorun:
# this is used when rule chain_mode type is nested-forward, needed as command results are checked via setInterval
# to ensure that we can wait for async command results. The timeout is needed to prevent infinite loops or eventually
# continue execution regardless of results.
# If you're chaining multiple async modules, and you expect them to complete in more than 5 seconds, increase the timeout.
result_poll_interval: 300
result_poll_timeout: 5000
# If the modules doesn't return status/results and timeout exceeded, continue anyway with the chain.
# This is useful to call modules (nested-forward chain mode) that are not returning their status/results.
continue_after_timeout: true
# Enables DNS lookups on zombie IP addresses
dns_hostname_lookup: true
# IP Geolocation
# NOTE: requires MaxMind database:
# curl -O http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
# gunzip GeoLiteCity.dat.gz && mkdir /opt/GeoIP && mv GeoLiteCity.dat /opt/GeoIP
geoip:
enable: true
database: '/opt/GeoIP/GeoLiteCity.dat'
# Integration with PhishingFrenzy
# If enabled BeEF will try to get the UID parameter value from the hooked URI, as this is used by PhishingFrenzy
# to uniquely identify the victims. In this way you can easily associate phishing emails with hooked browser.
integration:
phishing_frenzy:
enable: false
# You may override default extension configuration parameters here
extension:
requester:
enable: true
proxy:
enable: true
key: "beef_key.pem"
cert: "beef_cert.pem"
metasploit:
enable: false
social_engineering:
enable: true
evasion:
enable: false
console:
shell:
enable: false
ipec:
enable: true
# this is still experimental..
# Disable it in kali because it doesn't work with the current
# version of ruby-rubydns (older version is required by beef-xss)
dns:
enable: false
# this is still experimental..
dns_rebinding:
enable: false
#### Summary
I'm able to hook browsers, they appear in beef UI. But beef cannot gather any information about hooked browser, only IP.
#### Expected Behaviour
Beef should gather information about hooked browser, as stated in wiki: browser, OS, plugins, etc.
#### Actual Behaviour
beef cannot gather any information about hooked browser, only IP.
#### Steps to Reproduce
Usual beef setup.
|
defect
|
beef cannot detect any details abount hooked browser environment what version revision of beef are you using version alpha on what version of ruby ruby on what browser all on what operating system windows linux configuration beef configuration file beef version alpha more verbose messages server side debug false more verbose messages client side client debug false used for generating secure tokens crypto default value length interface ip restrictions restrictions subnet of ip addresses that can hook to the framework permitted hooking subnet subnet of ip addresses that can connect to the admin ui permitted ui subnet permitted ui subnet http server http debug true thin logging debug very verbose prints also full exception stack trace host port decrease this setting to ms if you want more responsiveness when sending modules and retrieving results note a poll timeout of less than ms might impact performance when hooking lots of browsers enabling websockets is generally better beef websocket enable xhr poll timeout reverse proxy nat if beef is running behind a reverse proxy or nat set the public hostname and port here public public hostname ip address public port experimental dns dns host dns port web admin user interface uri web ui basepath xxxxxui hook hook file fhgfhgfdh js hook session name fdsafdsa session cookie name hgffhfdh allow one or multiple origins to access the restful api using cors for multiple origins use restful api allow cors false cors allowed domains prefer websockets over xhr polling when possible websocket enable true port ws good success rate through proxies use encrypted websocketsecure note works only on https domains and with https support enabled in beef secure true secure port wssecure ws poll timeout poll beef every second imitate a specified web server default root page default error page server http response header web server imitation enable true type apache supported apache iis nginx hook false inject beef hook in http responses hook root false inject beef hook in the server home page experimental https support for the hook admin all other thin managed web services https enable false in production environments be sure to use a valid certificate signed for the value used in beef http dns host the domain name of the server where you run beef key beef key pem cert beef cert pem database for information on using other databases please read the readme databases file supported dbs sqlite mysql postgres note you must change the gemfile adding a gem require line like gem dm postgres adapter or gem dm mysql adapter if you want to switch drivers from sqlite to postgres or mysql finally run a bundle install command and start beef driver sqlite db file is only used for sqlite db file db beef db db connection information is only used for mysql postgres db host localhost db port db name beef db user beef db passwd beef db encoding utf credentials to authenticate in beef used by both the restful api and the admin ui extension credentials user beef passwd beef autorun rule engine autorun this is used when rule chain mode type is nested forward needed as command results are checked via setinterval to ensure that we can wait for async command results the timeout is needed to prevent infinite loops or eventually continue execution regardless of results if you re chaining multiple async modules and you expect them to complete in more than seconds increase the timeout result poll interval result poll timeout if the modules doesn t return status results and timeout exceeded continue anyway with the chain this is useful to call modules nested forward chain mode that are not returning their status results continue after timeout true enables dns lookups on zombie ip addresses dns hostname lookup true ip geolocation note requires maxmind database curl o gunzip geolitecity dat gz mkdir opt geoip mv geolitecity dat opt geoip geoip enable true database opt geoip geolitecity dat integration with phishingfrenzy if enabled beef will try to get the uid parameter value from the hooked uri as this is used by phishingfrenzy to uniquely identify the victims in this way you can easily associate phishing emails with hooked browser integration phishing frenzy enable false you may override default extension configuration parameters here extension requester enable true proxy enable true key beef key pem cert beef cert pem metasploit enable false social engineering enable true evasion enable false console shell enable false ipec enable true this is still experimental disable it in kali because it doesn t work with the current version of ruby rubydns older version is required by beef xss dns enable false this is still experimental dns rebinding enable false summary i m able to hook browsers they appear in beef ui but beef cannot gather any information about hooked browser only ip expected behaviour beef should gather information about hooked browser as stated in wiki browser os plugins etc actual behaviour beef cannot gather any information about hooked browser only ip steps to reproduce usual beef setup
| 1
|
24,762
| 3,908,743,459
|
IssuesEvent
|
2016-04-19 16:51:11
|
Spreads/Spreads
|
https://api.github.com/repos/Spreads/Spreads
|
closed
|
Backpressure and conflation
|
design enhancement performance
|
> So I was wondering how Spreads deals with backpressure and a slow consumer in real-time.
> If I take the index example from your readme, it Repeats all of the input price Series and joins them into a new Series. When I start consuming the result, it ticks every time any of the inputs has a tick. Obviously I could sample the result to get a slower rate, but is there a way of conflating it, so I get results as fast as I can consume them, skipping out intermediate results which I can’t use? Apologies if this is obvious, but I’m just starting to find my way around the source at the moment.
Backpressure is not an issue because Spreads series are pull-based. We assume that data is written to ISeries, more specifically to some IOrderedMap by a parallel process that is able to consume all the data from a source. Spreads's cursors then try to catch up as fast as they can similar to the Disruptor pattern. When consumers could catch up, they await on any write operation to the maps and react immediately, so it is reactive pull.
To get only the latest data, you could just use Last property of inputs. But this is a tricky point. Joining N continuous series is non deterministic in general, but real-world ticks data is usually serialized by some trade id. This is work-in-progress and the relevant piece of code is [here](https://github.com/Spreads/Spreads/blob/master/src/Spreads.Collections/Spreads.Series.fs#L1428). I was going to implement the logic you describe via some settings flag, but now it is just an exception.
So if you have a case when your computations are heavy and consumers cannot catch up, skipping intermediate data in favor of the very latest points is probably a good solution. I will return to this idea and create a cursor for such a use case.
|
1.0
|
Backpressure and conflation - > So I was wondering how Spreads deals with backpressure and a slow consumer in real-time.
> If I take the index example from your readme, it Repeats all of the input price Series and joins them into a new Series. When I start consuming the result, it ticks every time any of the inputs has a tick. Obviously I could sample the result to get a slower rate, but is there a way of conflating it, so I get results as fast as I can consume them, skipping out intermediate results which I can’t use? Apologies if this is obvious, but I’m just starting to find my way around the source at the moment.
Backpressure is not an issue because Spreads series are pull-based. We assume that data is written to ISeries, more specifically to some IOrderedMap by a parallel process that is able to consume all the data from a source. Spreads's cursors then try to catch up as fast as they can similar to the Disruptor pattern. When consumers could catch up, they await on any write operation to the maps and react immediately, so it is reactive pull.
To get only the latest data, you could just use Last property of inputs. But this is a tricky point. Joining N continuous series is non deterministic in general, but real-world ticks data is usually serialized by some trade id. This is work-in-progress and the relevant piece of code is [here](https://github.com/Spreads/Spreads/blob/master/src/Spreads.Collections/Spreads.Series.fs#L1428). I was going to implement the logic you describe via some settings flag, but now it is just an exception.
So if you have a case when your computations are heavy and consumers cannot catch up, skipping intermediate data in favor of the very latest points is probably a good solution. I will return to this idea and create a cursor for such a use case.
|
non_defect
|
backpressure and conflation so i was wondering how spreads deals with backpressure and a slow consumer in real time if i take the index example from your readme it repeats all of the input price series and joins them into a new series when i start consuming the result it ticks every time any of the inputs has a tick obviously i could sample the result to get a slower rate but is there a way of conflating it so i get results as fast as i can consume them skipping out intermediate results which i can’t use apologies if this is obvious but i’m just starting to find my way around the source at the moment backpressure is not an issue because spreads series are pull based we assume that data is written to iseries more specifically to some iorderedmap by a parallel process that is able to consume all the data from a source spreads s cursors then try to catch up as fast as they can similar to the disruptor pattern when consumers could catch up they await on any write operation to the maps and react immediately so it is reactive pull to get only the latest data you could just use last property of inputs but this is a tricky point joining n continuous series is non deterministic in general but real world ticks data is usually serialized by some trade id this is work in progress and the relevant piece of code is i was going to implement the logic you describe via some settings flag but now it is just an exception so if you have a case when your computations are heavy and consumers cannot catch up skipping intermediate data in favor of the very latest points is probably a good solution i will return to this idea and create a cursor for such a use case
| 0
|
20,208
| 3,562,300,498
|
IssuesEvent
|
2016-01-24 12:18:41
|
postcss/postcss.org
|
https://api.github.com/repos/postcss/postcss.org
|
opened
|
Sketch for plugin landing-page
|
design needed
|
We need a sketch for how the plugin landing page will look like
My suggestions:
* Search box to find plugins #17
* Featured plugins?
* List of plugins
Ref #56 #17
|
1.0
|
Sketch for plugin landing-page - We need a sketch for how the plugin landing page will look like
My suggestions:
* Search box to find plugins #17
* Featured plugins?
* List of plugins
Ref #56 #17
|
non_defect
|
sketch for plugin landing page we need a sketch for how the plugin landing page will look like my suggestions search box to find plugins featured plugins list of plugins ref
| 0
|
30,592
| 6,189,501,209
|
IssuesEvent
|
2017-07-04 13:07:04
|
el-mejor/LifeTimeV3
|
https://api.github.com/repos/el-mejor/LifeTimeV3
|
closed
|
Default Font - even when Arial isn't available - is there a generic default font?
|
defect
|
Otherwise there is a crash, when the hard coded default font isn't found
|
1.0
|
Default Font - even when Arial isn't available - is there a generic default font? - Otherwise there is a crash, when the hard coded default font isn't found
|
defect
|
default font even when arial isn t available is there a generic default font otherwise there is a crash when the hard coded default font isn t found
| 1
|
15,553
| 2,860,394,326
|
IssuesEvent
|
2015-06-03 15:39:00
|
yannrichet/jmathplot
|
https://api.github.com/repos/yannrichet/jmathplot
|
closed
|
library does not contain org.math.array package
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. include the pckage
What is the expected output? What do you see instead?
expect it to the exist and for the tutorial to work as planned; it does not
exist, so cannot import anything from org.math.array.
What version of the product are you using? On what operating system?
2009 version, Windows 8
Please provide any additional information below.
```
Original issue reported on code.google.com by `byron.yi...@gmail.com` on 29 Jun 2014 at 8:24
|
1.0
|
library does not contain org.math.array package - ```
What steps will reproduce the problem?
1. include the pckage
What is the expected output? What do you see instead?
expect it to the exist and for the tutorial to work as planned; it does not
exist, so cannot import anything from org.math.array.
What version of the product are you using? On what operating system?
2009 version, Windows 8
Please provide any additional information below.
```
Original issue reported on code.google.com by `byron.yi...@gmail.com` on 29 Jun 2014 at 8:24
|
defect
|
library does not contain org math array package what steps will reproduce the problem include the pckage what is the expected output what do you see instead expect it to the exist and for the tutorial to work as planned it does not exist so cannot import anything from org math array what version of the product are you using on what operating system version windows please provide any additional information below original issue reported on code google com by byron yi gmail com on jun at
| 1
|
137,693
| 30,736,520,582
|
IssuesEvent
|
2023-07-28 08:06:20
|
postmanlabs/postman-app-support
|
https://api.github.com/repos/postmanlabs/postman-app-support
|
closed
|
Postman Extension for VSCode is not working
|
bug vscode
|
### Is there an existing issue for this?
- [X] I have searched the tracker for existing similar issues and I know that duplicates will be closed
### Describe the Issue
The first time I installed the extension provided from the marketplace, there was no problem with the operation, but from some point it does not operate normally.
### Steps To Reproduce
1. Install the [extension](https://marketplace.visualstudio.com/items?itemName=Postman.postman-for-vscode)
2. Open that extension from the side menu -> **infinite Loading**
3. After entering Postman through the command palette
4. Run the provided commands (All commands give the following error)
- Error Message Pattern: `command 'postman-for-vscode.*' not found`
### Screenshots or Videos

### Operating System
macOS
### Postman Version
13.4
### Postman Platform
Both
### User Account Type
Signed In User
### Additional Context?
_No response_
|
1.0
|
Postman Extension for VSCode is not working - ### Is there an existing issue for this?
- [X] I have searched the tracker for existing similar issues and I know that duplicates will be closed
### Describe the Issue
The first time I installed the extension provided from the marketplace, there was no problem with the operation, but from some point it does not operate normally.
### Steps To Reproduce
1. Install the [extension](https://marketplace.visualstudio.com/items?itemName=Postman.postman-for-vscode)
2. Open that extension from the side menu -> **infinite Loading**
3. After entering Postman through the command palette
4. Run the provided commands (All commands give the following error)
- Error Message Pattern: `command 'postman-for-vscode.*' not found`
### Screenshots or Videos

### Operating System
macOS
### Postman Version
13.4
### Postman Platform
Both
### User Account Type
Signed In User
### Additional Context?
_No response_
|
non_defect
|
postman extension for vscode is not working is there an existing issue for this i have searched the tracker for existing similar issues and i know that duplicates will be closed describe the issue the first time i installed the extension provided from the marketplace there was no problem with the operation but from some point it does not operate normally steps to reproduce install the open that extension from the side menu infinite loading after entering postman through the command palette run the provided commands all commands give the following error error message pattern command postman for vscode not found screenshots or videos operating system macos postman version postman platform both user account type signed in user additional context no response
| 0
|
38,305
| 8,752,943,199
|
IssuesEvent
|
2018-12-14 06:10:01
|
zealdocs/zeal
|
https://api.github.com/repos/zealdocs/zeal
|
closed
|
Docset not loading
|
resolution/invalid scope/misc/docsets type/defect
|
I created a new docset, but I can't get it to show up in the interface or produce any errors/debug. Please let me know how I can get some sort of output to debug my docset, or suggest another way to do so. I've created the docset directory in the APPDATA location on Windows along with the other 3 working docsets I have. Thanks!! Looking forward to using this EXTENSIVELY.
|
1.0
|
Docset not loading - I created a new docset, but I can't get it to show up in the interface or produce any errors/debug. Please let me know how I can get some sort of output to debug my docset, or suggest another way to do so. I've created the docset directory in the APPDATA location on Windows along with the other 3 working docsets I have. Thanks!! Looking forward to using this EXTENSIVELY.
|
defect
|
docset not loading i created a new docset but i can t get it to show up in the interface or produce any errors debug please let me know how i can get some sort of output to debug my docset or suggest another way to do so i ve created the docset directory in the appdata location on windows along with the other working docsets i have thanks looking forward to using this extensively
| 1
|
38,807
| 8,967,109,764
|
IssuesEvent
|
2019-01-29 01:49:51
|
svigerske/Ipopt
|
https://api.github.com/repos/svigerske/Ipopt
|
closed
|
MATLAB crashes when IPOPT converges to machine precision
|
Ipopt defect major
|
Issue created by migration from Trac.
Original creator: drosos.kourounis
Original creation time: 2014-11-02 17:59:59
Assignee: ipopt-team
Version: 3.11
Dear all,
I observed, that MATLAB crashes when IPOPT converges very close or even beyond the machine precision. I tried that both with the mex binaries provided at the site version 3.11.8 and the mex files compiled manually following the instructions for version 3.11.9. Both Matlab R2012b and R2014a were tested. It seems that it is independent of any compilation details and appears only when some of the several tolerances (it is not clear which one) converges to or below machine precision. Below
I am running the examplelasso.m, where the tolerance was set to 1e-17 to force the reproduction of the bug. At iteration 14 MATLAB crashes so we do not see the iteration 15. The same thing happens for several different examples when the tolerance is set close to machine epsilon.
```
This is Ipopt version 3.11.9, running with linear solver ma57.
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 32
Number of nonzeros in Lagrangian Hessian.............: 36
Total number of variables............................: 16
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 16
inequality constraints with only lower bounds: 16
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) |d| lg(rg) alpha_du alpha_pr ls
0 2.0362309e+03 0.00e+00 3.39e+01 0.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 1.2058144e+03 0.00e+00 2.54e+01 -6.0 3.86e+00 - 2.12e-01 2.56e-01f 1
2 7.1857478e+01 0.00e+00 5.91e-01 0.1 7.22e+00 - 7.18e-01 1.00e+00f 1
3 6.7127651e+01 0.00e+00 6.28e-07 -1.2 7.90e-01 - 1.00e+00 1.00e+00f 1
4 5.6437844e+01 0.00e+00 2.21e-03 -2.0 1.86e+00 - 1.00e+00 9.53e-01f 1
5 5.5727100e+01 0.00e+00 3.43e-03 -7.8 1.88e-01 - 9.91e-01 8.26e-01f 1
6 5.5643835e+01 0.00e+00 1.31e-02 -4.4 3.30e-02 - 1.00e+00 7.77e-01f 1
7 5.5629288e+01 0.00e+00 5.37e-03 -5.1 1.07e-02 - 1.00e+00 8.69e-01f 1
8 5.5626581e+01 0.00e+00 7.28e-05 -6.2 5.04e-03 - 1.00e+00 9.97e-01f 1
9 5.5626338e+01 0.00e+00 1.02e-12 -7.0 1.46e-03 - 1.00e+00 1.00e+00f 1
iter objective inf_pr inf_du lg(mu) |d| lg(rg) alpha_du alpha_pr ls
10 5.5626323e+01 0.00e+00 5.13e-06 -11.9 2.14e-04 - 1.00e+00 9.95e-01f 1
11 5.5626323e+01 0.00e+00 6.58e-15 -13.7 5.83e-06 - 1.00e+00 1.00e+00h 1
12 5.5626323e+01 0.00e+00 4.52e-15 -17.0 2.89e-09 - 1.00e+00 1.00e+00h 1
13 5.5626323e+01 0.00e+00 5.75e-15 -17.3 1.36e-15 - 1.00e+00 1.00e+00 0
14 5.5626323e+01 5.02e-17 4.19e-15 -17.3 4.27e-16 - 1.00e+00 1.00e+00T 0
```
(changed by @svigerske at 2016-04-30 11:23:49)
|
1.0
|
MATLAB crashes when IPOPT converges to machine precision - Issue created by migration from Trac.
Original creator: drosos.kourounis
Original creation time: 2014-11-02 17:59:59
Assignee: ipopt-team
Version: 3.11
Dear all,
I observed, that MATLAB crashes when IPOPT converges very close or even beyond the machine precision. I tried that both with the mex binaries provided at the site version 3.11.8 and the mex files compiled manually following the instructions for version 3.11.9. Both Matlab R2012b and R2014a were tested. It seems that it is independent of any compilation details and appears only when some of the several tolerances (it is not clear which one) converges to or below machine precision. Below
I am running the examplelasso.m, where the tolerance was set to 1e-17 to force the reproduction of the bug. At iteration 14 MATLAB crashes so we do not see the iteration 15. The same thing happens for several different examples when the tolerance is set close to machine epsilon.
```
This is Ipopt version 3.11.9, running with linear solver ma57.
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 32
Number of nonzeros in Lagrangian Hessian.............: 36
Total number of variables............................: 16
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 16
inequality constraints with only lower bounds: 16
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) |d| lg(rg) alpha_du alpha_pr ls
0 2.0362309e+03 0.00e+00 3.39e+01 0.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 1.2058144e+03 0.00e+00 2.54e+01 -6.0 3.86e+00 - 2.12e-01 2.56e-01f 1
2 7.1857478e+01 0.00e+00 5.91e-01 0.1 7.22e+00 - 7.18e-01 1.00e+00f 1
3 6.7127651e+01 0.00e+00 6.28e-07 -1.2 7.90e-01 - 1.00e+00 1.00e+00f 1
4 5.6437844e+01 0.00e+00 2.21e-03 -2.0 1.86e+00 - 1.00e+00 9.53e-01f 1
5 5.5727100e+01 0.00e+00 3.43e-03 -7.8 1.88e-01 - 9.91e-01 8.26e-01f 1
6 5.5643835e+01 0.00e+00 1.31e-02 -4.4 3.30e-02 - 1.00e+00 7.77e-01f 1
7 5.5629288e+01 0.00e+00 5.37e-03 -5.1 1.07e-02 - 1.00e+00 8.69e-01f 1
8 5.5626581e+01 0.00e+00 7.28e-05 -6.2 5.04e-03 - 1.00e+00 9.97e-01f 1
9 5.5626338e+01 0.00e+00 1.02e-12 -7.0 1.46e-03 - 1.00e+00 1.00e+00f 1
iter objective inf_pr inf_du lg(mu) |d| lg(rg) alpha_du alpha_pr ls
10 5.5626323e+01 0.00e+00 5.13e-06 -11.9 2.14e-04 - 1.00e+00 9.95e-01f 1
11 5.5626323e+01 0.00e+00 6.58e-15 -13.7 5.83e-06 - 1.00e+00 1.00e+00h 1
12 5.5626323e+01 0.00e+00 4.52e-15 -17.0 2.89e-09 - 1.00e+00 1.00e+00h 1
13 5.5626323e+01 0.00e+00 5.75e-15 -17.3 1.36e-15 - 1.00e+00 1.00e+00 0
14 5.5626323e+01 5.02e-17 4.19e-15 -17.3 4.27e-16 - 1.00e+00 1.00e+00T 0
```
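For context, IEEE 754 double-precision machine epsilon is about 2.22e-16, so the requested tolerance of 1e-17 is below what the arithmetic can resolve. A defensive sketch in Python (purely illustrative; `safe_tolerance` is a hypothetical helper, not part of Ipopt's options interface):

```python
import sys

def safe_tolerance(requested_tol, floor_factor=10.0):
    """Clamp a requested solver tolerance to a small multiple of machine epsilon."""
    eps = sys.float_info.epsilon  # ~2.22e-16 for IEEE 754 doubles
    return max(requested_tol, floor_factor * eps)

print(safe_tolerance(1e-17))  # clamped up to 10 * eps
print(safe_tolerance(1e-8))   # left unchanged
```

With such a clamp in the calling script, a tolerance like 1e-17 would be raised to roughly 2.2e-15 before ever reaching the solver.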
(changed by @svigerske at 2016-04-30 11:23:49)
|
defect
|
matlab crashes when ipopt converges to machine precision issue created by migration from trac original creator drosos kourounis original creation time assignee ipopt team version dear all i observed that matlab crashes when ipopt converges very close or even beyond the machine precision i tried that both with the mex binaries provided at the site version and the mex files compiled manually following the instructions for version both matlab and were tested it seems that it is independent of any compilation details and appears only when some of the several tolerances it is not clear which one converges to or below machine precision below i am running the examplelasso m where the tolerance was set to to force the reproduction of the bug at iteration matlab crashes so we do not see the iteration the same thing happens for several different examples when the tolerance is set close to machine epsilon this is ipopt version running with linear solver number of nonzeros in equality constraint jacobian number of nonzeros in inequality constraint jacobian number of nonzeros in lagrangian hessian total number of variables variables with only lower bounds variables with lower and upper bounds variables with only upper bounds total number of equality constraints total number of inequality constraints inequality constraints with only lower bounds inequality constraints with lower and upper bounds inequality constraints with only upper bounds iter objective inf pr inf du lg mu d lg rg alpha du alpha pr ls iter objective inf pr inf du lg mu d lg rg alpha du alpha pr ls changed by svigerske at
| 1
|
7,756
| 2,610,631,725
|
IssuesEvent
|
2015-02-26 21:32:02
|
alistairreilly/open-ig
|
https://api.github.com/repos/alistairreilly/open-ig
|
closed
|
Automatic building repair
|
auto-migrated Priority-Medium Type-Defect
|
```
Game version: 0.92
Operating System: (e.g., Windows 7 x86, Windows XP 64-bit)
win7 64-bit
Java runtime version: (run java -version)
7
Installed using the Launcher? (yes, no)
yes
What steps will reproduce the problem?
1. Enable automatic repair of buildings
2. Credits are still consumed even when no building is damaged
```
Original issue reported on code.google.com by `Jozsef.T...@gmail.com` on 14 Aug 2011 at 8:14
|
1.0
|
Automatic building repair - ```
Game version: 0.92
Operating System: (e.g., Windows 7 x86, Windows XP 64-bit)
win7 64-bit
Java runtime version: (run java -version)
7
Installed using the Launcher? (yes, no)
yes
What steps will reproduce the problem?
1. Enable automatic repair of buildings
2. Credits are still consumed even when no building is damaged
```
Original issue reported on code.google.com by `Jozsef.T...@gmail.com` on 14 Aug 2011 at 8:14
|
defect
|
auto épület javítás game version operating system e g windows windows xp bit bit java runtime version run java version installed using the launcher yes no yes what steps will reproduce the problem épületek automatikus javítás bekapcsolása akkor is fogy a credit ha nincs sérült épület original issue reported on code google com by jozsef t gmail com on aug at
| 1
|
47,698
| 13,066,113,367
|
IssuesEvent
|
2020-07-30 21:01:19
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
steamshovel results.i3 (preloaded for UWRF) Segmentation fault (Trac #1014)
|
Migrated from Trac combo core defect
|
Students at UWRF had preloaded a results.i3 file for boot camp, and I'm running into a repeatable segfault when moving into Count von Count from any other frame (both P and Q), and again when moving out of Count von Count to any other frame. Tested on Ubuntu 15.04.
Migrated from https://code.icecube.wisc.edu/ticket/1014
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:10",
"description": "students at UWRF had preloaded a results.i3 for bootcamp and I'm running into a repeatable segfault moving into Count von Count from any other frame (both P and Q) and when moving out of Count von Count to any other frame. Tested on Ubuntu 15.04.",
"reporter": "jdiercks",
"cc": "",
"resolution": "worksforme",
"_ts": "1458335650323600",
"component": "combo core",
"summary": "steamshovel results.i3 (preloaded for UWRF) Segmentation fault",
"priority": "normal",
"keywords": "",
"time": "2015-06-09T19:16:54",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
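The migrated ticket metadata above is plain JSON, so such records can be filtered programmatically during triage; a minimal Python sketch using a subset of the fields shown (the filtering logic is illustrative, not part of the migration tooling):

```python
import json

# subset of the Trac ticket metadata shown above
ticket_json = '''{
  "status": "closed",
  "resolution": "worksforme",
  "component": "combo core",
  "type": "defect",
  "owner": "hdembinski"
}'''

ticket = json.loads(ticket_json)
# e.g. select only closed defects during triage
is_closed_defect = ticket["status"] == "closed" and ticket["type"] == "defect"
print(is_closed_defect)  # True
```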
|
1.0
|
steamshovel results.i3 (preloaded for UWRF) Segmentation fault (Trac #1014) - Students at UWRF had preloaded a results.i3 file for boot camp, and I'm running into a repeatable segfault when moving into Count von Count from any other frame (both P and Q), and again when moving out of Count von Count to any other frame. Tested on Ubuntu 15.04.
Migrated from https://code.icecube.wisc.edu/ticket/1014
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:10",
"description": "students at UWRF had preloaded a results.i3 for bootcamp and I'm running into a repeatable segfault moving into Count von Count from any other frame (both P and Q) and when moving out of Count von Count to any other frame. Tested on Ubuntu 15.04.",
"reporter": "jdiercks",
"cc": "",
"resolution": "worksforme",
"_ts": "1458335650323600",
"component": "combo core",
"summary": "steamshovel results.i3 (preloaded for UWRF) Segmentation fault",
"priority": "normal",
"keywords": "",
"time": "2015-06-09T19:16:54",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
|
defect
|
steamshovel results preloaded for uwrf segmentation fault trac students at uwrf had preloaded a results for bootcamp and i m running into a repeatable segfault moving into count von count from any other frame both p and q and when moving out of count von count to any other frame tested on ubuntu migrated from json status closed changetime description students at uwrf had preloaded a results for bootcamp and i m running into a repeatable segfault moving into count von count from any other frame both p and q and when moving out of count von count to any other frame tested on ubuntu reporter jdiercks cc resolution worksforme ts component combo core summary steamshovel results preloaded for uwrf segmentation fault priority normal keywords time milestone owner hdembinski type defect
| 1
|
73,665
| 24,746,206,856
|
IssuesEvent
|
2022-10-21 09:54:06
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
opened
|
[🐛 Bug]: Unable to instance ChromeDriver using `dotnet-script`
|
I-defect needs-triaging
|
### What happened?
Hello,
I'm using `Selenium.WebDriver` and `WebDriverManager` packages in a [dotnet script](https://github.com/dotnet-script/dotnet-script).
During `ChromeDriver` initialization (when instancing it), an exception about a missing package is thrown. In a normal console app this issue does not happen.
Steps:
- install `dotnet-script` tool
- paste the code in a text file with `.csx` extensions
- `dotnet-script .\thatScript.csx`
### How can we reproduce the issue?
```shell
#
#r "nuget: Selenium.WebDriver, 4.4.0"
#r "nuget: WebDriverManager, 2.15.0"
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;
using WebDriverManager;
using WebDriverManager.DriverConfigs.Impl;
using WebDriverManager.Helpers;
//Let's log user in
// Instance chrome driver and the connection with it
ChromeOptions options = new ChromeOptions();
options.AddArgument("--no-sandbox");
options.AddArgument("--headless");
//If chrome instance is locally available (for example personal PC)
// Download chrome driver in order to be able use it
new DriverManager().SetUpDriver(new ChromeConfig(), VersionResolveStrategy.MatchingBrowser);
ChromeDriverService service = ChromeDriverService.CreateDefaultService();
service.EnableVerboseLogging = false;
service.SuppressInitialDiagnosticInformation = true;
service.HideCommandPromptWindow = true;
var driver = new ChromeDriver(service, options);
System.Console.WriteLine("Driver initialized");
```
### Relevant log output
```shell
System.IO.FileLoadException: Could not load file or assembly 'Newtonsoft.Json, Version=13.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed'. Could not find or load a specific file. (0x80131621)
File name: 'Newtonsoft.Json, Version=13.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed'
---> System.IO.FileLoadException: Could not load file or assembly 'Newtonsoft.Json, Version=13.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed'.
at System.Runtime.Loader.AssemblyLoadContext.LoadFromPath(IntPtr ptrNativeAssemblyLoadContext, String ilPath, String niPath, ObjectHandleOnStack retAssembly)
at System.Runtime.Loader.AssemblyLoadContext.LoadFromAssemblyPath(String assemblyPath)
at System.Reflection.Assembly.LoadFrom(String assemblyFile)
at System.Reflection.Assembly.LoadFromResolveHandler(Object sender, ResolveEventArgs args)
at System.Runtime.Loader.AssemblyLoadContext.InvokeResolveEvent(ResolveEventHandler eventHandler, RuntimeAssembly assembly, String name)
```
### Operating System
Windows 11
### Selenium version
C# 4.4.0
### What are the browser(s) and version(s) where you see this issue?
Chrome 106.0.5249.119
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver 106.0.5249.61
### Are you using Selenium Grid?
_No response_
|
1.0
|
[🐛 Bug]: Unable to instance ChromeDriver using `dotnet-script` - ### What happened?
Hello,
I'm using `Selenium.WebDriver` and `WebDriverManager` packages in a [dotnet script](https://github.com/dotnet-script/dotnet-script).
During `ChromeDriver` initialization (when instancing it), an exception about a missing package is thrown. In a normal console app this issue does not happen.
Steps:
- install `dotnet-script` tool
- paste the code in a text file with `.csx` extensions
- `dotnet-script .\thatScript.csx`
### How can we reproduce the issue?
```shell
#
#r "nuget: Selenium.WebDriver, 4.4.0"
#r "nuget: WebDriverManager, 2.15.0"
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;
using WebDriverManager;
using WebDriverManager.DriverConfigs.Impl;
using WebDriverManager.Helpers;
//Let's log user in
// Instance chrome driver and the connection with it
ChromeOptions options = new ChromeOptions();
options.AddArgument("--no-sandbox");
options.AddArgument("--headless");
//If chrome instance is locally available (for example personal PC)
// Download chrome driver in order to be able use it
new DriverManager().SetUpDriver(new ChromeConfig(), VersionResolveStrategy.MatchingBrowser);
ChromeDriverService service = ChromeDriverService.CreateDefaultService();
service.EnableVerboseLogging = false;
service.SuppressInitialDiagnosticInformation = true;
service.HideCommandPromptWindow = true;
var driver = new ChromeDriver(service, options);
System.Console.WriteLine("Driver initialized");
```
### Relevant log output
```shell
System.IO.FileLoadException: Could not load file or assembly 'Newtonsoft.Json, Version=13.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed'. Could not find or load a specific file. (0x80131621)
File name: 'Newtonsoft.Json, Version=13.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed'
---> System.IO.FileLoadException: Could not load file or assembly 'Newtonsoft.Json, Version=13.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed'.
at System.Runtime.Loader.AssemblyLoadContext.LoadFromPath(IntPtr ptrNativeAssemblyLoadContext, String ilPath, String niPath, ObjectHandleOnStack retAssembly)
at System.Runtime.Loader.AssemblyLoadContext.LoadFromAssemblyPath(String assemblyPath)
at System.Reflection.Assembly.LoadFrom(String assemblyFile)
at System.Reflection.Assembly.LoadFromResolveHandler(Object sender, ResolveEventArgs args)
at System.Runtime.Loader.AssemblyLoadContext.InvokeResolveEvent(ResolveEventHandler eventHandler, RuntimeAssembly assembly, String name)
```
### Operating System
Windows 11
### Selenium version
C# 4.4.0
### What are the browser(s) and version(s) where you see this issue?
Chrome 106.0.5249.119
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver 106.0.5249.61
### Are you using Selenium Grid?
_No response_
|
defect
|
unable to instance chromedriver using dotnet script what happened hello i m using selenium webdriver and webdrivermanager packages in a during the chromedriver initialization when instancing it and exception regarding a missing package is fired in a normal console app this issue does not happen steps install dotnet script tool paste the code in a text file with csx extensions dotnet script thatscript csx how can we reproduce the issue shell r nuget selenium webdriver r nuget webdrivermanager using openqa selenium using openqa selenium chrome using openqa selenium support ui using webdrivermanager using webdrivermanager driverconfigs impl using webdrivermanager helpers let s log user in instance chrome driver and the connection with it chromeoptions options new chromeoptions options addargument no sandbox options addargument headless if chrome instance is locally available for example personal pc download chrome driver in order to be able use it new drivermanager setupdriver new chromeconfig versionresolvestrategy matchingbrowser chromedriverservice service chromedriverservice createdefaultservice service enableverboselogging false service suppressinitialdiagnosticinformation true service hidecommandpromptwindow true var driver new chromedriver service options system console writeline driver initialized relevant log output shell system io fileloadexception could not load file or assembly newtonsoft json version culture neutral publickeytoken could not find or load a specific file file name newtonsoft json version culture neutral publickeytoken system io fileloadexception could not load file or assembly newtonsoft json version culture neutral publickeytoken at system runtime loader assemblyloadcontext loadfrompath intptr ptrnativeassemblyloadcontext string ilpath string nipath objecthandleonstack retassembly at system runtime loader assemblyloadcontext loadfromassemblypath string assemblypath at system reflection assembly loadfrom string assemblyfile at system 
reflection assembly loadfromresolvehandler object sender resolveeventargs args at system runtime loader assemblyloadcontext invokeresolveevent resolveeventhandler eventhandler runtimeassembly assembly string name operating system windows selenium version c what are the browser s and version s where you see this issue chrome what are the browser driver s and version s where you see this issue chromedriver are you using selenium grid no response
| 1
|
27,866
| 5,116,249,484
|
IssuesEvent
|
2017-01-07 01:32:47
|
jccastillo0007/eFacturaT
|
https://api.github.com/repos/jccastillo0007/eFacturaT
|
opened
|
THE KEY GENERATOR DOESN'T WORK, NEITHER FOR FOLIOS NOR FOR KEYS
|
bug defect
|
IN PRODUCTION. I DON'T KNOW WHAT YOU DID TO IT THIS TIME...
|
1.0
|
THE KEY GENERATOR DOESN'T WORK, NEITHER FOR FOLIOS NOR FOR KEYS - IN PRODUCTION. I DON'T KNOW WHAT YOU DID TO IT THIS TIME...
|
defect
|
no jala el key generator ni para folios ni para llaves en producción no se que le hiciste ora
| 1
|
531,796
| 15,511,705,812
|
IssuesEvent
|
2021-03-12 00:07:07
|
rdsaliba/notorious-eng
|
https://api.github.com/repos/rdsaliba/notorious-eng
|
closed
|
Design Parameter Handling
|
Database High Priority
|
As a developer, I want to handle the architecture of Adding/Retrieving Model parameters.
Description:
Models have multiple parameters, and each parameter has a different type. To support this, we will change how the classifier is stored in the database: instead of storing the actual classifier, we will store the model itself, and the model will use Weka's internal getters and setters to handle all parameter manipulation. We will also need custom parameter classes and a way to keep track of the live and eval models in the DB.
Acceptance Criteria:
- [x] Implement a pattern to handle different types of parameters
- [x] Adjust the trained_model table to handle live and eval models
- [x] Adjust the server module to transition from using the classifier to using the model object when using the db
- [ ] Unit Tests
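The project itself is Java/Weka, but the typed-parameter pattern the story describes can be sketched as follows (Python for brevity; all names here are hypothetical illustrations, not the actual classes):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Parameter:
    """One model parameter with a validated setter, mimicking typed getters/setters."""
    name: str
    value: Any
    validate: Callable[[Any], bool]

    def set(self, new_value):
        if not self.validate(new_value):
            raise TypeError(f"invalid value for {self.name}: {new_value!r}")
        self.value = new_value

# hypothetical registry of one model's parameters, each with its own type
params = {
    "batch_size": Parameter("batch_size", 100, lambda v: isinstance(v, int) and v > 0),
    "ridge": Parameter("ridge", 1e-8, lambda v: isinstance(v, float) and v >= 0.0),
}
params["batch_size"].set(64)   # accepted: positive int
```

Storing the model rather than the bare classifier, plus a registry like this, lets the server reject ill-typed parameter values before they ever reach the learner.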
|
1.0
|
Design Parameter Handling - As a developer, I want to handle the architecture of Adding/Retrieving Model parameters.
Description:
Models have multiple parameters, and each parameter has a different type. To support this, we will change how the classifier is stored in the database: instead of storing the actual classifier, we will store the model itself, and the model will use Weka's internal getters and setters to handle all parameter manipulation. We will also need custom parameter classes and a way to keep track of the live and eval models in the DB.
Acceptance Criteria:
- [x] Implement a pattern to handle different types of parameters
- [x] Adjust the trained_model table to handle live and eval models
- [x] Adjust the server module to transition from using the classifier to using the model object when using the db
- [ ] Unit Tests
|
non_defect
|
design parameter handling as a developer i want to handle the architecture of adding retrieving model parameters description models have multiple parameters and each parameter has a different type to achieve this we will change how the classifier is stored in the database instead of storing the actual classifier we want to store the model itself the model will be able to use weka s internal getters and setters to handle all parameters manipulation we will also need custom parameters classes and to keep track of the live and eval models in the db acceptance criteria implement a pattern to handle different types of parameters adjust the trained model table to handle live and eval models adjust the server module to transition from using the classifier to using the model object when using the db unit tests
| 0
|
3,518
| 2,610,064,132
|
IssuesEvent
|
2015-02-26 18:18:52
|
chrsmith/jsjsj122
|
https://api.github.com/repos/chrsmith/jsjsj122
|
opened
|
Huangyan: where to get a professional infertility check
|
auto-migrated Priority-Medium Type-Defect
|
```
Huangyan: where to get a professional infertility check. [Taizhou Wuzhou Reproductive Hospital] 24-hour health consultation
hotline: 0576-88066933 (QQ: 800080609) (WeChat: tzwzszyy). Hospital address: 229 Fengnan Road,
Jiaojiang District, Taizhou (next to the Fengnan roundabout). Bus routes: take bus 104, 108,
118 or 198, or the Jiaojiang-Jinqing bus, directly to the Fengnan neighborhood; or take bus 107, 105, 109,
112, 901 or 902 to Xingxing Square and walk to the hospital.
Treatment items: impotence, premature ejaculation, prostatitis, prostatic hyperplasia, balanitis, [...]
[...]sperm, azoospermia; phimosis, varicocele, gonorrhea, etc.
Taizhou Wuzhou Reproductive Hospital is the largest men's health hospital in Taizhou. Authoritative experts offer free [...]
[...] consultation; it has complete professional equipment for men's health examination and treatment, and charges strictly according to national stand[...]
[...]ards. Cutting-edge medical equipment, in step with the world. Authoritative experts, a model of professionalism.
Humanized service, everything centered on the patient.
For men's health, choose Taizhou Wuzhou Reproductive Hospital: professional men's care for men.
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:02
|
1.0
|
Huangyan: where to get a professional infertility check - ```
Huangyan: where to get a professional infertility check. [Taizhou Wuzhou Reproductive Hospital] 24-hour health consultation
hotline: 0576-88066933 (QQ: 800080609) (WeChat: tzwzszyy). Hospital address: 229 Fengnan Road,
Jiaojiang District, Taizhou (next to the Fengnan roundabout). Bus routes: take bus 104, 108,
118 or 198, or the Jiaojiang-Jinqing bus, directly to the Fengnan neighborhood; or take bus 107, 105, 109,
112, 901 or 902 to Xingxing Square and walk to the hospital.
Treatment items: impotence, premature ejaculation, prostatitis, prostatic hyperplasia, balanitis, [...]
[...]sperm, azoospermia; phimosis, varicocele, gonorrhea, etc.
Taizhou Wuzhou Reproductive Hospital is the largest men's health hospital in Taizhou. Authoritative experts offer free [...]
[...] consultation; it has complete professional equipment for men's health examination and treatment, and charges strictly according to national stand[...]
[...]ards. Cutting-edge medical equipment, in step with the world. Authoritative experts, a model of professionalism.
Humanized service, everything centered on the patient.
For men's health, choose Taizhou Wuzhou Reproductive Hospital: professional men's care for men.
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:02
|
defect
|
黄岩检查不育哪里专业 黄岩检查不育哪里专业【台州五洲生殖医院】 热线 微信号tzwzszyy 医院地址 台州市 (枫南大转盘旁)乘车线路 、 、 、 , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at
| 1
|
134,950
| 18,518,896,097
|
IssuesEvent
|
2021-10-20 13:14:08
|
vipinsun/fabric-sdk-go
|
https://api.github.com/repos/vipinsun/fabric-sdk-go
|
opened
|
CVE-2021-32804 (High) detected in tar-4.4.13.tgz
|
security vulnerability
|
## CVE-2021-32804 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.13.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.13.tgz">https://registry.npmjs.org/tar/-/tar-4.4.13.tgz</a></p>
<p>Path to dependency file: fabric-sdk-go/pkg/fab/ccpackager/nodepackager/testdata/event_cc/package.json</p>
<p>Path to vulnerable library: fabric-sdk-go/pkg/fab/ccpackager/nodepackager/testdata/event_cc/node_modules/fabric-shim/node_modules/tar/package.json,fabric-sdk-go/pkg/fab/ccpackager/nodepackager/testdata/example_cc/node_modules/fabric-shim/node_modules/tar/package.json,fabric-sdk-go/pkg/fab/ccpackager/nodepackager/testdata/example_cc1/node_modules/fabric-shim/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- fabric-shim-1.4.6.tgz (Root Library)
- grpc-1.24.3.tgz
- node-pre-gyp-0.15.0.tgz
- :x: **tar-4.4.13.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vipinsun/fabric-sdk-go/commit/432a85aa9d4094d52823bdb4be9cf19758df85e1">432a85aa9d4094d52823bdb4be9cf19758df85e1</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 6.1.1, 5.0.6, 4.4.14, and 3.3.2 has a arbitrary File Creation/Overwrite vulnerability due to insufficient absolute path sanitization. node-tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the `preservePaths` flag is not set to `true`. This is achieved by stripping the absolute path root from any absolute file paths contained in a tar file. For example `/home/user/.bashrc` would turn into `home/user/.bashrc`. This logic was insufficient when file paths contained repeated path roots such as `////home/user/.bashrc`. `node-tar` would only strip a single path root from such paths. When given an absolute file path with repeating path roots, the resulting path (e.g. `///home/user/.bashrc`) would still resolve to an absolute path, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.2, 4.4.14, 5.0.6 and 6.1.1. Users may work around this vulnerability without upgrading by creating a custom `onentry` method which sanitizes the `entry.path` or a `filter` method which removes entries with absolute paths. See referenced GitHub Advisory for details. Be aware of CVE-2021-32803 which fixes a similar bug in later versions of tar.
<p>Publish Date: 2021-08-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32804>CVE-2021-32804</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9">https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9</a></p>
<p>Release Date: 2021-08-03</p>
<p>Fix Resolution: tar - 3.2.2, 4.4.14, 5.0.6, 6.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
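node-tar is JavaScript; the Python sketch below only illustrates the sanitization gap the advisory describes, i.e. stripping a single path root versus all repeated roots (function names are hypothetical):

```python
def strip_root_once(path):
    """Buggy sanitization: removes only one leading '/', mirroring the pre-fix logic."""
    return path[1:] if path.startswith("/") else path

def strip_root(path):
    """Fixed sanitization: removes all repeated leading '/' characters."""
    return path.lstrip("/")

entry = "////home/user/.bashrc"
print(strip_root_once(entry))  # '///home/user/.bashrc' -- still resolves as absolute
print(strip_root(entry))       # 'home/user/.bashrc'    -- now relative
```

This mirrors why `////home/user/.bashrc` in a crafted archive still escaped the extraction directory before the fix: one pass of root-stripping leaves the path absolute.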
|
True
|
CVE-2021-32804 (High) detected in tar-4.4.13.tgz - ## CVE-2021-32804 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.13.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.13.tgz">https://registry.npmjs.org/tar/-/tar-4.4.13.tgz</a></p>
<p>Path to dependency file: fabric-sdk-go/pkg/fab/ccpackager/nodepackager/testdata/event_cc/package.json</p>
<p>Path to vulnerable library: fabric-sdk-go/pkg/fab/ccpackager/nodepackager/testdata/event_cc/node_modules/fabric-shim/node_modules/tar/package.json,fabric-sdk-go/pkg/fab/ccpackager/nodepackager/testdata/example_cc/node_modules/fabric-shim/node_modules/tar/package.json,fabric-sdk-go/pkg/fab/ccpackager/nodepackager/testdata/example_cc1/node_modules/fabric-shim/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- fabric-shim-1.4.6.tgz (Root Library)
- grpc-1.24.3.tgz
- node-pre-gyp-0.15.0.tgz
- :x: **tar-4.4.13.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vipinsun/fabric-sdk-go/commit/432a85aa9d4094d52823bdb4be9cf19758df85e1">432a85aa9d4094d52823bdb4be9cf19758df85e1</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 6.1.1, 5.0.6, 4.4.14, and 3.3.2 has a arbitrary File Creation/Overwrite vulnerability due to insufficient absolute path sanitization. node-tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the `preservePaths` flag is not set to `true`. This is achieved by stripping the absolute path root from any absolute file paths contained in a tar file. For example `/home/user/.bashrc` would turn into `home/user/.bashrc`. This logic was insufficient when file paths contained repeated path roots such as `////home/user/.bashrc`. `node-tar` would only strip a single path root from such paths. When given an absolute file path with repeating path roots, the resulting path (e.g. `///home/user/.bashrc`) would still resolve to an absolute path, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.2, 4.4.14, 5.0.6 and 6.1.1. Users may work around this vulnerability without upgrading by creating a custom `onentry` method which sanitizes the `entry.path` or a `filter` method which removes entries with absolute paths. See referenced GitHub Advisory for details. Be aware of CVE-2021-32803 which fixes a similar bug in later versions of tar.
<p>Publish Date: 2021-08-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32804>CVE-2021-32804</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9">https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9</a></p>
<p>Release Date: 2021-08-03</p>
<p>Fix Resolution: tar - 3.2.2, 4.4.14, 5.0.6, 6.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file fabric sdk go pkg fab ccpackager nodepackager testdata event cc package json path to vulnerable library fabric sdk go pkg fab ccpackager nodepackager testdata event cc node modules fabric shim node modules tar package json fabric sdk go pkg fab ccpackager nodepackager testdata example cc node modules fabric shim node modules tar package json fabric sdk go pkg fab ccpackager nodepackager testdata example node modules fabric shim node modules tar package json dependency hierarchy fabric shim tgz root library grpc tgz node pre gyp tgz x tar tgz vulnerable library found in head commit a href found in base branch main vulnerability details the npm package tar aka node tar before versions and has a arbitrary file creation overwrite vulnerability due to insufficient absolute path sanitization node tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the preservepaths flag is not set to true this is achieved by stripping the absolute path root from any absolute file paths contained in a tar file for example home user bashrc would turn into home user bashrc this logic was insufficient when file paths contained repeated path roots such as home user bashrc node tar would only strip a single path root from such paths when given an absolute file path with repeating path roots the resulting path e g home user bashrc would still resolve to an absolute path thus allowing arbitrary file creation and overwrite this issue was addressed in releases and users may work around this vulnerability without upgrading by creating a custom onentry method which sanitizes the entry path or a filter method which removes entries with absolute paths see referenced github advisory for details be aware of cve which fixes a similar bug in later versions of tar publish date url a href cvss score details 
base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar step up your open source security game with whitesource
| 0
|
41,285
| 10,354,398,005
|
IssuesEvent
|
2019-09-05 13:41:01
|
mwild1/luadbi
|
https://api.github.com/repos/mwild1/luadbi
|
closed
|
OSX binaries
|
Priority-Medium Type-Defect auto-migrated
|
```
Building OSX binaries is not easy. Does anyone have 32/64 bit Intel binaries
ready for download (mysql, postgre, sqlite)?
```
Original issue reported on code.google.com by `patest...@gmail.com` on 24 Jan 2013 at 2:29
|
1.0
|
OSX binaries - ```
Building OSX binaries is not easy. Does anyone have 32/64 bit Intel binaries
ready for download (mysql, postgre, sqlite)?
```
Original issue reported on code.google.com by `patest...@gmail.com` on 24 Jan 2013 at 2:29
|
defect
|
osx binaries building osx binaries is not easy does anyone have bit intel binaries ready for download mysql postgre sqlite original issue reported on code google com by patest gmail com on jan at
| 1
|
10,255
| 8,453,183,609
|
IssuesEvent
|
2018-10-20 13:11:37
|
ClangBuiltLinux/linux
|
https://api.github.com/repos/ClangBuiltLinux/linux
|
closed
|
auto update fork
|
infrastructure
|
Since this is not a hard fork, it would be good to automate pulling updates from upstream. A quick search suggests that https://backstroke.co/ is a service that may help.
|
1.0
|
auto update fork - since this is not a hard fork, it would be good to automate pulling updates from upstream. A quick search looks like https://backstroke.co/ is a service that may help.
|
non_defect
|
auto update fork since this is not a hard fork it would be good to automate pulling updates from upstream a quick search looks like is a service that may help
| 0
|
67,660
| 21,042,917,788
|
IssuesEvent
|
2022-03-31 13:48:24
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Threads: Rooms set to mentions only don't bump in "unreadness" with replies to a thread
|
T-Defect S-Major O-Occasional A-Threads Z-Labs
|
### Steps to reproduce
1. Have threads enabled
2. Engage with threads
3. Engage with other rooms, while others continue to engage with threads
4. Note the room not surfacing as unread
### Outcome
#### What did you expect?
The unread dot on the room when any reply to any thread is made. There also seems to be a bug where the last activity timestamp for the room is not being updated properly, which may be related, so the room list isn't surfacing the room as "more active than previously", so can be lost in the sea of rooms.
#### What happened instead?
The room's unread status was unchanged. Cause of "mentions only" determined from internal chat.
### Operating system
Windows 10
### Application version
Nightly (2021-03-27)
### How did you install the app?
The Internet
### Homeserver
t2l.io
### Will you send logs?
No
|
1.0
|
Threads: Rooms set to mentions only don't bump in "unreadness" with replies to a thread - ### Steps to reproduce
1. Have threads enabled
2. Engage with threads
3. Engage with other rooms, while others continue to engage with threads
4. Note the room not surfacing as unread
### Outcome
#### What did you expect?
The unread dot on the room when any reply to any thread is made. There also seems to be a bug where the last activity timestamp for the room is not being updated properly, which may be related, so the room list isn't surfacing the room as "more active than previously", so can be lost in the sea of rooms.
#### What happened instead?
The room's unread status was unchanged. Cause of "mentions only" determined from internal chat.
### Operating system
Windows 10
### Application version
Nightly (2021-03-27)
### How did you install the app?
The Internet
### Homeserver
t2l.io
### Will you send logs?
No
|
defect
|
threads rooms set to mentions only don t bump in unreadness with replies to a thread steps to reproduce have threads enabled engage with threads engage with other rooms while others continue to engage with threads note the room not surfacing as unread outcome what did you expect the unread dot on the room when any reply to any thread is made there also seems to be a bug where the last activity timestamp for the room is not being updated properly which may be related so the room list isn t surfacing the room as more active than previously so can be lost in the sea of rooms what happened instead the room s unread status was unchanged cause of mentions only determined from internal chat operating system windows application version nightly how did you install the app the internet homeserver io will you send logs no
| 1
|
125,065
| 16,706,136,971
|
IssuesEvent
|
2021-06-09 10:10:33
|
microsoft/nni
|
https://api.github.com/repos/microsoft/nni
|
closed
|
Advancing user control for trials
|
design discussion nnidev
|
This is the 3rd advanced feature from multi phase/trial discussion.
The goal of this issue is to provide more user controlled capabilities with get_next_parameter loop: concurrency, multi-phase.
|
1.0
|
Advancing user control for trials - This is the 3rd advanced feature from multi phase/trial discussion.
The goal of this issue is to provide more user controlled capabilities with get_next_parameter loop: concurrency, multi-phase.
|
non_defect
|
advancing user control for trials this is the advanced feature from multi phase trial discussion the goal of this issue is to provide more user controlled capabilities with get next parameter loop concurrency multi phase
| 0
|
7,945
| 11,137,526,713
|
IssuesEvent
|
2019-12-20 19:36:11
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
Update modal - update the text to provide more information
|
Apply Process Approved Requirements Ready State Dept.
|
Who: Applicants
What: Update the update application modal
Why: in order to provide additional information
Acceptance Criteria:
Update the update application modal to provide more information related to closing time EST and if you update make sure you submit
New content:
4. Review your application and click **Submit application**. You must submit changes before [insert closing date] at 11:59 p.m. EST.
Current Screen Shot:

|
1.0
|
Update modal - update the text to provide more information - Who: Applicants
What: Update the update application modal
Why: in order to provide additional information
Acceptance Criteria:
Update the update application modal to provide more information related to closing time EST and if you update make sure you submit
New content:
4. Review your application and click **Submit application**. You must submit changes before [insert closing date] at 11:59 p.m. EST.
Current Screen Shot:

|
non_defect
|
update modal update the text to provide more information who applicants what update the update application modal why in order to provide additional information acceptance criteria update the update application modal to provide more information related to closing time est and if you update make sure you submit new content review your application and click submit application you must submit changes before at p m est current screen shot
| 0
|
263,653
| 28,047,915,069
|
IssuesEvent
|
2023-03-29 01:35:40
|
kapseliboi/hybrixd
|
https://api.github.com/repos/kapseliboi/hybrixd
|
closed
|
CVE-2021-23440 (High) detected in set-value-2.0.1.tgz - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2021-23440 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>set-value-2.0.1.tgz</b></p></summary>
<p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p>
<p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz">https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz</a></p>
<p>Path to dependency file: /modules/transport/torrent/peer-network-fork/package.json</p>
<p>Path to vulnerable library: /modules/transport/torrent/peer-network-fork/node_modules/set-value/package.json</p>
<p>
Dependency Hierarchy:
- documentation-13.2.5.tgz (Root Library)
- micromatch-3.1.10.tgz
- snapdragon-0.8.2.tgz
- base-0.11.2.tgz
- cache-base-1.0.1.tgz
- :x: **set-value-2.0.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package set-value before <2.0.1, >=3.0.0 <4.0.1. A type confusion vulnerability can lead to a bypass of CVE-2019-10747 when the user-provided keys used in the path parameter are arrays.
Mend Note: After conducting further research, Mend has determined that all versions of set-value up to version 4.0.0 are vulnerable to CVE-2021-23440.
<p>Publish Date: 2021-09-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23440>CVE-2021-23440</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-09-12</p>
<p>Fix Resolution (set-value): 4.0.1</p>
<p>Direct dependency fix Resolution (documentation): 14.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23440 (High) detected in set-value-2.0.1.tgz - autoclosed - ## CVE-2021-23440 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>set-value-2.0.1.tgz</b></p></summary>
<p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p>
<p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz">https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz</a></p>
<p>Path to dependency file: /modules/transport/torrent/peer-network-fork/package.json</p>
<p>Path to vulnerable library: /modules/transport/torrent/peer-network-fork/node_modules/set-value/package.json</p>
<p>
Dependency Hierarchy:
- documentation-13.2.5.tgz (Root Library)
- micromatch-3.1.10.tgz
- snapdragon-0.8.2.tgz
- base-0.11.2.tgz
- cache-base-1.0.1.tgz
- :x: **set-value-2.0.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package set-value before <2.0.1, >=3.0.0 <4.0.1. A type confusion vulnerability can lead to a bypass of CVE-2019-10747 when the user-provided keys used in the path parameter are arrays.
Mend Note: After conducting further research, Mend has determined that all versions of set-value up to version 4.0.0 are vulnerable to CVE-2021-23440.
<p>Publish Date: 2021-09-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23440>CVE-2021-23440</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-09-12</p>
<p>Fix Resolution (set-value): 4.0.1</p>
<p>Direct dependency fix Resolution (documentation): 14.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in set value tgz autoclosed cve high severity vulnerability vulnerable library set value tgz create nested values and any intermediaries using dot notation a b c paths library home page a href path to dependency file modules transport torrent peer network fork package json path to vulnerable library modules transport torrent peer network fork node modules set value package json dependency hierarchy documentation tgz root library micromatch tgz snapdragon tgz base tgz cache base tgz x set value tgz vulnerable library found in base branch master vulnerability details this affects the package set value before a type confusion vulnerability can lead to a bypass of cve when the user provided keys used in the path parameter are arrays mend note after conducting further research mend has determined that all versions of set value up to version are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution set value direct dependency fix resolution documentation step up your open source security game with mend
| 0
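The dot-notation semantics that the set-value advisory row above refers to can be sketched in Python — a hypothetical re-implementation of the path-setter idea for illustration only, not the JS library's code and not a demonstration of the type-confusion bypass:

```python
def set_value(target, path, value):
    # "a.b.c" creates nested intermediaries; a list path is coerced to strings,
    # which is the kind of string-vs-array key handling the advisory concerns
    keys = path.split(".") if isinstance(path, str) else [str(k) for k in path]
    node = target
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return target

print(set_value({}, "a.b.c", 1))  # {'a': {'b': {'c': 1}}}
```

In the vulnerable JS versions, array-typed keys could sidestep the prototype-pollution guards added for string paths; the fix was to normalize and validate keys regardless of type.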
|
113,071
| 24,361,421,126
|
IssuesEvent
|
2022-10-03 12:03:02
|
AMastryukov/LD51
|
https://api.github.com/repos/AMastryukov/LD51
|
opened
|
Create Main Game Level
|
High Priority Medium Priority Code
|
**Description**
Stitch together all existing functionality into one main gameplay level
**Acceptance Criteria**
The game is playable from start to finish
The game lighting looks acceptable
The game has a working timer
The game spawns enemy waves every 10 seconds
The game swaps player loadout every 10 seconds
The game has music
The player can kill enemies
The enemies can kill the player
**Subtasks**
- [ ] Create new level scene and copy existing level scene into it
- [ ] Fix various issues with the prop positions (ceiling, etc)
- [ ] Add lighting
- [ ] Add enemy spawns
- [ ] Add HUD
- [ ] Add loadout management
- [ ] Add music
- [ ] Add sound effects
|
1.0
|
Create Main Game Level - **Description**
Stitch together all existing functionality into one main gameplay level
**Acceptance Criteria**
The game is playable from start to finish
The game lighting looks acceptable
The game has a working timer
The game spawns enemy waves every 10 seconds
The game swaps player loadout every 10 seconds
The game has music
The player can kill enemies
The enemies can kill the player
**Subtasks**
- [ ] Create new level scene and copy existing level scene into it
- [ ] Fix various issues with the prop positions (ceiling, etc)
- [ ] Add lighting
- [ ] Add enemy spawns
- [ ] Add HUD
- [ ] Add loadout management
- [ ] Add music
- [ ] Add sound effects
|
non_defect
|
create main game level description stitch together all existing functionality into one main gameplay level acceptance criteria the game is playable from start to finish the game lighting looks acceptable the game has a working timer the game spawns enemy waves every seconds the game swaps player loadout every seconds the game has music the player can kill enemies the enemies can kill the player subtasks create new level scene and copy existing level scene into it fix various issues with the prop positions ceiling etc add lighting add enemy spawns add hud add loadout management add music add sound effects
| 0
|
326,157
| 27,977,629,838
|
IssuesEvent
|
2023-03-25 19:33:08
|
Tadukooverse/TadukooForm
|
https://api.github.com/repos/Tadukooverse/TadukooForm
|
closed
|
[TESTING] Switch FormField tests to use FormFieldTest
|
Testing Tadukoo Form Tadukoo Form Fields Tadukoo Form Components
|
**What change would you like to see?**
Using the new FormFieldTest as the base class of FormField tests
**How does this change help?**
Makes it simpler when there are changes to the base FormField class to test everything
**Additional context**
N/A
|
1.0
|
[TESTING] Switch FormField tests to use FormFieldTest - **What change would you like to see?**
Using the new FormFieldTest as the base class of FormField tests
**How does this change help?**
Makes it simpler when there are changes to the base FormField class to test everything
**Additional context**
N/A
|
non_defect
|
switch formfield tests to use formfieldtest what change would you like to see using the new formfieldtest as the base class of formfield tests how does this change help makes it simpler when there are changes to the base formfield class to test everything additional context n a
| 0
|
51,314
| 13,207,430,220
|
IssuesEvent
|
2020-08-14 23:04:22
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
pnf to Icetray v3: check v3 thread safety (Trac #196)
|
Incomplete Migration Migrated from Trac defect jeb + pnf
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/196">https://code.icecube.wisc.edu/projects/icecube/ticket/196</a>, reported by blaufussand owned by tschmidt</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:57",
"_ts": "1416713877066511",
"description": "Check for any non-thread-safe stuff in IceTray v3 and clean them\nup for pnf support.",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2010-03-01T16:57:26",
"component": "jeb + pnf",
"summary": "pnf to Icetray v3: check v3 thread safety",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "tschmidt",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
pnf to Icetray v3: check v3 thread safety (Trac #196) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/196">https://code.icecube.wisc.edu/projects/icecube/ticket/196</a>, reported by blaufussand owned by tschmidt</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:57",
"_ts": "1416713877066511",
"description": "Check for any non-thread-safe stuff in IceTray v3 and clean them\nup for pnf support.",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2010-03-01T16:57:26",
"component": "jeb + pnf",
"summary": "pnf to Icetray v3: check v3 thread safety",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "tschmidt",
"type": "defect"
}
```
</p>
</details>
|
defect
|
pnf to icetray check thread safety trac migrated from json status closed changetime ts description check for any non thread safe stuff in icetray and clean them nup for pnf support reporter blaufuss cc resolution fixed time component jeb pnf summary pnf to icetray check thread safety priority normal keywords milestone owner tschmidt type defect
| 1
|
24,241
| 3,933,419,153
|
IssuesEvent
|
2016-04-25 19:03:24
|
UNH-OE/wave-tow-tank
|
https://api.github.com/repos/UNH-OE/wave-tow-tank
|
opened
|
End timing belt cover missing screw
|
defect
|
Looks like it was tough to get in (sometimes you can't do them sequentially so you have enough leverage to bend the ends). Should be able to get it with a little clamp though:

|
1.0
|
End timing belt cover missing screw - Looks like it was tough to get in (sometimes you can't do them sequentially so you have enough leverage to bend the ends). Should be able to get it with a little clamp though:

|
defect
|
end timing belt cover missing screw looks like it was tough to get in sometimes you can t do them sequentially so you have enough leverage to bend the ends should be able to get it with a little clamp though
| 1
|
28,560
| 5,290,842,364
|
IssuesEvent
|
2017-02-08 20:56:52
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
Assert raises test failure for test_ill_condition_warning
|
defect scipy.linalg
|
Testing against numpy 1.8.2, I get the following test error for master:
```
ERROR: test_ill_condition_warning (test_basic.TestSolve)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/venv/local/lib/python2.7/site-packages/scipy/linalg/tests/test_basic.py", line 714, in test_ill_condition_warning
assert_raises(RuntimeWarning, solve, a, b)
File "/venv/local/lib/python2.7/site-packages/numpy/testing/utils.py", line 1020, in assert_raises
return nose.tools.assert_raises(*args,**kwargs)
File "/usr/lib/python2.7/unittest/case.py", line 475, in assertRaises
callableObj(*args, **kwargs)
File "/venv/local/lib/python2.7/site-packages/scipy/linalg/basic.py", line 219, in solve
raise LinAlgError('Matrix is singular.')
LinAlgError: Matrix is singular.
```
https://travis-ci.org/MacPython/scipy-wheels/jobs/195612819#L488
Should `LinAlgError` also be allowed for this case?
|
1.0
|
Assert raises test failure for test_ill_condition_warning - Testing against numpy 1.8.2, I get the following test error for master:
```
ERROR: test_ill_condition_warning (test_basic.TestSolve)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/venv/local/lib/python2.7/site-packages/scipy/linalg/tests/test_basic.py", line 714, in test_ill_condition_warning
assert_raises(RuntimeWarning, solve, a, b)
File "/venv/local/lib/python2.7/site-packages/numpy/testing/utils.py", line 1020, in assert_raises
return nose.tools.assert_raises(*args,**kwargs)
File "/usr/lib/python2.7/unittest/case.py", line 475, in assertRaises
callableObj(*args, **kwargs)
File "/venv/local/lib/python2.7/site-packages/scipy/linalg/basic.py", line 219, in solve
raise LinAlgError('Matrix is singular.')
LinAlgError: Matrix is singular.
```
https://travis-ci.org/MacPython/scipy-wheels/jobs/195612819#L488
Should `LinAlgError` also be allowed for this case?
|
defect
|
assert raises test failure for test ill condition warning testing against numpy i get the following test error for master error test ill condition warning test basic testsolve traceback most recent call last file venv local lib site packages scipy linalg tests test basic py line in test ill condition warning assert raises runtimewarning solve a b file venv local lib site packages numpy testing utils py line in assert raises return nose tools assert raises args kwargs file usr lib unittest case py line in assertraises callableobj args kwargs file venv local lib site packages scipy linalg basic py line in solve raise linalgerror matrix is singular linalgerror matrix is singular should linalgerror also be allowed for this case
| 1
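The fix the scipy row above asks about — allowing `LinAlgError` in addition to `RuntimeWarning` — amounts to asserting that any one of several exception types is raised. A minimal sketch (with a stand-in exception class, not scipy's test code):

```python
def assert_raises_any(exc_types, func, *args, **kwargs):
    # pass if func raises any of the listed exception types, else fail
    try:
        func(*args, **kwargs)
    except exc_types:
        return
    raise AssertionError("expected one of %r" % (exc_types,))

class LinAlgError(Exception):
    """Stand-in for numpy.linalg.LinAlgError."""

def solve_singular():
    raise LinAlgError("Matrix is singular.")

# accepts either the ill-conditioning warning or the singular-matrix error
assert_raises_any((RuntimeWarning, LinAlgError), solve_singular)
print("ok")
```

In real test suites the same effect comes from passing a tuple of exception types to the raises-style assertion helper.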
|
510,506
| 14,791,876,119
|
IssuesEvent
|
2021-01-12 14:04:19
|
canonical-web-and-design/charmhub.io
|
https://api.github.com/repos/canonical-web-and-design/charmhub.io
|
closed
|
Summaries for charms missing, javascript console error
|
Priority: Medium
|
I think the summary is missing from the charm pages themselves - I see `index.js:35 Uncaught Error: There are no elements containing [data-js='summary'] selector.` in the javascript console
|
1.0
|
Summaries for charms missing, javascript console error - I think the summary is missing from the charm pages themselves - I see `index.js:35 Uncaught Error: There are no elements containing [data-js='summary'] selector.` in the javascript console
|
non_defect
|
summaries for charms missing javascript console error i think the summary is missing from the charm pages themselves i see index js uncaught error there are no elements containing selector in the javascript console
| 0
|
11,152
| 2,641,228,104
|
IssuesEvent
|
2015-03-11 16:40:02
|
chrsmith/html5rocks
|
https://api.github.com/repos/chrsmith/html5rocks
|
closed
|
Text clipped on Web 2.0 Logo Creator slide
|
Priority-Medium Type-Defect
|
Original [issue 13](https://code.google.com/p/html5rocks/issues/detail?id=13) created by chrsmith on 2010-06-22T23:43:39.000Z:
When the "text-shadow" slider on http://slides.html5rocks.com/#slide40 is set to 0, the "g" in "Logo" is clipped at the bottom.
This is probably a Chrome/WebKit bug.
|
1.0
|
Text clipped on Web 2.0 Logo Creator slide - Original [issue 13](https://code.google.com/p/html5rocks/issues/detail?id=13) created by chrsmith on 2010-06-22T23:43:39.000Z:
When the "text-shadow" slider on http://slides.html5rocks.com/#slide40 is set to 0, the "g" in "Logo" is clipped at the bottom.
This is probably a Chrome/WebKit bug.
|
defect
|
text clipped on web logo creator slide original created by chrsmith on when the quot text shadow quot slider on is set to the quot g quot in quot logo quot is clipped at the bottom this is probably a chrome webkit bug
| 1
|
27,517
| 11,496,360,421
|
IssuesEvent
|
2020-02-12 07:45:52
|
Kixunil/cryptoanarchy-deb-repo-builder
|
https://api.github.com/repos/Kixunil/cryptoanarchy-deb-repo-builder
|
opened
|
Security tests
|
enhancement security improvement
|
It'd be great to test security properties of the built package. Here are some ideas:
`readable_whitelist_*` contains whitelisted readable objects such as `^/var/lib/bitcoin-mainnet/`
```bash
services=bitcoin-mainnet bitcoin-rpc-proxy-mainnet ...
for service in services;
do
sed 's/ExecStart=.*$/ExecStart=find \/ -readable > \/tmp\/sec_test_readable_'$service'/' /usr/lib/systemd/system/$service.service | sed 's/Type=.*$/Type=oneshot/' > /etc/systemd/system/security_test_$service.service
systemctl daemon-reload
systemctl start security_test_$service.service
grep -vf readable_whitelist_$service /tmp/sec_test_readable_$service > /tmp_security_result_$service
if [ `wc -l < /tmp_security_result_$service` -gt 0 ];
then
echo "Too many readable files for $service"
failure = 1
fi
done
```
Same as above for writable, can be in a single loop.
|
True
|
Security tests - It'd be great to test security properties of the built package. Here are some ideas:
`readable_whitelist_*` contains whitelisted readable objects such as `^/var/lib/bitcoin-mainnet/`
```bash
services=bitcoin-mainnet bitcoin-rpc-proxy-mainnet ...
for service in services;
do
sed 's/ExecStart=.*$/ExecStart=find \/ -readable > \/tmp\/sec_test_readable_'$service'/' /usr/lib/systemd/system/$service.service | sed 's/Type=.*$/Type=oneshot/' > /etc/systemd/system/security_test_$service.service
systemctl daemon-reload
systemctl start security_test_$service.service
grep -vf readable_whitelist_$service /tmp/sec_test_readable_$service > /tmp_security_result_$service
if [ `wc -l < /tmp_security_result_$service` -gt 0 ];
then
echo "Too many readable files for $service"
failure = 1
fi
done
```
Same as above for writable, can be in a single loop.
|
non_defect
|
security tests it d be great to test security properties of the built package here are some ideas readable whitelist contains whitelisted readable objects such as var lib bitcoin mainnet bash services bitcoin mainnet bitcoin rpc proxy mainnet for service in services do sed s execstart execstart find readable tmp sec test readable service usr lib systemd system service service sed s type type oneshot etc systemd system security test service service systemctl daemon reload systemctl start security test service service grep vf readable whitelist service tmp sec test readable service tmp security result service if then echo too many readable files for service failure fi done same as above for writable can be in a single loop
| 0
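The whitelist comparison in the security-test row above (`grep -vf readable_whitelist_$service`) can be sketched in Python — a hypothetical helper over an already-collected `find` listing, with made-up example paths:

```python
import re

def unexpected_readables(found_paths, whitelist_patterns):
    # report paths readable by the service that match no whitelisted regex
    patterns = [re.compile(p) for p in whitelist_patterns]
    return [p for p in found_paths if not any(rx.search(p) for rx in patterns)]

found = ["/var/lib/bitcoin-mainnet/blocks/blk00000.dat", "/etc/shadow"]
print(unexpected_readables(found, [r"^/var/lib/bitcoin-mainnet/"]))  # ['/etc/shadow']
```

A non-empty result plays the role of the `wc -l` check in the proposed shell loop: any path outside the whitelist fails the test.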
|
13,381
| 2,754,988,853
|
IssuesEvent
|
2015-04-26 07:02:25
|
joeha480/dotify
|
https://api.github.com/repos/joeha480/dotify
|
closed
|
Handle line endings in repo
|
auto-migrated Priority-Medium Type-Defect
|
```
Windows line endings are present in many files in the repo and the git
migration added lots of -text entries in gitattributes for existing files.
This causes problems in cross platform and CI cases.
```
Original issue reported on code.google.com by `joel.hak...@mtm.se` on 9 Mar 2015 at 8:37
|
1.0
|
Handle line endings in repo - ```
Windows line endings are present in many files in the repo and the git
migration added lots of -text entries in gitattributes for existing files.
This causes problems in cross platform and CI cases.
```
Original issue reported on code.google.com by `joel.hak...@mtm.se` on 9 Mar 2015 at 8:37
|
defect
|
handle line endings in repo windows line endings are present in many files in the repo and the git migration added lots of text entries in gitattributes for existing files this causes problems in cross platform and ci cases original issue reported on code google com by joel hak mtm se on mar at
| 1
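Detecting the Windows line endings mentioned in the row above is a simple byte check; a minimal sketch of the kind of scan a cross-platform CI step might run:

```python
def has_crlf(blob: bytes) -> bool:
    # True if the file content contains Windows-style line endings
    return b"\r\n" in blob

print(has_crlf(b"line one\r\nline two\n"))  # True
print(has_crlf(b"line one\nline two\n"))   # False
```

The durable fix, as the issue implies, is normalizing the repo with `.gitattributes` (`text` / `eol` settings) rather than scanning after the fact.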
|
48,751
| 13,184,729,855
|
IssuesEvent
|
2020-08-12 19:59:21
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
DeleteUnregistered doesn't catch unregistered_class exceptions... (Trac #137)
|
IceTray Incomplete Migration Migrated from Trac defect
|
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/137
, reported by troy and owned by troy_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-01-11T21:08:15",
"description": "it catches std::exception. This should be more fine-grained.",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1547240895681654",
"component": "IceTray",
"summary": "DeleteUnregistered doesn't catch unregistered_class exceptions...",
"priority": "normal",
"keywords": "",
"time": "2008-10-01T01:08:30",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
DeleteUnregistered doesn't catch unregistered_class exceptions... (Trac #137) - <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/137
, reported by troy and owned by troy_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-01-11T21:08:15",
"description": "it catches std::exception. This should be more fine-grained.",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1547240895681654",
"component": "IceTray",
"summary": "DeleteUnregistered doesn't catch unregistered_class exceptions...",
"priority": "normal",
"keywords": "",
"time": "2008-10-01T01:08:30",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
defect
|
deleteunregistered doesn t catch unregistered class exceptions trac migrated from reported by troy and owned by troy json status closed changetime description it catches std exception this should be more fine grained reporter troy cc resolution fixed ts component icetray summary deleteunregistered doesn t catch unregistered class exceptions priority normal keywords time milestone owner troy type defect
| 1
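The fine-grained exception handling requested in the DeleteUnregistered row above — catch only the expected error instead of a blanket `std::exception` — looks like this in sketch form (stand-in names, not icetray's code):

```python
class UnregisteredClassError(Exception):
    """Stand-in for the unregistered_class exception."""

def delete_if_unregistered(load):
    # catch only the expected error; unrelated failures propagate to the caller
    try:
        load()
    except UnregisteredClassError:
        return "deleted"
    return "kept"

def unregistered():
    raise UnregisteredClassError("no dictionary entry for class")

print(delete_if_unregistered(unregistered))   # deleted
print(delete_if_unregistered(lambda: None))   # kept
```

Catching the narrow type keeps genuine bugs visible while still tolerating the one condition the module is designed to absorb.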
|
66,347
| 6,996,775,690
|
IssuesEvent
|
2017-12-16 05:12:49
|
TEAMMATES/teammates
|
https://api.github.com/repos/TEAMMATES/teammates
|
closed
|
Remove MashupPageUiTest
|
a-Testing d.Contributors e.2 p.Low s.OnHold
|
The mashup page is badly broken and the test is not used for its intended purpose anyway. Let's get rid of it.
|
1.0
|
Remove MashupPageUiTest - The mashup page is badly broken and the test is not used for its intended purpose anyway. Let's get rid of it.
|
non_defect
|
remove mashuppageuitest the mashup page is badly broken and the test is not used for its intended purpose anyway lets get rid of it
| 0
|
799,711
| 28,312,428,978
|
IssuesEvent
|
2023-04-10 16:35:06
|
CrowdDotDev/crowd.dev
|
https://api.github.com/repos/CrowdDotDev/crowd.dev
|
closed
|
[C-964] Missing to parse member's bio in Merge Members Suggestions page
|
Bug Medium priority
|
**Problem**
Currently, in the Merge Members Suggestions page, the member's bio is not being parsed as html. It's displaying the content as plain text.
**Solution**
In `member-merge-suggestions-page.vue` the code to display `scope.row.attributes.bio` should be using the newly created component `member-bio.vue` . This one has the logic to parse the content.
Suggestion: I think we should also limit the number of characters that is displayed (I believe this is not implemented right now). For example, we could use a "Show more"/"Show less" button below the bio to expand/collapse content if it overflows x characters
<sub>From [SyncLinear.com](https://synclinear.com) | [C-964](https://linear.app/crowddotdev/issue/C-964/missing-to-parse-members-bio-in-merge-members-suggestions-page)</sub>
|
1.0
|
[C-964] Missing to parse member's bio in Merge Members Suggestions page - **Problem**
Currently, in the Merge Members Suggestions page, the member's bio is not being parsed as html. It's displaying the content as plain text.
**Solution**
In `member-merge-suggestions-page.vue` the code to display `scope.row.attributes.bio` should be using the newly created component `member-bio.vue` . This one has the logic to parse the content.
Suggestion: I think we should also limit the number of characters that is displayed (I believe this is not implemented right now). For example, we could use a "Show more"/"Show less" button below the bio to expand/collapse content if it overflows x characters
<sub>From [SyncLinear.com](https://synclinear.com) | [C-964](https://linear.app/crowddotdev/issue/C-964/missing-to-parse-members-bio-in-merge-members-suggestions-page)</sub>
|
non_defect
|
missing to parse member s bio in merge members suggestions page problem currently in the merge members suggestions page the member s bio is not being parsed as html it s displaying the content as plain text solution in member merge suggestions page vue the code to display scope row attributes bio should be using the newly created component member bio vue this one has the logic to parse the content suggestion i think we should also limit the number of characters that is displayed i believe this is not implemented right now for example we could use a show more show less button below the bio to expand collapse content if it overflows x characters from
| 0
|
70,710
| 3,338,268,261
|
IssuesEvent
|
2015-11-13 05:52:53
|
openshift/origin
|
https://api.github.com/repos/openshift/origin
|
closed
|
openshift/hello-openshift:latest does not work on older systems
|
priority/P2
|
On some platforms (eg, Fedora 21), the openshift/hello-openshift:latest image does not run:
> docker run openshift/hello-openshift
no such file or directory
Error response from daemon: Cannot start container d830a0c9ac13e57e3765d8b3ab4089bad9137e00c7a98eec1457073c5cc057d8: [8] System error: no such file or directory
The journal doesn't tell much, even with "-D":
docker[27514]: level=info msg="POST /v1.20/containers/5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0/start"
docker[27514]: level=debug msg="activateDeviceIfNeeded(5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=warning msg="exit status 1"
docker[27514]: level=debug msg="attach: stdout: end"
docker[27514]: level=debug msg="attach: stderr: end"
docker[27514]: level=debug msg="[devmapper] UnmountDevice(hash=5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=debug msg="[devmapper] Unmount(/var/lib/docker/devicemapper/mnt/5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=debug msg="[devmapper] Unmount done"
docker[27514]: level=debug msg="[devmapper] deactivateDevice(5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=debug msg="[devmapper] removeDevice START(docker-253:1-786882-5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=debug msg="[devmapper] removeDevice END(docker-253:1-786882-5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=debug msg="[devmapper] deactivateDevice END(5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=debug msg="[devmapper] UnmountDevice(hash=5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0) END"
docker[27514]: level=debug msg="[devmapper] UnmountDevice(hash=5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=debug msg="[devmapper] UnmountDevice(hash=5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0) END"
docker[27514]: level=error msg="Error unmounting device 5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0: UnmountDevice: device not-mounted id 5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0"
docker[27514]: level=error msg="Handler for POST /containers/{name:.*}/start returned error: Cannot start container 5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0: [8] System error: no such file or directory"
docker[27514]: level=error msg="HTTP Error" err="Cannot start container 5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0: [8] System error: no such file or directory" statusCode=404
openshift/hello-openshift:v1.0.6 works fine.
|
1.0
|
openshift/hello-openshift:latest does not work on older systems - On some platforms (eg, Fedora 21), the openshift/hello-openshift:latest image does not run:
> docker run openshift/hello-openshift
no such file or directory
Error response from daemon: Cannot start container d830a0c9ac13e57e3765d8b3ab4089bad9137e00c7a98eec1457073c5cc057d8: [8] System error: no such file or directory
The journal doesn't tell much, even with "-D":
docker[27514]: level=info msg="POST /v1.20/containers/5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0/start"
docker[27514]: level=debug msg="activateDeviceIfNeeded(5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=warning msg="exit status 1"
docker[27514]: level=debug msg="attach: stdout: end"
docker[27514]: level=debug msg="attach: stderr: end"
docker[27514]: level=debug msg="[devmapper] UnmountDevice(hash=5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=debug msg="[devmapper] Unmount(/var/lib/docker/devicemapper/mnt/5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=debug msg="[devmapper] Unmount done"
docker[27514]: level=debug msg="[devmapper] deactivateDevice(5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=debug msg="[devmapper] removeDevice START(docker-253:1-786882-5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=debug msg="[devmapper] removeDevice END(docker-253:1-786882-5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=debug msg="[devmapper] deactivateDevice END(5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=debug msg="[devmapper] UnmountDevice(hash=5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0) END"
docker[27514]: level=debug msg="[devmapper] UnmountDevice(hash=5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0)"
docker[27514]: level=debug msg="[devmapper] UnmountDevice(hash=5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0) END"
docker[27514]: level=error msg="Error unmounting device 5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0: UnmountDevice: device not-mounted id 5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0"
docker[27514]: level=error msg="Handler for POST /containers/{name:.*}/start returned error: Cannot start container 5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0: [8] System error: no such file or directory"
docker[27514]: level=error msg="HTTP Error" err="Cannot start container 5c6b8abab42717bcabb0fdac30c96025eac3f4537517b04ad894e1129c2b49b0: [8] System error: no such file or directory" statusCode=404
openshift/hello-openshift:v1.0.6 works fine.
|
non_defect
|
openshift hello openshift latest does not work on older systems on some platforms eg fedora the openshift hello openshift latest image does not run docker run openshift hello openshift no such file or directory error response from daemon cannot start container system error no such file or directory the journal doesn t tell much even with d docker level info msg post containers start docker level debug msg activatedeviceifneeded docker level warning msg exit status docker level debug msg attach stdout end docker level debug msg attach stderr end docker level debug msg unmountdevice hash docker level debug msg unmount var lib docker devicemapper mnt docker level debug msg unmount done docker level debug msg deactivatedevice docker level debug msg removedevice start docker docker level debug msg removedevice end docker docker level debug msg deactivatedevice end docker level debug msg unmountdevice hash end docker level debug msg unmountdevice hash docker level debug msg unmountdevice hash end docker level error msg error unmounting device unmountdevice device not mounted id docker level error msg handler for post containers name start returned error cannot start container system error no such file or directory docker level error msg http error err cannot start container system error no such file or directory statuscode openshift hello openshift works fine
| 0
|
467,635
| 13,451,829,082
|
IssuesEvent
|
2020-09-08 20:57:04
|
Reckue/post-api
|
https://api.github.com/repos/Reckue/post-api
|
opened
|
Create status validation for nodes
|
priority:normal type:task
|
Please, create status validation for nodes in create and update methods of NodeServiceRealization.
For example, when the node has created or updated it should receive automatically status MODERATED. And it would not be necessary to generate it yourself in a field while the node is being created or updated.
Currently, the NodeRequest does not have a "status" field, but it is filled with "null".
Please, also delete the field "status" from PostRequest (it has already generated automatically).
When you fix it, please describe it in already existed docs.
|
1.0
|
Create status validation for nodes - Please, create status validation for nodes in create and update methods of NodeServiceRealization.
For example, when the node has created or updated it should receive automatically status MODERATED. And it would not be necessary to generate it yourself in a field while the node is being created or updated.
Currently, the NodeRequest does not have a "status" field, but it is filled with "null".
Please, also delete the field "status" from PostRequest (it has already generated automatically).
When you fix it, please describe it in already existed docs.
|
non_defect
|
create status validation for nodes please create status validation for nodes in create and update methods of nodeservicerealization for example when the node has created or updated it should receive automatically status moderated and it would not be necessary to generate it yourself in a field while the node is being created or updated currently the noderequest does not have a status field but it is filled with null please also delete the field status from postrequest it has already generated automatically when you fix it please describe it in already existed docs
| 0
|
18,863
| 4,318,673,942
|
IssuesEvent
|
2016-07-24 06:17:30
|
webpack/webpack.io
|
https://api.github.com/repos/webpack/webpack.io
|
closed
|
Support - Common Problems
|
documentation enhancement
|
[Stub](https://github.com/webpack/webpack.io/blob/master/src/content/support/common-problems.md).
Feel free to comment here if you have ideas on what this guide should cover. Link to potential resources too.
|
1.0
|
Support - Common Problems - [Stub](https://github.com/webpack/webpack.io/blob/master/src/content/support/common-problems.md).
Feel free to comment here if you have ideas on what this guide should cover. Link to potential resources too.
|
non_defect
|
support common problems feel free to comment here if you have ideas on what this guide should cover link to potential resources too
| 0
|
34,415
| 14,400,706,293
|
IssuesEvent
|
2020-12-03 12:45:41
|
astrolabsoftware/fink-broker
|
https://api.github.com/repos/astrolabsoftware/fink-broker
|
closed
|
[raw2science] Compute DC_mag and DC_err before running science modules
|
apache spark performance services
|
**Describe the issue**
ZTF provides PSF magnitude and error, but many science modules make use of DC magnitude and DC error. Currently, the conversion between one to another is done inside each science module that requires these quantities - so this is a redundant computation.
Action item: pre-compute DC mag and DC error before (pandas_udf), and use it directly for each science module (hence both raw2science.py and science modules in fink-science need to be modified). Do not forget to drop these quantities after all science modules as they must not be redistributed.
|
1.0
|
[raw2science] Compute DC_mag and DC_err before running science modules - **Describe the issue**
ZTF provides PSF magnitude and error, but many science modules make use of DC magnitude and DC error. Currently, the conversion between one to another is done inside each science module that requires these quantities - so this is a redundant computation.
Action item: pre-compute DC mag and DC error before (pandas_udf), and use it directly for each science module (hence both raw2science.py and science modules in fink-science need to be modified). Do not forget to drop these quantities after all science modules as they must not be redistributed.
|
non_defect
|
compute dc mag and dc err before running science modules describe the issue ztf provides psf magnitude and error but many science modules make use of dc magnitude and dc error currently the conversion between one to another is done inside each science module that requires these quantities so this is a redundant computation action item pre compute dc mag and dc error before pandas udf and use it directly for each science module hence both py and science modules in fink science need to be modified do not forget to drop these quantities after all science modules as they must not be redistributed
| 0
|
23,317
| 3,791,513,691
|
IssuesEvent
|
2016-03-22 03:23:33
|
sky-map-team/stardroid
|
https://api.github.com/repos/sky-map-team/stardroid
|
closed
|
Declare android:required="false" for android.hardware.location.gps feature
|
auto-migrated Priority-Medium Type-Defect
|
```
This is a feature request from an OEM who ships non-GPS Android device.
Sky Map uses ACCESS_FINE_LOCATION permission and it implies
android.hardware.location.gps feature.
Thus this app is not shown in Android Market on their device because of Market
feature filtering.
http://developer.android.com/guide/topics/manifest/uses-feature-element.html#permissions
They want us to set android:required="false" so user can download the app from
Android Market.
```
Original issue reported on code.google.com by `kevin.se...@gmail.com` on 24 Jan 2012 at 12:29
|
1.0
|
Declare android:required="false" for android.hardware.location.gps feature - ```
This is a feature request from an OEM who ships non-GPS Android device.
Sky Map uses ACCESS_FINE_LOCATION permission and it implies
android.hardware.location.gps feature.
Thus this app is not shown in Android Market on their device because of Market
feature filtering.
http://developer.android.com/guide/topics/manifest/uses-feature-element.html#permissions
They want us to set android:required="false" so user can download the app from
Android Market.
```
Original issue reported on code.google.com by `kevin.se...@gmail.com` on 24 Jan 2012 at 12:29
|
defect
|
declare android required false for android hardware location gps feature this is a feature request from an oem who ships non gps android device sky map uses access fine location permission and it implies android hardware location gps feature thus this app is not shown in android market on their device because of market feature filtering missions they want us to set android required false so user can download the app from android market original issue reported on code google com by kevin se gmail com on jan at
| 1
|
23,007
| 3,737,598,748
|
IssuesEvent
|
2016-03-08 19:50:10
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
opened
|
Possible issue with calculating Flux with displacements
|
C: Modules P: critical T: defect
|
### Description of the enhancement or error report
We have a user reporting an unexpected result of a flux calculation when using displaced mesh with a navier stokes simulation.
### Rationale for the enhancement or information for reproducing the error
Problem was found using the navier stokes module and can be reproduced with the input file attached in the following discussion:
https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/moose-users/O14Y3ibw5Ro/58tWm-fwBAAJ
### Identified impact
(i.e. Internal object changes, limited interface changes, public API change, or a list of specific applications impacted)
Calculations on displaced mesh could be compromised.
|
1.0
|
Possible issue with calculating Flux with displacements - ### Description of the enhancement or error report
We have a user reporting an unexpected result of a flux calculation when using displaced mesh with a navier stokes simulation.
### Rationale for the enhancement or information for reproducing the error
Problem was found using the navier stokes module and can be reproduced with the input file attached in the following discussion:
https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/moose-users/O14Y3ibw5Ro/58tWm-fwBAAJ
### Identified impact
(i.e. Internal object changes, limited interface changes, public API change, or a list of specific applications impacted)
Calculations on displaced mesh could be compromised.
|
defect
|
possible issue with calculating flux with displacements description of the enhancement or error report we have a user reporting an unexpected result of a flux calculation when using displaced mesh with a navier stokes simulation rationale for the enhancement or information for reproducing the error problem was found using the navier stokes module and can be reproduced with the input file attached in the following discussion identified impact i e internal object changes limited interface changes public api change or a list of specific applications impacted calculations on displaced mesh could be compromised
| 1
|
154,553
| 24,315,948,565
|
IssuesEvent
|
2022-09-30 06:17:52
|
Regalis11/Barotrauma
|
https://api.github.com/repos/Regalis11/Barotrauma
|
closed
|
Destroying reactor without fuel rods in them still creates a nuclear detonation.
|
Design Unstable
|
Destroying reactors with grenades or other means than water damage will cause a nuclear explosion even without fuel rods in them. the lack of consistency between 0% from water damage or 0% from any damage creates some confusion for the player.
Furthermore you're unable to safely destroy reactors as an objective if the outpost reactors are generated too near to the submarine, thus causing damage and radiation poisoning to people inside said submarine, or said expectation that it should be safe to grenade the reactor if it has no fuel rods in it.
Version : V0.1300.0.2
|
1.0
|
Destroying reactor without fuel rods in them still creates a nuclear detonation. - Destroying reactors with grenades or other means than water damage will cause a nuclear explosion even without fuel rods in them. the lack of consistency between 0% from water damage or 0% from any damage creates some confusion for the player.
Furthermore you're unable to safely destroy reactors as an objective if the outpost reactors are generated too near to the submarine, thus causing damage and radiation poisoning to people inside said submarine, or said expectation that it should be safe to grenade the reactor if it has no fuel rods in it.
Version : V0.1300.0.2
|
non_defect
|
destroying reactor without fuel rods in them still creates a nuclear detonation destroying reactors with grenades or other means than water damage will cause a nuclear explosion even without fuel rods in them the lack of consistency between from water damage or from any damage creates some confusion for the player furthermore you re unable to safely destroy reactors as an objective if the outpost reactors are generated too near to the submarine thus causing damage and radiation poisoning to people inside said submarine or said expectation that it should be safe to grenade the reactor if it has no fuel rods in it version
| 0
|
298,329
| 9,198,918,385
|
IssuesEvent
|
2019-03-07 13:51:06
|
telstra/open-kilda
|
https://api.github.com/repos/telstra/open-kilda
|
closed
|
Priority rerouting
|
area/api priority/1-highest
|
# Description
It is required to prioritize flows in case a reroute event occur. So we need to introduce a new parameter for a flow, and take this parameter when we issue a reroute requests
# Details
Add a new parameter `priority` to a flow. Integer, Optional, default configurable.
A logic for rerouting (simplified):
1 Find all affected flows
2 Sort them by priority ascending, creation time ascending
3 Send reroute requests based on this ordered list
Example: a flow with priority=555 will be scheduled for rerouting earlier than flow with priority=777
kafka key = correlation-id
implementation area: reroute throttling
|
1.0
|
Priority rerouting - # Description
It is required to prioritize flows in case a reroute event occur. So we need to introduce a new parameter for a flow, and take this parameter when we issue a reroute requests
# Details
Add a new parameter `priority` to a flow. Integer, Optional, default configurable.
A logic for rerouting (simplified):
1 Find all affected flows
2 Sort them by priority ascending, creation time ascending
3 Send reroute requests based on this ordered list
Example: a flow with priority=555 will be scheduled for rerouting earlier than flow with priority=777
kafka key = correlation-id
implementation area: reroute throttling
|
non_defect
|
priority rerouting description it is required to prioritize flows in case a reroute event occur so we need to introduce a new parameter for a flow and take this parameter when we issue a reroute requests details add a new parameter priority to a flow integer optional default configurable a logic for rerouting simplified find all affected flows sort them by priority ascending creation time ascending send reroute requests based on this ordered list example a flow with priority will be scheduled for rerouting earlier than flow with priority kafka key correlation id implementation area reroute throttling
| 0
|
22,772
| 3,697,704,825
|
IssuesEvent
|
2016-02-27 21:11:10
|
shy0013/reaver-wps
|
https://api.github.com/repos/shy0013/reaver-wps
|
reopened
|
Receiving random PSK password
|
auto-migrated Priority-Triage Type-Defect
|
```
Some APs that tried, ended up being shown random PSK.
But through the trying wpa_supplicant, I can get the correct psk.
Only warning about the problem, I have knowledge in programming, if you need me
to do some tests.
Thx
```
Original issue reported on code.google.com by `gcarval...@gmail.com` on 24 Jan 2012 at 10:33
* Merged into: #138
|
1.0
|
Receiving random PSK password - ```
Some APs that tried, ended up being shown random PSK.
But through the trying wpa_supplicant, I can get the correct psk.
Only warning about the problem, I have knowledge in programming, if you need me
to do some tests.
Thx
```
Original issue reported on code.google.com by `gcarval...@gmail.com` on 24 Jan 2012 at 10:33
* Merged into: #138
|
defect
|
receiving random psk password some aps that tried ended up being shown random psk but through the trying wpa supplicant i can get the correct psk only warning about the problem i have knowledge in programming if you need me to do some tests thx original issue reported on code google com by gcarval gmail com on jan at merged into
| 1
|
412,071
| 12,034,978,879
|
IssuesEvent
|
2020-04-13 17:02:57
|
InstituteforDiseaseModeling/covasim
|
https://api.github.com/repos/InstituteforDiseaseModeling/covasim
|
closed
|
Scrape workplace/industry data immediately
|
approved highpriority
|
Got a request to look at reopening workplaces, perhaps by industry type or size. So, I could use a pair of hands or more to look at https://www.bls.gov/oes/current/oes_42660.htm and grab data from there on workplaces. Specifically data on age of workers, types of industry, workplace sizes (by industry too if available), at the finest granularity you can find (this page points at the Seattle-Tacoma-Bellevue metro area). Simple csvs that can be read in as pandas tables would be great. Please reach out if you can do this and are not working on modeling itself.
|
1.0
|
Scrape workplace/industry data immediately - Got a request to look at reopening workplaces, perhaps by industry type or size. So, I could use a pair of hands or more to look at https://www.bls.gov/oes/current/oes_42660.htm and grab data from there on workplaces. Specifically data on age of workers, types of industry, workplace sizes (by industry too if available), at the finest granularity you can find (this page points at the Seattle-Tacoma-Bellevue metro area). Simple csvs that can be read in as pandas tables would be great. Please reach out if you can do this and are not working on modeling itself.
|
non_defect
|
scrape workplace industry data immediately got a request to look at reopening workplaces perhaps by industry type or size so i could use a pair of hands or more to look at and grab data from there on workplaces specifically data on age of workers types of industry workplace sizes by industry too if available at the finest granularity you can find this page points at the seattle tacoma bellevue metro area simple csvs that can be read in as pandas tables would be great please reach out if you can do this and are not working on modeling itself
| 0
|
21,955
| 6,227,618,214
|
IssuesEvent
|
2017-07-10 21:09:53
|
XceedBoucherS/TestImport5
|
https://api.github.com/repos/XceedBoucherS/TestImport5
|
closed
|
Please consider applying AllowPartiallyTrustedCallers attribute
|
CodePlex
|
<b>acolin[CodePlex]</b> <br />Please consider applying AllowPartiallyTrustedCallers attribute to the Extended WPF Toolkit assembly. Note that this attribute is applied to WPFToolkit. This MSDN page [1] has some considerations regarding the attribute.
Although for the subset of functionality in Extended WPF Toolkit that requires full trust (eg. MessageBox) this attribute is irrelevant, for the other functionality to be available to consumers running under partial trust this attribute would need to be applied.
Thank you for your consideration.
[1]
http://msdn.microsoft.com/en-us/library/a4ke871b%28v=VS.85%29.aspx
|
1.0
|
Please consider applying AllowPartiallyTrustedCallers attribute - <b>acolin[CodePlex]</b> <br />Please consider applying AllowPartiallyTrustedCallers attribute to the Extended WPF Toolkit assembly. Note that this attribute is applied to WPFToolkit. This MSDN page [1] has some considerations regarding the attribute.
Although for the subset of functionality in Extended WPF Toolkit that requires full trust (eg. MessageBox) this attribute is irrelevant, for the other functionality to be available to consumers running under partial trust this attribute would need to be applied.
Thank you for your consideration.
[1]
http://msdn.microsoft.com/en-us/library/a4ke871b%28v=VS.85%29.aspx
|
non_defect
|
please consider applying allowpartiallytrustedcallers attribute acolin please consider applying allowpartiallytrustedcallers attribute to the extended wpf toolkit assembly note that this attribute is applied to wpftoolkit this msdn page has some considerations regarding the attribute nbsp although for the subset of functionality in extended wpf toolkit that requires full trust eg messagebox this attribute is irrelevant for the other functionality to be available to consumers running under partial trust this attribute would need to be applied nbsp thank you for your consideration nbsp
| 0
|
75,717
| 26,012,104,785
|
IssuesEvent
|
2022-12-21 03:27:53
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
closed
|
.zfs/snapshot/ directories almost, but not quite, gone after being destroyed
|
Type: Defect Status: Stale
|
(Given my luck with not finding prior cases, I'm not hopeful, but I searched a number of keywords and found a number of odd behaviors of .zfs/snapshot in issues, but none seemed to be this one.)
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Debian
Distribution Version | 10.10
Kernel Version | 4.19.0-17-amd64
Architecture | x86_64
OpenZFS Version | v2.0.3-8~bpo10+1
### Describe the problem you're observing
I was automating replicating the contents of individual snapshots onto a new dataset without using send|recv, and I ran into a strange case.
I was doing something like
```
for SNAP in snaplist; do if [ ! -e DST/.zfs/snapshot/$SNAP ]; then rsync -a --delete-after SRC/.zfs/snapshot/$SNAP/ DST/ && zfs snapshot DST@SNAP;fi;done;
```
and to my surprise, I was winding up with it sometimes destroying all the files on DST and then syncing them anew.
It turns out what was happening was that, if SNAP was mounted in .zfs/snapshot/SNAP, automatic snapshot curation was destroying SNAP between list time and sync time, and that led to the following exciting behavior:
```
# ls SRC/.zfs/snapshot/DELETEDSNAP/ && echo "ONFIRE"
ONFIRE
# ls SRC/.zfs/snapshot/DELETEDSNAP/. && echo "ONFIRE"
ls: cannot access 'SRC/.zfs/snapshot/DELETEDSNAP/.': Object is remote
# rsync -avn --inplace --delete-after SRC/.zfs/snapshot/DELETEDSNAP/ /tmp/dummy/ && echo "EVERYTHING'S FINE";
building file list ... done
./
deleting c
deleting b
deleting a
sent 54 bytes received 18 bytes 144.00 bytes/sec
total size is 0 speedup is 0.00 (DRY RUN)
EVERYTHING'S FINE
# mount | grep SNAP
#
```
Whoops.
There are obviously any number of ways I can avoid this - checking for errors in existence on both sides before syncing, checking for errors from the above stat call before same, using `zfs hold`...but it was still pretty surprising, to me, so I figured I'd ask and see if this is expected but counterintuitive or a bug, and at least give other people who hit it something to find.
### Include any warning/errors/backtraces from the system logs
Well, sometimes my comment in #11632 happens a while after one of the aforementioned reproducing runs. So that's "great".
|
1.0
|
.zfs/snapshot/ directories almost, but not quite, gone after being destroyed - (Given my luck with not finding prior cases, I'm not hopeful, but I searched a number of keywords and found a number of odd behaviors of .zfs/snapshot in issues, but none seemed to be this one.)
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Debian
Distribution Version | 10.10
Kernel Version | 4.19.0-17-amd64
Architecture | x86_64
OpenZFS Version | v2.0.3-8~bpo10+1
### Describe the problem you're observing
I was automating replicating the contents of individual snapshots onto a new dataset without using send|recv, and I ran into a strange case.
I was doing something like
```
for SNAP in snaplist; do if [ ! -e DST/.zfs/snapshot/$SNAP ]; then rsync -a --delete-after SRC/.zfs/snapshot/$SNAP/ DST/ && zfs snapshot DST@SNAP;fi;done;
```
and to my surprise, I was winding up with it sometimes destroying all the files on DST and then syncing them anew.
It turns out what was happening was that, if SNAP was mounted in .zfs/snapshot/SNAP, automatic snapshot curation was destroying SNAP between list time and sync time, and that led to the following exciting behavior:
```
# ls SRC/.zfs/snapshot/DELETEDSNAP/ && echo "ONFIRE"
ONFIRE
# ls SRC/.zfs/snapshot/DELETEDSNAP/. && echo "ONFIRE"
ls: cannot access 'SRC/.zfs/snapshot/DELETEDSNAP/.': Object is remote
# rsync -avn --inplace --delete-after SRC/.zfs/snapshot/DELETEDSNAP/ /tmp/dummy/ && echo "EVERYTHING'S FINE";
building file list ... done
./
deleting c
deleting b
deleting a
sent 54 bytes received 18 bytes 144.00 bytes/sec
total size is 0 speedup is 0.00 (DRY RUN)
EVERYTHING'S FINE
# mount | grep SNAP
#
```
Whoops.
There are obviously any number of ways I can avoid this - checking for errors in existence on both sides before syncing, checking for errors from the above stat call before same, using `zfs hold`...but it was still pretty surprising, to me, so I figured I'd ask and see if this is expected but counterintuitive or a bug, and at least give other people who hit it something to find.
### Include any warning/errors/backtraces from the system logs
Well, sometimes my comment in #11632 happens a while after one of the aforementioned reproducing runs. So that's "great".
|
defect
|
zfs snapshot directories almost but not quite gone after being destroyed given my luck with not finding prior cases i m not hopeful but i searched a number of keywords and found a number of odd behaviors of zfs snapshot in issues but none seemed to be this one system information type version name distribution name debian distribution version kernel version architecture openzfs version describe the problem you re observing i was automating replicating the contents of individual snapshots onto a new dataset without using send recv and i ran into a strange case i was doing something like for snap in snaplist do if then rsync a delete after src zfs snapshot snap dst zfs snapshot dst snap fi done and to my surprise i was winding up with it sometimes destroying all the files on dst and then syncing them anew it turns out what was happening was that if snap was mounted in zfs snapshot snap automatic snapshot curation was destroying snap between list time and sync time and that led to the following exciting behavior ls src zfs snapshot deletedsnap echo onfire onfire ls src zfs snapshot deletedsnap echo onfire ls cannot access src zfs snapshot deletedsnap object is remote rsync avn inplace delete after src zfs snapshot deletedsnap tmp dummy echo everything s fine building file list done deleting c deleting b deleting a sent bytes received bytes bytes sec total size is speedup is dry run everything s fine mount grep snap whoops there are obviously any number of ways i can avoid this checking for errors in existence on both sides before syncing checking for errors from the above stat call before same using zfs hold but it was still pretty surprising to me so i figured i d ask and see if this is expected but counterintuitive or a bug and at least give other people who hit it something to find include any warning errors backtraces from the system logs well sometimes my comment in happens a while after one of the aforementioned reproducing runs so that s great
| 1
|
37,684
| 8,474,800,442
|
IssuesEvent
|
2018-10-24 17:07:59
|
brainvisa/testbidon
|
https://api.github.com/repos/brainvisa/testbidon
|
closed
|
somanifti partial reading leaves open file
|
Category: soma-io Component: Resolution Priority: Normal Status: Closed Tracker: Defect
|
---
Author Name: **Riviere, Denis** (Riviere, Denis)
Original Redmine Issue: 13844, https://bioproj.extra.cea.fr/redmine/issues/13844
Original Date: 2015-11-14
---
Reading a full NIFTI volume is OK, but partial reading leaves an open file descriptor on the file.
|
1.0
|
|
defect
|
somanifti partial reading leaves open file author name riviere denis riviere denis original redmine issue original date reading a full nifti volume is ok but partial reading leaves an open file descriptor on the file
| 1
|
36,228
| 7,868,887,483
|
IssuesEvent
|
2018-06-24 06:12:33
|
StrikeNP/trac_test
|
https://api.github.com/repos/StrikeNP/trac_test
|
closed
|
Overall precipitation fraction inconsistent with within-component precipitation fraction (Trac #742)
|
Migrated from Trac clubb_src defect raut@uwm.edu
|
**Introduction**
While playing with CLUBB today, I noticed that the invariant (or at least I thought it was):
precip_frac = precip_frac_1 * mixt_frac + precip_frac_2 * (1 - mixt_frac)
is actually untrue quite often in the code.
I'm not sure if this is intentional, a major bug, or a bug that nobody cares about.
Attachments:
[plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff)
[plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff)
[plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff)
[plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff)
[plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff)
[plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff)
[plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff)
[plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff)
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/742
```json
{
"status": "closed",
"changetime": "2015-08-18T20:54:18",
"description": "'''Introduction'''\n\nWhile playing with CLUBB today, I noticed that the invariant (or at least I thought it was):\n\nprecip_frac = precip_frac_1 * mixt_frac + precip_frac_2 * (1 - mixt_frac)\n\nis actually untrue quite often in the code.\n\nI'm not sure if this is intentional, a major bug, or a bug that nobody cares about.",
"reporter": "raut@uwm.edu",
"cc": "vlarson@uwm.edu, bmg2@uwm.edu",
"resolution": "fixed",
"_ts": "1439931258300502",
"component": "clubb_src",
"summary": "Overall precipitation fraction inconsistent with within-component precipitation fraction",
"priority": "minor",
"keywords": "",
"time": "2014-10-04T06:58:15",
"milestone": "4. Fix bugs",
"owner": "raut@uwm.edu",
"type": "defect"
}
```
|
1.0
|
|
defect
|
overall precipitation fraction inconsistent with within component precipitation fraction trac introduction while playing with clubb today i noticed that the invariant or at least i thought it was precip frac precip frac mixt frac precip frac mixt frac is actually untrue quite often in the code i m not sure if this is intentional a major bug or a bug that nobody cares about attachments migrated from json status closed changetime description introduction n nwhile playing with clubb today i noticed that the invariant or at least i thought it was n nprecip frac precip frac mixt frac precip frac mixt frac n nis actually untrue quite often in the code n ni m not sure if this is intentional a major bug or a bug that nobody cares about reporter raut uwm edu cc vlarson uwm edu uwm edu resolution fixed ts component clubb src summary overall precipitation fraction inconsistent with within component precipitation fraction priority minor keywords time milestone fix bugs owner raut uwm edu type defect
| 1
|
236,391
| 26,009,786,157
|
IssuesEvent
|
2022-12-20 23:47:28
|
ManageIQ/miq_bot
|
https://api.github.com/repos/ManageIQ/miq_bot
|
closed
|
CVE-2022-23476 (High) detected in nokogiri-1.13.9-x86_64-linux.gem - autoclosed
|
security vulnerability
|
## CVE-2022-23476 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nokogiri-1.13.9-x86_64-linux.gem</b></p></summary>
<p>Nokogiri (鋸) makes it easy and painless to work with XML and HTML from Ruby. It provides a
sensible, easy-to-understand API for reading, writing, modifying, and querying documents. It is
fast and standards-compliant by relying on native parsers like libxml2 (C) and xerces (Java).
</p>
<p>Library home page: <a href="https://rubygems.org/gems/nokogiri-1.13.9-x86_64-linux.gem">https://rubygems.org/gems/nokogiri-1.13.9-x86_64-linux.gem</a></p>
<p>Path to dependency file: /Gemfile.lock</p>
<p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/nokogiri-1.13.9.gem</p>
<p>
Dependency Hierarchy:
- rails-5.2.8.1.gem (Root Library)
- railties-5.2.8.1.gem
- actionpack-5.2.8.1.gem
- actionview-5.2.8.1.gem
- rails-dom-testing-2.0.3.gem
- :x: **nokogiri-1.13.9-x86_64-linux.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ManageIQ/miq_bot/commit/78e1ce1b0804926332f04865191e49efe4065183">78e1ce1b0804926332f04865191e49efe4065183</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Nokogiri is an open source XML and HTML library for the Ruby programming language. Nokogiri `1.13.8` and `1.13.9` fail to check the return value from `xmlTextReaderExpand` in the method `Nokogiri::XML::Reader#attribute_hash`. This can lead to a null pointer exception when invalid markup is being parsed. For applications using `XML::Reader` to parse untrusted inputs, this may potentially be a vector for a denial of service attack. Users are advised to upgrade to Nokogiri `>= 1.13.10`. Users may be able to search their code for calls to either `XML::Reader#attributes` or `XML::Reader#attribute_hash` to determine if they are affected.
<p>Publish Date: 2022-12-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23476>CVE-2022-23476</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-12-08</p>
<p>Fix Resolution: nokogiri - 1.13.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
|
non_defect
|
cve high detected in nokogiri linux gem autoclosed cve high severity vulnerability vulnerable library nokogiri linux gem nokogiri 鋸 makes it easy and painless to work with xml and html from ruby it provides a sensible easy to understand api for reading writing modifying and querying documents it is fast and standards compliant by relying on native parsers like c and xerces java library home page a href path to dependency file gemfile lock path to vulnerable library home wss scanner gem ruby cache nokogiri gem dependency hierarchy rails gem root library railties gem actionpack gem actionview gem rails dom testing gem x nokogiri linux gem vulnerable library found in head commit a href found in base branch master vulnerability details nokogiri is an open source xml and html library for the ruby programming language nokogiri and fail to check the return value from xmltextreaderexpand in the method nokogiri xml reader attribute hash this can lead to a null pointer exception when invalid markup is being parsed for applications using xml reader to parse untrusted inputs this may potentially be a vector for a denial of service attack users are advised to upgrade to nokogiri users may be able to search their code for calls to either xml reader attributes or xml reader attribute hash to determine if they are affected publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution nokogiri step up your open source security game with mend
| 0
|
308,495
| 23,250,900,720
|
IssuesEvent
|
2022-08-04 03:32:15
|
Requisitos-de-Software/2022.1-TikTok
|
https://api.github.com/repos/Requisitos-de-Software/2022.1-TikTok
|
closed
|
Atualização de cronograma - Sprint 3
|
documentation
|
## Descrição
Adição de detalhes da sprint 3
## Tarefas
- [x] Adicionar n° das issues
- [x] Adicionar descrição das issues
- [x] Adiconar data de entrega
- [x] Adicionar responsáveis
- [x] Adicionar revisores
- [x] Adicionar data de revisão
- [x] Atualizar versionamento
## Critérios de aceitação
- [x] Documento do cronograma atualizado no repositório
|
1.0
|
|
non_defect
|
atualização de cronograma sprint descrição adição de detalhes da sprint tarefas adicionar n° das issues adicionar descrição das issues adiconar data de entrega adicionar responsáveis adicionar revisores adicionar data de revisão atualizar versionamento critérios de aceitação documento do cronograma atualizado no repositório
| 0
|
60,622
| 17,023,474,668
|
IssuesEvent
|
2021-07-03 02:13:06
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Inappropriate preset value for tag Amenity:Bench
|
Component: admin Priority: trivial Resolution: invalid Type: defect
|
**[Submitted to the original trac issue database at 8.55am, Friday, 4th September 2009]**
The preset for amenity:bench in JOSM v2024 pops up with a default value of opening hours = 24/7! Most benches I know are indeed always open but do we really need this value?!? I assume that it has become a default for all amenities?
|
1.0
|
|
defect
|
inappropriate preset value for tag amenity bench the preset for amenity bench in josm pops up with a default value of opening hours most benches i know are indeed always open but do we really need this value i assume that it has become a default for all amenities
| 1
|
23,956
| 3,874,872,512
|
IssuesEvent
|
2016-04-11 22:04:29
|
ariya/phantomjs
|
https://api.github.com/repos/ariya/phantomjs
|
closed
|
PhantomJS 1.6.0 crashes on everything
|
old.Priority-Medium old.Status-New old.Type-Defect
|
_**[jum...@gmail.com](http://code.google.com/u/105611767941907564582/) commented:**_
> $ phantomjs -v
> 1.6.0
>
> $ cat /etc/lsb-release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=12.04
> DISTRIB_CODENAME=precise
> DISTRIB_DESCRIPTION="Ubuntu 12.04 LTS"
>
> $ cat /tmp/test.js
> console.log('Loading a web page');
> var page = new WebPage();
> var url = "http://www.phantomjs.org/";
> page.open(url, function (status) {
> //Page is loaded!
> console.log('haxy');
> phantom.exit();
> });
>
> $ phantomjs /tmp/test.js
> PhantomJS has crashed. Please file a bug report at https://code.google.com/p/phantomjs/issues/entry and attach the crash dump file: /tmp/75c2c075-c72e-001d-74b2a710-6feba682.dmp
> Segmentation fault (core dumped)
>
> I have tried it on several examples also.
> Used this distribution: http://code.google.com/p/phantomjs/downloads/detail?name=phantomjs-1.6.0-linux-x86_64-dynamic.tar.bz2&can=2&q=
>
> I got segfault everytime.
>
> Good luck!
> Wojtek
**Disclaimer:**
This issue was migrated on 2013-03-15 from the project's former issue tracker on Google Code, [Issue #644](http://code.google.com/p/phantomjs/issues/detail?id=644).
:star2: **4** people had starred this issue at the time of migration.
|
1.0
|
|
defect
|
phantomjs crashes on everything commented phantomjs v cat etc lsb release distrib id ubuntu distrib release distrib codename precise distrib description quot ubuntu lts quot cat tmp test js console log loading a web page var page new webpage var url quot page open url function status page is loaded console log haxy phantom exit phantomjs tmp test js phantomjs has crashed please file a bug report at and attach the crash dump file tmp dmp segmentation fault core dumped i have tried it on several examples also used this distribution i got segfault everytime good luck wojtek disclaimer this issue was migrated on from the project s former issue tracker on google code nbsp people had starred this issue at the time of migration
| 1
|
51,873
| 13,211,325,484
|
IssuesEvent
|
2020-08-15 22:19:25
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
[wavereform] TODO (Trac #1197)
|
Incomplete Migration Migrated from Trac combo reconstruction defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1197">https://code.icecube.wisc.edu/projects/icecube/ticket/1197</a>, reported by david.schultzand owned by jbraun</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"_ts": "1550067117911749",
"description": "decide whether this TODO is important, and if so make a ticket for it:\n{{{\npython/wavereform.py:\t\t\t# TODO: flag FADCs that saturate outside of the ATWD window.\n}}}",
"reporter": "david.schultz",
"cc": "",
"resolution": "duplicate",
"time": "2015-08-19T18:10:05",
"component": "combo reconstruction",
"summary": "[wavereform] TODO",
"priority": "critical",
"keywords": "",
"milestone": "",
"owner": "jbraun",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
|
defect
|
todo trac migrated from json status closed changetime ts description decide whether this todo is important and if so make a ticket for it n npython wavereform py t t t todo flag fadcs that saturate outside of the atwd window n reporter david schultz cc resolution duplicate time component combo reconstruction summary todo priority critical keywords milestone owner jbraun type defect
| 1
|
81,725
| 15,795,163,403
|
IssuesEvent
|
2021-04-02 12:35:52
|
numbbo/coco
|
https://api.github.com/repos/numbbo/coco
|
closed
|
Additional plots
|
Code-Postprocessing Feature-request
|
[By @brockho] An alternative to showing all 15 function groups, induced by the original 5 bbob groups, is to only display the aggregation over all functions for which both, one, or none of the objectives are unimodal.
|
1.0
|
|
non_defect
|
additional plots an alternative to showing all function groups induced by the original bbob groups is to only display the aggregation over all functions for which both one or none of the objectives are unimodal
| 0
|
444,913
| 31,155,998,065
|
IssuesEvent
|
2023-08-16 13:10:29
|
Neko7sora/www.kamesuta.com
|
https://api.github.com/repos/Neko7sora/www.kamesuta.com
|
closed
|
新生活鯖 Season Re:2
|
documentation
|
## ゴール (終了条件)
新生活鯖 Season Re:2 に関して記載する
## 現状
null
## 作業内容
## 補足
## 要望
|
1.0
|
|
non_defect
|
新生活鯖 season re ゴール 終了条件 新生活鯖 season re に関して記載する 現状 null 作業内容 補足 要望
| 0
|
439,334
| 30,691,602,369
|
IssuesEvent
|
2023-07-26 15:30:52
|
RedHat-UX/red-hat-design-system
|
https://api.github.com/repos/RedHat-UX/red-hat-design-system
|
closed
|
[docs] Design tokens section docs need review/cleanup
|
documentation for dev high priority
|
### Description
We need to clean up the design tokens section on ux dot. I included a content doc and the location of the images on Dropbox.
I don't think we need search on every page, so what do you think about putting it on one page only with all design tokens in a list?
### Acceptance Criteria
- [X] Design done
- [ ] Development done
### Image
_No response_
### Link to design doc
https://xd.adobe.com/view/e26f0898-9926-4577-92f0-bb3bb122af9a-b73c/
### Other resources
Content doc: https://docs.google.com/document/d/1RQMGHi6GF8EtJ-tI4psI1BgUmkK9DmyMcJMWFggTG1U/edit
Images: https://www.dropbox.com/sh/009wow69ose1tko/AAB0Bu8va6Rwe_tjrME4CfM1a?dl=0
|
1.0
|
|
non_defect
|
design tokens section docs need review cleanup description we need to clean up the design tokens section on ux dot i included a content doc and the location of the images on dropbox i don t think we need search on every page so what do you think about putting it on one page only with all design tokens in a list acceptance criteria design done development done image no response link to design doc other resources content doc images
| 0
|
36,374
| 9,798,165,265
|
IssuesEvent
|
2019-06-11 11:44:49
|
gradle/gradle
|
https://api.github.com/repos/gradle/gradle
|
closed
|
`reproducibleFileOrder` does not work with include/exclude patterns
|
@build-cache
|
### Context
I was attempting to create a zip file with identical set of files, but the zip file contents are different, even though `reproducibleFileOrder` and `preserveFileTimestamps` are turned on for reproducible builds.
### Steps to Reproduce
1. Download the file `https://mirrors.edge.kernel.org/pub/software/scm/git/git-1.8.2.3.tar.gz`
2. Create the following `build.gradle` file
```
apply plugin: 'base'
task extractTar(type: Copy) {
from tarTree('git-1.8.2.3.tar.gz')
into "${project.buildDir}/tmp"
}
task goodZip(type: Zip) {
archiveBaseName = 'good'
reproducibleFileOrder = true
preserveFileTimestamps = false
from(extractTar) { include 'git-1.8.2.3/bundle.c' }
from(extractTar) { include 'git-1.8.2.3/connect.c' }
}
task badZip(type: Zip) {
archiveBaseName = 'bad'
reproducibleFileOrder = true
preserveFileTimestamps = false
from(extractTar) { include 'git-1.8.2.3/connect.c' }
from(extractTar) { include 'git-1.8.2.3/bundle.c' }
}
```
3. run `gradle clean goodZip badZip`
4. run `diff build/distributions/*`
5. The output files `bad.zip` and `good.zip` will differ.
### Your Environment
Build scan URL: https://gradle.com/s/qh744lzolsa4u
|
1.0
|
|
non_defect
|
reproduciblefileorder does not work with include exclude patterns context i was attempting to create a zip file with identical set of files but the zip file contents are different even though reproduciblefileorder and preservefiletimestamps are turned on for reproducible builds steps to reproduce download the file create the following build gradle file apply plugin base task extracttar type copy from tartree git tar gz into project builddir tmp task goodzip type zip archivebasename good reproduciblefileorder true preservefiletimestamps false from extracttar include git bundle c from extracttar include git connect c task badzip type zip archivebasename bad reproduciblefileorder true preservefiletimestamps false from extracttar include git connect c from extracttar include git bundle c run gradle clean goodzip badzip run diff build distributions the output files bad zip and good zip will differ your environment build scan url
| 0
|
225,369
| 17,264,004,485
|
IssuesEvent
|
2021-07-22 11:33:51
|
ProjectDrawdown/solutions
|
https://api.github.com/repos/ProjectDrawdown/solutions
|
closed
|
Updates needed to CONTRIBUTING page
|
documentation
|
https://github.com/ProjectDrawdown/solutions/blob/develop/CONTRIBUTING.md
Problem:
Our contributing page is a little out of date. We need to set clear process and communication standards for our open source community.
Acceptance Criteria:
- New open source contributors can access this page and see a up to date and accurate process for successful async collaboration.
- devs know where to look for issues that may be good for them
- devs understand how to submit their work for PR review
- code standards
- testing process & standards
|
1.0
|
Updates needed to CONTRIBUTING page - https://github.com/ProjectDrawdown/solutions/blob/develop/CONTRIBUTING.md
Problem:
Our contributing page is a little out of date. We need to set clear process and communication standards for our open source community.
Acceptance Criteria:
- New open source contributors can access this page and see an up-to-date and accurate process for successful async collaboration.
- devs know where to look for issues that may be good for them
- devs understand how to submit their work for PR review
- code standards
- testing process & standards
|
non_defect
|
updates needed to contributing page problem our contributing page is a little out of date we need to set clear process and communication standards for our open source community acceptance criteria new open source contributors can access this page and see a up to date and accurate process for successful async collaboration devs know where to look for issues that may be good for them devs understand how to submit their work for pr review code standards testing process standards
| 0
|
165,565
| 20,600,511,009
|
IssuesEvent
|
2022-03-06 07:00:54
|
husnuljahneer/vue-ecommerce
|
https://api.github.com/repos/husnuljahneer/vue-ecommerce
|
opened
|
axios-0.25.0.tgz: 1 vulnerabilities (highest severity is: 5.9)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>axios-0.25.0.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/husnuljahneer/vue-ecommerce/commit/acda02eff2f9900e694b459484945999048692bc">acda02eff2f9900e694b459484945999048692bc</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-0536](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.9 | follow-redirects-1.14.7.tgz | Transitive | 0.26.0 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-0536</summary>
### Vulnerable Library - <b>follow-redirects-1.14.7.tgz</b></p>
<p>HTTP and HTTPS modules that follow redirects.</p>
<p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.7.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.7.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p>
<p>
Dependency Hierarchy:
- axios-0.25.0.tgz (Root Library)
- :x: **follow-redirects-1.14.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/husnuljahneer/vue-ecommerce/commit/acda02eff2f9900e694b459484945999048692bc">acda02eff2f9900e694b459484945999048692bc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 1.14.8.
<p>Publish Date: 2022-02-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536>CVE-2022-0536</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.9</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
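The listed base score follows from these metrics via the base-score equations in the CVSS v3.0 specification. A sketch (only the weights needed here, and only the Scope: Unchanged case):

```python
import math

# Metric weights from the CVSS v3.0 specification (Scope: Unchanged case only)
AV = {"Network": 0.85}
AC = {"Low": 0.77, "High": 0.44}
PR = {"None": 0.85}
UI = {"None": 0.85}
CIA = {"None": 0.0, "Low": 0.22, "High": 0.56}

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # The spec's "round up" means ceiling to one decimal place
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# Metrics listed above: AV:N / AC:H / PR:N / UI:N / S:U / C:H / I:N / A:N
print(base_score("Network", "High", "None", "None", "High", "None", "None"))  # 5.9
```

The same function reproduces the 8.1 of a high-C/I/A vulnerability with otherwise identical metrics.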
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536</a></p>
<p>Release Date: 2022-02-09</p>
<p>Fix Resolution (follow-redirects): 1.14.8</p>
<p>Direct dependency fix Resolution (axios): 0.26.0</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"axios","packageVersion":"0.25.0","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"axios:0.25.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.26.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-0536","vulnerabilityDetails":"Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 1.14.8.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}]</REMEDIATE> -->
|
True
|
axios-0.25.0.tgz: 1 vulnerabilities (highest severity is: 5.9) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>axios-0.25.0.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/husnuljahneer/vue-ecommerce/commit/acda02eff2f9900e694b459484945999048692bc">acda02eff2f9900e694b459484945999048692bc</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-0536](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.9 | follow-redirects-1.14.7.tgz | Transitive | 0.26.0 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-0536</summary>
### Vulnerable Library - <b>follow-redirects-1.14.7.tgz</b></p>
<p>HTTP and HTTPS modules that follow redirects.</p>
<p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.7.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.7.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p>
<p>
Dependency Hierarchy:
- axios-0.25.0.tgz (Root Library)
- :x: **follow-redirects-1.14.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/husnuljahneer/vue-ecommerce/commit/acda02eff2f9900e694b459484945999048692bc">acda02eff2f9900e694b459484945999048692bc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 1.14.8.
<p>Publish Date: 2022-02-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536>CVE-2022-0536</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.9</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536</a></p>
<p>Release Date: 2022-02-09</p>
<p>Fix Resolution (follow-redirects): 1.14.8</p>
<p>Direct dependency fix Resolution (axios): 0.26.0</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"axios","packageVersion":"0.25.0","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"axios:0.25.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.26.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-0536","vulnerabilityDetails":"Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 1.14.8.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}]</REMEDIATE> -->
|
non_defect
|
axios tgz vulnerabilities highest severity is vulnerable library axios tgz path to dependency file package json path to vulnerable library node modules follow redirects package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available medium follow redirects tgz transitive ❌ details cve vulnerable library follow redirects tgz http and https modules that follow redirects library home page a href path to dependency file package json path to vulnerable library node modules follow redirects package json dependency hierarchy axios tgz root library x follow redirects tgz vulnerable library found in head commit a href found in base branch master vulnerability details exposure of sensitive information to an unauthorized actor in npm follow redirects prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution follow redirects direct dependency fix resolution axios step up your open source security game with whitesource istransitivedependency false dependencytree axios isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails exposure of sensitive information to an unauthorized actor in npm follow redirects prior to vulnerabilityurl
| 0
|
55,523
| 14,532,479,312
|
IssuesEvent
|
2020-12-14 22:30:42
|
hpcc-systems/Tombolo
|
https://api.github.com/repos/hpcc-systems/Tombolo
|
closed
|
Group Hierarchy Issues
|
Defect
|
- [x] Don’t need line numbers in Description box
- [x] Create a group with no description. Go back and edit the description. Typed text is hidden behind the line number zone
- [x] Create several groups in a hierarchy with reasonably long names until the last group name overflows the right margin. Try to edit. The selector overflows to the next line, but cannot reach it because it disappears before you can click it. Perhaps a horizontal scroll bar on the tree view?
- [x] Group names should be sorted in the tree
- [x] Allows multiple groups with the same name under a given parent. Names should be unique within the parent scope.
- [x] In create group dialog, enter name and press Enter. Shouldn’t this be equivalent to okay? Nothing happens upon Enter
- [x] In create group dialog, enter name = ‘~~~~~’. Error message shows instantly and then exits. User should have an opportunity to fix the error
|
1.0
|
Group Hierarchy Issues - - [x] Don’t need line numbers in Description box
- [x] Create a group with no description. Go back and edit the description. Typed text is hidden behind the line number zone
- [x] Create several groups in a hierarchy with reasonably long names until the last group name overflows the right margin. Try to edit. The selector overflows to the next line, but cannot reach it because it disappears before you can click it. Perhaps a horizontal scroll bar on the tree view?
- [x] Group names should be sorted in the tree
- [x] Allows multiple groups with the same name under a given parent. Names should be unique within the parent scope.
- [x] In create group dialog, enter name and press Enter. Shouldn’t this be equivalent to okay? Nothing happens upon Enter
- [x] In create group dialog, enter name = ‘~~~~~’. Error message shows instantly and then exits. User should have an opportunity to fix the error
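The sibling-uniqueness item could be enforced with a check along these lines (hypothetical data model, not the actual Tombolo code):

```python
def find_duplicate_siblings(groups):
    """Return names that appear more than once under the same parent.

    `groups` is a list of (parent_id, name) pairs; names are compared
    case-insensitively within each parent scope.
    """
    seen = set()
    duplicates = set()
    for parent_id, name in groups:
        key = (parent_id, name.strip().lower())
        if key in seen:
            duplicates.add(name.strip().lower())
        else:
            seen.add(key)
    return duplicates

groups = [(1, "Sales"), (1, "sales"), (2, "Sales")]
print(find_duplicate_siblings(groups))  # {'sales'}: same name twice under parent 1
```

Running such a check before insert (or adding a composite unique constraint on parent id plus name) would reject duplicate names within a parent while still allowing the same name under different parents.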
|
defect
|
group hierarchy issues don’t need line numbers in description box create a group with no description go back and edit the description typed text is hidden behind the line number zone create several groups in a hierarchy with reasonably long names until the last group name overflows the right margin try to edit the selector overflows to the next line but cannot reach it because it disappears before you can click it perhaps a horizontal scroll bar on the tree view group names should be sorted in the tree allows multiple groups with the same name under a given parent names should be unique within the parent scope in create group dialog enter name and press enter shouldn’t this be equivalent to okay nothing happens upon enter in create group dialog enter name ‘ ’ error message shows instantly and then exits user should have an opportunity to fix the error
| 1
|
26,245
| 12,393,934,667
|
IssuesEvent
|
2020-05-20 16:09:16
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Support System.DirectoryServices.Protocols on Linux/Mac
|
area-System.DirectoryServices enhancement os-linux os-mac-os-x untriaged up-for-grabs
|
Port `System.DirectoryServices.Protocols` to Linux & Mac -- we need to decide on an x-plat LDAP library to use first
Note: Offshoot from larger topic dotnet/runtime#14734
|
1.0
|
Support System.DirectoryServices.Protocols on Linux/Mac - Port `System.DirectoryServices.Protocols` to Linux & Mac -- we need to decide on an x-plat LDAP library to use first
Note: Offshoot from larger topic dotnet/runtime#14734
|
non_defect
|
support system directoryservices protocols on linux mac port system directoryservices protocols to linux mac we need to decide on x plat ldap library to use first note offshoot from larger topic dotnet runtime
| 0
|
73,649
| 24,732,811,634
|
IssuesEvent
|
2022-10-20 19:08:06
|
vector-im/element-android
|
https://api.github.com/repos/vector-im/element-android
|
opened
|
`m.location` body is neither a location description nor a user-friendly content description
|
T-Defect
|
### Steps to reproduce
1. Send your location to a room by selecting a random point on a map
### Outcome
#### What did you expect?
Element should send an `m.location` event, whose [`body` is "A description of the location e.g. ‘Big Ben, London, UK’, or some kind of content description for accessibility e.g. ’location attachment’."](https://spec.matrix.org/v1.4/client-server-api/#mlocation)
#### What happened instead?
Element copied the `geo_uri` in the `body`:
```
{
"content": {
"body": "geo:<redacted>,<redacted>;u=<redacted>",
"geo_uri": "geo:<redacted>,<redacted>;u=<redacted>",
"msgtype": "m.location",
"org.matrix.msc1767.text": "geo:<redacted>,<redacted>;u=<redacted>",
"org.matrix.msc3488.asset": {
"type": "m.self"
},
"org.matrix.msc3488.location": {
"description": "geo:<redacted>,<redacted>;u=<redacted>",
"uri": "geo:<redacted>,<redacted>;u=<redacted>"
},
"org.matrix.msc3488.ts": 1666292534921
},
"origin_server_ts": 1666292535068,
"sender": "@<redacted>:matrix.org",
"type": "m.room.message",
"unsigned": {
"age": 283
},
"event_id": "<redacted>",
"room_id": "<redacted>"
}
```
### Your phone model
_No response_
### Operating system version
_No response_
### Application version and app store
_No response_
### Homeserver
_No response_
### Will you send logs?
No
### Are you willing to provide a PR?
No
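Per the spec text quoted above, a compliant event would carry a human-readable `body` alongside the `geo_uri` rather than copying the URI into it. A sketch of building such content (illustrative values only, not Element's actual code):

```python
def make_location_content(geo_uri, description):
    """Build m.location message content with a human-readable body,
    as the Matrix spec requires, instead of duplicating the geo URI."""
    return {
        "msgtype": "m.location",
        "body": description,  # e.g. "Big Ben, London, UK" or "location attachment"
        "geo_uri": geo_uri,
    }

content = make_location_content("geo:51.5007,-0.1246", "Big Ben, London, UK")
print(content["body"] != content["geo_uri"])  # True: body is a description, not the URI
```

Clients that cannot reverse-geocode could still fall back to a generic accessibility string such as "location attachment".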
|
1.0
|
`m.location` body is neither a location description nor a user-friendly content description - ### Steps to reproduce
1. Send your location to a room by selecting a random point on a map
### Outcome
#### What did you expect?
Element should send an `m.location` event, whose [`body` is "A description of the location e.g. ‘Big Ben, London, UK’, or some kind of content description for accessibility e.g. ’location attachment’."](https://spec.matrix.org/v1.4/client-server-api/#mlocation)
#### What happened instead?
Element copied the `geo_uri` in the `body`:
```
{
"content": {
"body": "geo:<redacted>,<redacted>;u=<redacted>",
"geo_uri": "geo:<redacted>,<redacted>;u=<redacted>",
"msgtype": "m.location",
"org.matrix.msc1767.text": "geo:<redacted>,<redacted>;u=<redacted>",
"org.matrix.msc3488.asset": {
"type": "m.self"
},
"org.matrix.msc3488.location": {
"description": "geo:<redacted>,<redacted>;u=<redacted>",
"uri": "geo:<redacted>,<redacted>;u=<redacted>"
},
"org.matrix.msc3488.ts": 1666292534921
},
"origin_server_ts": 1666292535068,
"sender": "@<redacted>:matrix.org",
"type": "m.room.message",
"unsigned": {
"age": 283
},
"event_id": "<redacted>",
"room_id": "<redacted>"
}
```
### Your phone model
_No response_
### Operating system version
_No response_
### Application version and app store
_No response_
### Homeserver
_No response_
### Will you send logs?
No
### Are you willing to provide a PR?
No
|
defect
|
m location body is neither a location description or a user friendly content description steps to reproduce send your location to a room by selecting a random point on a map outcome what did you expect element should send a m location event whose what happened instead element copied the geo uri in the body content body geo u geo uri geo u msgtype m location org matrix text geo u org matrix asset type m self org matrix location description geo u uri geo u org matrix ts origin server ts sender matrix org type m room message unsigned age event id room id your phone model no response operating system version no response application version and app store no response homeserver no response will you send logs no are you willing to provide a pr no
| 1
|
33,547
| 7,158,491,437
|
IssuesEvent
|
2018-01-27 01:10:56
|
Vacuum-IM/vacuum-im
|
https://api.github.com/repos/Vacuum-IM/vacuum-im
|
closed
|
patch to enable SYSTEM QXTGLOBALSHORTCUT
|
Component-Distribution Defect Task
|
Hi,
I'm working on an RPM package for vacuum-im on Fedora.
I created a patch for vacuum-im that checks for and uses a system-installed qxtglobalshortcut version.
https://martinkg.fedorapeople.org/Packages/vacuum/qt5/vacuum-im-unbundle-qxtglobalshortcut.patch
The patch works for me on Fedora.
Can someone please review the patch and check whether it is correct to be included in the git.
Thanks
|
1.0
|
patch to enable SYSTEM QXTGLOBALSHORTCUT - Hi,
I'm working on an RPM package for vacuum-im on Fedora.
I created a patch for vacuum-im that checks for and uses a system-installed qxtglobalshortcut version.
https://martinkg.fedorapeople.org/Packages/vacuum/qt5/vacuum-im-unbundle-qxtglobalshortcut.patch
The patch works for me on Fedora.
Can someone please review the patch and check whether it is correct to be included in the git.
Thanks
|
defect
|
patch to enable system qxtglobalshortcut hi i am working on a rpm package for vacuum im on fedora i created a patch for vaccum im that check and use a system installed qxtglobalshortcutversion the patch works for me on fedora can someone please check check if the patch is correct to be included in the git thanks
| 1
|
341,260
| 24,691,279,023
|
IssuesEvent
|
2022-10-19 08:44:15
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
License header should use HTTPS rather than HTTP
|
T: Enhancement C: Documentation P: Low R: Fixed E: All Editions
|
Our license header is referencing online documents via HTTP instead of HTTPS, including:
- http://www.apache.org/licenses/LICENSE-2.0 (should be https://www.apache.org/licenses/LICENSE-2.0)
- http://www.jooq.org/licenses (should be https://www.jooq.org/legal/licensing)
|
1.0
|
License header should use HTTPS rather than HTTP - Our license header is referencing online documents via HTTP instead of HTTPS, including:
- http://www.apache.org/licenses/LICENSE-2.0 (should be https://www.apache.org/licenses/LICENSE-2.0)
- http://www.jooq.org/licenses (should be https://www.jooq.org/legal/licensing)
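A rewrite like the one requested can be sketched as a simple substitution over the header text (illustrative only, not jOOQ's actual tooling; note the second link additionally moved to a new path, which a plain scheme rewrite does not cover):

```python
import re

def upgrade_header_links(header):
    """Rewrite the known http:// documentation links in a license header to https://."""
    return re.sub(r"http://(www\.(?:apache|jooq)\.org)", r"https://\1", header)

header = (
    "Licensed under http://www.apache.org/licenses/LICENSE-2.0\n"
    "See http://www.jooq.org/licenses"
)
print(upgrade_header_links(header))
```

Restricting the pattern to the two known hosts avoids accidentally rewriting http:// URLs elsewhere in the headers.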
|
non_defect
|
license header should use https rather than http our license header is referencing online documents via http instead of https including should be should be
| 0
|
91,002
| 15,856,356,511
|
IssuesEvent
|
2021-04-08 02:08:58
|
cheroto/BirlCode
|
https://api.github.com/repos/cheroto/BirlCode
|
opened
|
CVE-2020-28502 (High) detected in xmlhttprequest-ssl-1.5.5.tgz
|
security vulnerability
|
## CVE-2020-28502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: BirlCode/package.json</p>
<p>Path to vulnerable library: BirlCode/node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- karma-3.1.4.tgz (Root Library)
- socket.io-2.1.1.tgz
- socket.io-client-2.1.1.tgz
- engine.io-client-3.2.1.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package xmlhttprequest before 1.7.0; all versions of package xmlhttprequest-ssl. Provided requests are sent synchronously (async=False on xhr.open), malicious user input flowing into xhr.send could result in arbitrary code being injected and run.
<p>Publish Date: 2021-03-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28502>CVE-2020-28502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502</a></p>
<p>Release Date: 2021-03-05</p>
<p>Fix Resolution: xmlhttprequest - 1.7.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28502 (High) detected in xmlhttprequest-ssl-1.5.5.tgz - ## CVE-2020-28502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: BirlCode/package.json</p>
<p>Path to vulnerable library: BirlCode/node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- karma-3.1.4.tgz (Root Library)
- socket.io-2.1.1.tgz
- socket.io-client-2.1.1.tgz
- engine.io-client-3.2.1.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package xmlhttprequest before 1.7.0; all versions of package xmlhttprequest-ssl. Provided requests are sent synchronously (async=False on xhr.open), malicious user input flowing into xhr.send could result in arbitrary code being injected and run.
<p>Publish Date: 2021-03-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28502>CVE-2020-28502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502</a></p>
<p>Release Date: 2021-03-05</p>
<p>Fix Resolution: xmlhttprequest - 1.7.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in xmlhttprequest ssl tgz cve high severity vulnerability vulnerable library xmlhttprequest ssl tgz xmlhttprequest for node library home page a href path to dependency file birlcode package json path to vulnerable library birlcode node modules xmlhttprequest ssl package json dependency hierarchy karma tgz root library socket io tgz socket io client tgz engine io client tgz x xmlhttprequest ssl tgz vulnerable library found in base branch master vulnerability details this affects the package xmlhttprequest before all versions of package xmlhttprequest ssl provided requests are sent synchronously async false on xhr open malicious user input flowing into xhr send could result in arbitrary code being injected and run publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution xmlhttprequest step up your open source security game with whitesource
| 0
|
563,969
| 16,706,712,171
|
IssuesEvent
|
2021-06-09 10:51:43
|
googleapis/google-api-ruby-client
|
https://api.github.com/repos/googleapis/google-api-ruby-client
|
closed
|
Synthesis failed for language-v1beta2
|
autosynth failure priority: p1 type: bug
|
Hello! Autosynth couldn't regenerate language-v1beta2. :broken_heart:
Please investigate and fix this issue within 5 business days. While it remains broken,
this library cannot be updated with changes to the language-v1beta2 API, and the library grows
stale.
See https://github.com/googleapis/synthtool/blob/master/autosynth/TroubleShooting.md
for trouble shooting tips.
Here's the output from running `synth.py`:
```
2021-06-08 03:17:58,796 autosynth [INFO] > logs will be written to: /tmpfs/src/logs/google-api-ruby-client
2021-06-08 03:17:59,578 autosynth [DEBUG] > Running: git config --global core.excludesfile /home/kbuilder/.autosynth-gitignore
2021-06-08 03:17:59,581 autosynth [DEBUG] > Running: git config user.name yoshi-automation
2021-06-08 03:17:59,583 autosynth [DEBUG] > Running: git config user.email yoshi-automation@google.com
2021-06-08 03:17:59,585 autosynth [DEBUG] > Running: git config push.default simple
2021-06-08 03:17:59,587 autosynth [DEBUG] > Running: git branch -f autosynth-language-v1beta2
2021-06-08 03:17:59,590 autosynth [DEBUG] > Running: git checkout autosynth-language-v1beta2
Switched to branch 'autosynth-language-v1beta2'
2021-06-08 03:17:59,795 autosynth [INFO] > Running synthtool
2021-06-08 03:17:59,795 autosynth [INFO] > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'generated/google-apis-language_v1beta2/synth.metadata', 'synth.py', '--']
2021-06-08 03:17:59,795 autosynth [DEBUG] > log_file_path: /tmpfs/src/logs/google-api-ruby-client/language/v1beta2/sponge_log.log
2021-06-08 03:17:59,796 autosynth [DEBUG] > Running: /tmpfs/src/github/synthtool/env/bin/python3 -m synthtool --metadata generated/google-apis-language_v1beta2/synth.metadata synth.py -- language v1beta2
2021-06-08 03:17:59,994 synthtool [DEBUG] > Executing /home/kbuilder/.cache/synthtool/google-api-ruby-client/synth.py.
On branch autosynth-language-v1beta2
nothing to commit, working tree clean
2021-06-08 03:18:00,050 synthtool [DEBUG] > Running: docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth language v1beta2
DEBUG:synthtool:Running: docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth language v1beta2
git clean -df
bundle install
Don't run Bundler as root. Bundler can ask for sudo if it is needed, and
installing your bundle as root will break this application for all non-root
users on this machine.
Fetching source index from https://rubygems.org/
Retrying fetcher due to error (2/4): Bundler::HTTPError Could not fetch specs from https://rubygems.org/ due to underlying error <bad response Service Unavailable 503 (https://rubygems.org/specs.4.8.gz)>
Net::HTTPServiceUnavailable:
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<title>503 Service Unavailable</title>
</head>
<body>
<h1>Error 503 Service Unavailable</h1>
<p>Service Unavailable</p>
<h3>Guru Mediation:</h3>
<p>Details: cache-sea4458-SEA 1623147488 1343415525</p>
<hr>
<p>Varnish cache server</p>
</body>
</html>
chown -R 1000:1000 /workspace/generated
2021-06-08 03:18:07,961 synthtool [ERROR] > Failed executing docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth language v1beta2:
None
ERROR:synthtool:Failed executing docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth language v1beta2:
None
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1062, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/kbuilder/.cache/synthtool/google-api-ruby-client/synth.py", line 41, in <module>
shell.run(command, hide_output=False)
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['docker', 'run', '--rm', '-v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace', '-v/var/run/docker.sock:/var/run/docker.sock', '-w', '/workspace', '-e', 'USER_GROUP=1000:1000', '--entrypoint', 'script/synth.rb', 'gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth', 'language', 'v1beta2']' returned non-zero exit status 1.
2021-06-08 03:18:07,986 autosynth [ERROR] > Synthesis failed
2021-06-08 03:18:07,986 autosynth [DEBUG] > Running: git clean -fdx
Removing __pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 356, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 191, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 293, in _inner_main
).synthesize(synth_log_path / "sponge_log.log")
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'generated/google-apis-language_v1beta2/synth.metadata', 'synth.py', '--', 'language', 'v1beta2']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/bdd2ce63-997e-4095-81b3-cd54e5fccbe4/targets/github%2Fsynthtool;config=default/tests;query=google-api-ruby-client;failed=false).
|
1.0
|
Synthesis failed for language-v1beta2 - Hello! Autosynth couldn't regenerate language-v1beta2. :broken_heart:
Please investigate and fix this issue within 5 business days. While it remains broken,
this library cannot be updated with changes to the language-v1beta2 API, and the library grows
stale.
See https://github.com/googleapis/synthtool/blob/master/autosynth/TroubleShooting.md
for trouble shooting tips.
Here's the output from running `synth.py`:
```
2021-06-08 03:17:58,796 autosynth [INFO] > logs will be written to: /tmpfs/src/logs/google-api-ruby-client
2021-06-08 03:17:59,578 autosynth [DEBUG] > Running: git config --global core.excludesfile /home/kbuilder/.autosynth-gitignore
2021-06-08 03:17:59,581 autosynth [DEBUG] > Running: git config user.name yoshi-automation
2021-06-08 03:17:59,583 autosynth [DEBUG] > Running: git config user.email yoshi-automation@google.com
2021-06-08 03:17:59,585 autosynth [DEBUG] > Running: git config push.default simple
2021-06-08 03:17:59,587 autosynth [DEBUG] > Running: git branch -f autosynth-language-v1beta2
2021-06-08 03:17:59,590 autosynth [DEBUG] > Running: git checkout autosynth-language-v1beta2
Switched to branch 'autosynth-language-v1beta2'
2021-06-08 03:17:59,795 autosynth [INFO] > Running synthtool
2021-06-08 03:17:59,795 autosynth [INFO] > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'generated/google-apis-language_v1beta2/synth.metadata', 'synth.py', '--']
2021-06-08 03:17:59,795 autosynth [DEBUG] > log_file_path: /tmpfs/src/logs/google-api-ruby-client/language/v1beta2/sponge_log.log
2021-06-08 03:17:59,796 autosynth [DEBUG] > Running: /tmpfs/src/github/synthtool/env/bin/python3 -m synthtool --metadata generated/google-apis-language_v1beta2/synth.metadata synth.py -- language v1beta2
2021-06-08 03:17:59,994 synthtool [DEBUG] > Executing /home/kbuilder/.cache/synthtool/google-api-ruby-client/synth.py.
On branch autosynth-language-v1beta2
nothing to commit, working tree clean
2021-06-08 03:18:00,050 synthtool [DEBUG] > Running: docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth language v1beta2
DEBUG:synthtool:Running: docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth language v1beta2
git clean -df
bundle install
Don't run Bundler as root. Bundler can ask for sudo if it is needed, and
installing your bundle as root will break this application for all non-root
users on this machine.
Fetching source index from https://rubygems.org/
Retrying fetcher due to error (2/4): Bundler::HTTPError Could not fetch specs from https://rubygems.org/ due to underlying error <bad response Service Unavailable 503 (https://rubygems.org/specs.4.8.gz)>
Net::HTTPServiceUnavailable:
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<title>503 Service Unavailable</title>
</head>
<body>
<h1>Error 503 Service Unavailable</h1>
<p>Service Unavailable</p>
<h3>Guru Mediation:</h3>
<p>Details: cache-sea4458-SEA 1623147488 1343415525</p>
<hr>
<p>Varnish cache server</p>
</body>
</html>
chown -R 1000:1000 /workspace/generated
2021-06-08 03:18:07,961 synthtool [ERROR] > Failed executing docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth language v1beta2:
None
ERROR:synthtool:Failed executing docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth language v1beta2:
None
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1062, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/kbuilder/.cache/synthtool/google-api-ruby-client/synth.py", line 41, in <module>
shell.run(command, hide_output=False)
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['docker', 'run', '--rm', '-v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace', '-v/var/run/docker.sock:/var/run/docker.sock', '-w', '/workspace', '-e', 'USER_GROUP=1000:1000', '--entrypoint', 'script/synth.rb', 'gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth', 'language', 'v1beta2']' returned non-zero exit status 1.
2021-06-08 03:18:07,986 autosynth [ERROR] > Synthesis failed
2021-06-08 03:18:07,986 autosynth [DEBUG] > Running: git clean -fdx
Removing __pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 356, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 191, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 293, in _inner_main
).synthesize(synth_log_path / "sponge_log.log")
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'generated/google-apis-language_v1beta2/synth.metadata', 'synth.py', '--', 'language', 'v1beta2']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/bdd2ce63-997e-4095-81b3-cd54e5fccbe4/targets/github%2Fsynthtool;config=default/tests;query=google-api-ruby-client;failed=false).
|
non_defect
|
synthesis failed for language hello autosynth couldn t regenerate language broken heart please investigate and fix this issue within business days while it remains broken this library cannot be updated with changes to the language api and the library grows stale see for trouble shooting tips here s the output from running synth py autosynth logs will be written to tmpfs src logs google api ruby client autosynth running git config global core excludesfile home kbuilder autosynth gitignore autosynth running git config user name yoshi automation autosynth running git config user email yoshi automation google com autosynth running git config push default simple autosynth running git branch f autosynth language autosynth running git checkout autosynth language switched to branch autosynth language autosynth running synthtool autosynth autosynth log file path tmpfs src logs google api ruby client language sponge log log autosynth running tmpfs src github synthtool env bin m synthtool metadata generated google apis language synth metadata synth py language synthtool executing home kbuilder cache synthtool google api ruby client synth py on branch autosynth language nothing to commit working tree clean synthtool running docker run rm v home kbuilder cache synthtool google api ruby client workspace v var run docker sock var run docker sock w workspace e user group entrypoint script synth rb gcr io cloud devrel kokoro resources yoshi ruby autosynth language debug synthtool running docker run rm v home kbuilder cache synthtool google api ruby client workspace v var run docker sock var run docker sock w workspace e user group entrypoint script synth rb gcr io cloud devrel kokoro resources yoshi ruby autosynth language git clean df bundle install don t run bundler as root bundler can ask for sudo if it is needed and installing your bundle as root will break this application for all non root users on this machine fetching source index from retrying fetcher due to error bundler 
httperror could not fetch specs from due to underlying error bad response service unavailable net httpserviceunavailable doctype html public dtd xhtml strict en service unavailable error service unavailable service unavailable guru mediation details cache sea varnish cache server chown r workspace generated synthtool failed executing docker run rm v home kbuilder cache synthtool google api ruby client workspace v var run docker sock var run docker sock w workspace e user group entrypoint script synth rb gcr io cloud devrel kokoro resources yoshi ruby autosynth language none error synthtool failed executing docker run rm v home kbuilder cache synthtool google api ruby client workspace v var run docker sock var run docker sock w workspace e user group entrypoint script synth rb gcr io cloud devrel kokoro resources yoshi ruby autosynth language none traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool synthtool main py line in main file tmpfs src github synthtool env lib site packages click core py line in call return self main args kwargs file tmpfs src github synthtool env lib site packages click core py line in main rv self invoke ctx file tmpfs src github synthtool env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src github synthtool env lib site packages click core py line in invoke return callback args kwargs file tmpfs src github synthtool synthtool main py line in main spec loader exec module synth module type ignore file line in exec module file line in call with frames removed file home kbuilder cache synthtool google api ruby client synth py line in shell run command hide output false file tmpfs src github synthtool synthtool shell py line in run raise exc file tmpfs src github synthtool synthtool shell py line in run 
encoding utf file home kbuilder pyenv versions lib subprocess py line in run output stdout stderr stderr subprocess calledprocesserror command returned non zero exit status autosynth synthesis failed autosynth running git clean fdx removing pycache traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main synthesize synth log path sponge log log file tmpfs src github synthtool autosynth synthesizer py line in synthesize synth proc check returncode raise an exception file home kbuilder pyenv versions lib subprocess py line in check returncode self stderr subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log
| 0
|
89,607
| 15,831,469,380
|
IssuesEvent
|
2021-04-06 13:40:02
|
azmathasan92/concourse-ci-cd
|
https://api.github.com/repos/azmathasan92/concourse-ci-cd
|
opened
|
CVE-2019-10202 (High) detected in jackson-databind-2.9.6.jar
|
security vulnerability
|
## CVE-2019-10202 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: concourse-ci-cd/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-webflux-2.0.4.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.0.4.RELEASE.jar
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/azmathasan92/concourse-ci-cd/commits/25189b3c991f7766c09157948e0bc21f27ada4f9">25189b3c991f7766c09157948e0bc21f27ada4f9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A series of deserialization vulnerabilities have been discovered in Codehaus 1.9.x implemented in EAP 7. This CVE fixes CVE-2017-17485, CVE-2017-7525, CVE-2017-15095, CVE-2018-5968, CVE-2018-7489, CVE-2018-1000873, CVE-2019-12086 reported for FasterXML jackson-databind by implementing a whitelist approach that will mitigate these vulnerabilities and future ones alike.
<p>Publish Date: 2019-10-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10202>CVE-2019-10202</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://access.redhat.com/errata/RHSA-2019:2938">https://access.redhat.com/errata/RHSA-2019:2938</a></p>
<p>Release Date: 2019-10-01</p>
<p>Fix Resolution: JBoss Enterprise Application Platform - 7.2.4;com.fasterxml.jackson.core:jackson-databind:2.9.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-10202 (High) detected in jackson-databind-2.9.6.jar - ## CVE-2019-10202 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: concourse-ci-cd/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-webflux-2.0.4.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.0.4.RELEASE.jar
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/azmathasan92/concourse-ci-cd/commits/25189b3c991f7766c09157948e0bc21f27ada4f9">25189b3c991f7766c09157948e0bc21f27ada4f9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A series of deserialization vulnerabilities have been discovered in Codehaus 1.9.x implemented in EAP 7. This CVE fixes CVE-2017-17485, CVE-2017-7525, CVE-2017-15095, CVE-2018-5968, CVE-2018-7489, CVE-2018-1000873, CVE-2019-12086 reported for FasterXML jackson-databind by implementing a whitelist approach that will mitigate these vulnerabilities and future ones alike.
<p>Publish Date: 2019-10-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10202>CVE-2019-10202</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://access.redhat.com/errata/RHSA-2019:2938">https://access.redhat.com/errata/RHSA-2019:2938</a></p>
<p>Release Date: 2019-10-01</p>
<p>Fix Resolution: JBoss Enterprise Application Platform - 7.2.4;com.fasterxml.jackson.core:jackson-databind:2.9.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file concourse ci cd pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter webflux release jar root library spring boot starter json release jar x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details a series of deserialization vulnerabilities have been discovered in codehaus x implemented in eap this cve fixes cve cve cve cve cve cve cve reported for fasterxml jackson databind by implementing a whitelist approach that will mitigate these vulnerabilities and future ones alike publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jboss enterprise application platform com fasterxml jackson core jackson databind step up your open source security game with whitesource
| 0
|
22,992
| 3,735,418,364
|
IssuesEvent
|
2016-03-08 12:01:29
|
CocoaPods/CocoaPods
|
https://api.github.com/repos/CocoaPods/CocoaPods
|
closed
|
Get `write_plist' for Xcodeproj:Module when using acknowledgments plugin
|
d1:easy s4:awaiting validation t2:defect
|
### Command
```
/usr/local/bin/pod install
```
### Report
* What did you do?
Tried to add acknowledgments plugin
* What did you expect to happen?
* What happened instead?
Command aborted with ``NoMethodError - undefined method `write_plist' for Xcodeproj:Module``
### Stack
```
CocoaPods : 1.0.0.beta.4
Ruby : ruby 2.0.0p645 (2015-04-13 revision 50299) [universal.x86_64-darwin15]
RubyGems : 2.0.14
Host : Mac OS X 10.11.3 (15D21)
Xcode : 7.2.1 (7C1002)
Git : git version 2.5.4 (Apple Git-61)
Ruby lib dir : /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib
Repositories : master - https://github.com/CocoaPods/Specs.git @ 37c242772e47165074deea501be14254c7e640a6
```
### Plugins
```
cocoapods-acknowledgements : 1.0.0
cocoapods-deintegrate : 1.0.0.beta.1
cocoapods-plugins : 1.0.0.beta.1
cocoapods-search : 1.0.0.beta.1
cocoapods-stats : 1.0.0.beta.3
cocoapods-trunk : 1.0.0.beta.2
cocoapods-try : 1.0.0.beta.2
```
### Podfile
```ruby
platform :ios, '7.1'
plugin 'cocoapods-acknowledgements', :settings_bundle => true
inhibit_all_warnings!
# networking
pod 'AFNetworkActivityLogger'
pod 'AFNetworking', '~> 2.5'
# core
pod 'BlocksKit'
pod 'FXNotifications', '~> 1.1'
pod 'UIDeviceIdentifier', '~> 1.0.1'
pod 'MAKVONotificationCenter', '~> 0.0.2'
pod 'UICKeyChainStore', '~> 2.1'
pod 'Mantle', '~> 1.5'
pod 'libextobjc', '~> 0.4'
pod 'GVUserDefaults', '~> 1.0.0'
pod 'BlocksKit'
pod 'DTTJailbreakDetection'
# ui
pod 'PSTAlertController', '~> 1.0' # needed for iOS7 support
pod 'OpenInGoogleMaps'
pod 'UIViewController+KeyboardAnimation'
pod 'UIViewPlusPosition'
pod 'JGProgressHUD', '~> 1.3'
pod 'ColorUtils', '~> 1.1'
pod 'UIImage-Resize', '~> 1.0'
pod 'SDWebImage', '~> 3.7.0'
pod 'IRLSize'
target 'XXXXX Internal' do
pod 'OHHTTPStubs', '~> 4.7.0'
end
target 'XXXXX Unit Tests' do
inherit! :search_paths
pod 'OHHTTPStubs', '~> 4.7.0'
pod 'Expecta', :inhibit_warnings => true
end
target 'XXXXX Health Tests' do
inherit! :search_paths
pod 'Expecta', :inhibit_warnings => true
end
```
### Error
```
NoMethodError - undefined method `write_plist' for Xcodeproj:Module
/Library/Ruby/Gems/2.0.0/gems/cocoapods-acknowledgements-1.0.0/lib/cocoapods_acknowledgements.rb:6:in `save_metadata'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-acknowledgements-1.0.0/lib/cocoapods_acknowledgements.rb:64:in `block (4 levels) in <module:CocoaPodsAcknowledgements>'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-acknowledgements-1.0.0/lib/cocoapods_acknowledgements.rb:56:in `each'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-acknowledgements-1.0.0/lib/cocoapods_acknowledgements.rb:56:in `block (3 levels) in <module:CocoaPodsAcknowledgements>'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-acknowledgements-1.0.0/lib/cocoapods_acknowledgements.rb:53:in `each'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-acknowledgements-1.0.0/lib/cocoapods_acknowledgements.rb:53:in `block (2 levels) in <module:CocoaPodsAcknowledgements>'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/user_interface.rb:63:in `section'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-acknowledgements-1.0.0/lib/cocoapods_acknowledgements.rb:46:in `block in <module:CocoaPodsAcknowledgements>'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/hooks_manager.rb:109:in `call'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/hooks_manager.rb:109:in `block (3 levels) in run'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/user_interface.rb:144:in `message'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/hooks_manager.rb:103:in `block (2 levels) in run'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/hooks_manager.rb:101:in `each'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/hooks_manager.rb:101:in `block in run'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/user_interface.rb:144:in `message'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/hooks_manager.rb:100:in `run'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/installer.rb:466:in `run_plugins_post_install_hooks'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/installer.rb:447:in `perform_post_install_actions'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/installer.rb:116:in `install!'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/command/project.rb:67:in `run_install_with_update'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/command/project.rb:97:in `run'
/Library/Ruby/Gems/2.0.0/gems/claide-1.0.0.beta.1/lib/claide/command.rb:312:in `run'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/command.rb:48:in `run'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/bin/pod:44:in `<top (required)>'
/usr/local/bin/pod:23:in `load'
/usr/local/bin/pod:23:in `<main>'
```
|
1.0
|
Get `write_plist' for Xcodeproj:Module when using acknowledgments plugin - ### Command
```
/usr/local/bin/pod install
```
### Report
* What did you do?
Tried to add acknowledgments plugin
* What did you expect to happen?
* What happened instead?
Command aborted with ``NoMethodError - undefined method `write_plist' for Xcodeproj:Module``
### Stack
```
CocoaPods : 1.0.0.beta.4
Ruby : ruby 2.0.0p645 (2015-04-13 revision 50299) [universal.x86_64-darwin15]
RubyGems : 2.0.14
Host : Mac OS X 10.11.3 (15D21)
Xcode : 7.2.1 (7C1002)
Git : git version 2.5.4 (Apple Git-61)
Ruby lib dir : /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib
Repositories : master - https://github.com/CocoaPods/Specs.git @ 37c242772e47165074deea501be14254c7e640a6
```
### Plugins
```
cocoapods-acknowledgements : 1.0.0
cocoapods-deintegrate : 1.0.0.beta.1
cocoapods-plugins : 1.0.0.beta.1
cocoapods-search : 1.0.0.beta.1
cocoapods-stats : 1.0.0.beta.3
cocoapods-trunk : 1.0.0.beta.2
cocoapods-try : 1.0.0.beta.2
```
### Podfile
```ruby
platform :ios, '7.1'
plugin 'cocoapods-acknowledgements', :settings_bundle => true
inhibit_all_warnings!
# networking
pod 'AFNetworkActivityLogger'
pod 'AFNetworking', '~> 2.5'
# core
pod 'BlocksKit'
pod 'FXNotifications', '~> 1.1'
pod 'UIDeviceIdentifier', '~> 1.0.1'
pod 'MAKVONotificationCenter', '~> 0.0.2'
pod 'UICKeyChainStore', '~> 2.1'
pod 'Mantle', '~> 1.5'
pod 'libextobjc', '~> 0.4'
pod 'GVUserDefaults', '~> 1.0.0'
pod 'BlocksKit'
pod 'DTTJailbreakDetection'
# ui
pod 'PSTAlertController', '~> 1.0' # needed for iOS7 support
pod 'OpenInGoogleMaps'
pod 'UIViewController+KeyboardAnimation'
pod 'UIViewPlusPosition'
pod 'JGProgressHUD', '~> 1.3'
pod 'ColorUtils', '~> 1.1'
pod 'UIImage-Resize', '~> 1.0'
pod 'SDWebImage', '~> 3.7.0'
pod 'IRLSize'
target 'XXXXX Internal' do
pod 'OHHTTPStubs', '~> 4.7.0'
end
target 'XXXXX Unit Tests' do
inherit! :search_paths
pod 'OHHTTPStubs', '~> 4.7.0'
pod 'Expecta', :inhibit_warnings => true
end
target 'XXXXX Health Tests' do
inherit! :search_paths
pod 'Expecta', :inhibit_warnings => true
end
```
### Error
```
NoMethodError - undefined method `write_plist' for Xcodeproj:Module
/Library/Ruby/Gems/2.0.0/gems/cocoapods-acknowledgements-1.0.0/lib/cocoapods_acknowledgements.rb:6:in `save_metadata'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-acknowledgements-1.0.0/lib/cocoapods_acknowledgements.rb:64:in `block (4 levels) in <module:CocoaPodsAcknowledgements>'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-acknowledgements-1.0.0/lib/cocoapods_acknowledgements.rb:56:in `each'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-acknowledgements-1.0.0/lib/cocoapods_acknowledgements.rb:56:in `block (3 levels) in <module:CocoaPodsAcknowledgements>'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-acknowledgements-1.0.0/lib/cocoapods_acknowledgements.rb:53:in `each'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-acknowledgements-1.0.0/lib/cocoapods_acknowledgements.rb:53:in `block (2 levels) in <module:CocoaPodsAcknowledgements>'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/user_interface.rb:63:in `section'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-acknowledgements-1.0.0/lib/cocoapods_acknowledgements.rb:46:in `block in <module:CocoaPodsAcknowledgements>'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/hooks_manager.rb:109:in `call'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/hooks_manager.rb:109:in `block (3 levels) in run'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/user_interface.rb:144:in `message'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/hooks_manager.rb:103:in `block (2 levels) in run'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/hooks_manager.rb:101:in `each'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/hooks_manager.rb:101:in `block in run'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/user_interface.rb:144:in `message'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/hooks_manager.rb:100:in `run'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/installer.rb:466:in `run_plugins_post_install_hooks'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/installer.rb:447:in `perform_post_install_actions'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/installer.rb:116:in `install!'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/command/project.rb:67:in `run_install_with_update'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/command/project.rb:97:in `run'
/Library/Ruby/Gems/2.0.0/gems/claide-1.0.0.beta.1/lib/claide/command.rb:312:in `run'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/lib/cocoapods/command.rb:48:in `run'
/Library/Ruby/Gems/2.0.0/gems/cocoapods-1.0.0.beta.4/bin/pod:44:in `<top (required)>'
/usr/local/bin/pod:23:in `load'
/usr/local/bin/pod:23:in `<main>'
```
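The trace shows `cocoapods-acknowledgements` 1.0.0 calling `Xcodeproj.write_plist`, a module-level method that newer Xcodeproj releases appear to have dropped (the assumption here is that it was superseded by `Xcodeproj::Plist.write_to_path`). Until the plugin is updated, a forwarding shim that restores the old entry point is one possible workaround. The sketch below is self-contained — the `Xcodeproj::Plist` stand-in only mimics the real gem so the pattern can be demonstrated; verify the actual method names against the Xcodeproj version you have installed.

```ruby
# Hypothetical compatibility shim: re-add the removed module-level
# Xcodeproj.write_plist by forwarding to the newer Plist API.
# Method names are assumptions inferred from the stack trace.

# Stand-in for the real Xcodeproj gem so this example runs on its own.
module Xcodeproj
  module Plist
    def self.write_to_path(hash, path)
      # The real gem serializes a property list; here we just record the call.
      File.write(path, hash.inspect)
    end
  end

  # Forwarding shim: restores the old entry point the plugin expects.
  def self.write_plist(hash, path)
    Plist.write_to_path(hash, path)
  end
end

# With the shim in place, the plugin's call no longer raises NoMethodError.
Xcodeproj.write_plist({ "Title" => "Acknowledgements" }, "out.plist")
puts File.read("out.plist")
```

In a real project the shim (minus the stand-in module) could be loaded from the Podfile before the plugin's post-install hook runs, though upgrading to a fixed plugin release is the cleaner path.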