Dataset columns (dtype, value range / class count):

| Column | Dtype | Range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 to 19 |
| repo | string | length 5 to 112 |
| repo_url | string | length 34 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 757 |
| labels | string | length 4 to 664 |
| body | string | length 3 to 261k |
| index | string | 10 classes |
| text_combine | string | length 96 to 261k |
| label | string | 2 classes |
| text | string | length 96 to 232k |
| binary_label | int64 | 0 to 1 |
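The schema above pairs a two-class string `label` column (`defect` / `non_defect`) with an integer `binary_label` (1 / 0), as the records below confirm. A minimal pandas sketch of that mapping, assuming `binary_label` is derived directly from `label` (the dataset's actual build code is not shown here):

```python
# Assumed reconstruction: binary_label = 1 for "defect", 0 for "non_defect".
import pandas as pd

def add_binary_label(df: pd.DataFrame) -> pd.DataFrame:
    """Derive an int64 binary_label column from the label column."""
    out = df.copy()
    out["binary_label"] = (out["label"] == "defect").astype("int64")
    return out

rows = pd.DataFrame({"label": ["defect", "non_defect", "defect"]})
print(add_binary_label(rows)["binary_label"].tolist())  # -> [1, 0, 1]
```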
---
Unnamed: 0: 988
id: 2,594,410,097
type: IssuesEvent
created_at: 2015-02-20 03:00:26
repo: BALL-Project/ball
repo_url: https://api.github.com/repos/BALL-Project/ball
action: closed
title: Incomplete Hydrogen generation
labels: C: BALL Core P: major R: duplicate T: defect
body:
**Reported by azapp on 23 Apr 41665759 02:21 UTC**
Using AddHydrogenProcessor, some hydrogens aren't generated.
2Ferrocene.mol: one molecule is missing hydrogens after using AddHydrogen.
2MSG.mol: both molecules are missing 2 hydrogens.
index: 1.0
text_combine: title + " - " + body (verbatim repeat of the two fields above)
label: defect
text: incomplete hydrogen generation reported by azapp on apr utc using addhydrogenprocessor some hydrogens aren t generated mol one molecule misses hydrogens after using addhydrogen mol both molecules miss hydrogens
binary_label: 1
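Each record's `text` field looks like a cleaned version of `text_combine`: lowercased, URLs dropped, and only purely alphabetic tokens kept (note how `2Ferrocene.mol` survives only as `mol`). The sketch below is one plausible cleaning pipeline that reproduces the samples shown; the dataset's real preprocessing may differ in details:

```python
# Hypothetical reconstruction of the `text` cleaning step: drop URLs,
# lowercase, split on non-alphanumeric characters, and keep only tokens
# made purely of letters (tokens containing digits are discarded whole).
import re

def clean_text(raw: str) -> str:
    no_urls = re.sub(r"https?://\S+", " ", raw)           # drop URLs first
    tokens = re.split(r"[^0-9A-Za-z]+", no_urls.lower())  # split on punctuation
    words = [t for t in tokens if t.isalpha()]            # keep alphabetic tokens
    return " ".join(words)

print(clean_text("2Ferrocene.mol : one molecule misses hydrogens after using AddHydrogen"))
# -> "mol one molecule misses hydrogens after using addhydrogen"
```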
---
Unnamed: 0: 204,089
id: 7,083,457,503
type: IssuesEvent
created_at: 2018-01-11 00:27:07
repo: GoogleCloudPlatform/google-cloud-java
repo_url: https://api.github.com/repos/GoogleCloudPlatform/google-cloud-java
action: closed
title: BigQuery: Create new Table using schema from json file
labels: api: bigquery priority: p2 type: feature request
body:
I have an App Engine Standard Maven project, where a procedure needs to create a table on BigQuery.
```xml
<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-bigquery</artifactId>
  <version>0.32.0-beta</version>
</dependency>
```
I'm using the sample code provided here: https://github.com/GoogleCloudPlatform/google-cloud-java/tree/master/google-cloud-bigquery#creating-a-table
```java
TableId tableId = TableId.of(datasetId, "my_table_id");
// Table field definition
Field stringField = Field.of("StringField", LegacySQLTypeName.STRING);
// Table schema definition
Schema schema = Schema.of(stringField);
// Create a table
StandardTableDefinition tableDefinition = StandardTableDefinition.of(schema);
Table createdTable = bigquery.create(TableInfo.of(tableId, tableDefinition));
```
The difference between my need and this example is that I have a JSON file which contains the schema. Here is an example:
```json
[
  {
    "mode": "REQUIRED",
    "name": "identifier",
    "type": "STRING"
  },
  {
    "mode": "REQUIRED",
    "name": "code",
    "type": "STRING"
  },
  {
    "mode": "REQUIRED",
    "name": "description",
    "type": "STRING"
  }
]
```
I'm unable to find an existing method which loads the table schema from a JSON file, instead of creating it manually from the Schema/FieldList/Field classes. Something like
```java
Schema schema = Schema.parseJson(jsonSchema);
```
**Is there a way to load the JSON file, or do I need to build a custom parser?**
While waiting for a reply, I wrote a custom deserializer based on the Gson library. It works, but if there is an already built-in method, I'll be more than happy to use it.
```java
public static void main(String[] args) {
    // TODO Load schema from file
    String jsonSchema = "[{\"mode\":\"REQUIRED\",\"name\":\"identifier\",\"type\":\"STRING\"},{\"mode\":\"REQUIRED\",\"name\":\"code\",\"type\":\"STRING\"},{\"mode\":\"REQUIRED\",\"name\":\"description\",\"type\":\"STRING\"}]";
    // The JSON schema uses "fields";
    // com.google.cloud.bigquery.Field uses "subFields"
    // FIXME Unable to use @SerializedName policy
    jsonSchema = jsonSchema.replace("\"fields\"", "\"subFields\"");
    // Deserialize schema with custom Gson
    Field[] fields = getGson().fromJson(jsonSchema, Field[].class);
    Schema schema = Schema.of(fields);
    System.out.println(schema.toString());
}

public static Gson getGson() {
    JsonDeserializer<LegacySQLTypeName> typeDeserializer = (jsonElement, type, deserializationContext) ->
        LegacySQLTypeName.valueOf(jsonElement.getAsString());
    JsonDeserializer<FieldList> subFieldsDeserializer = (jsonElement, type, deserializationContext) -> {
        Field[] fields = deserializationContext.deserialize(jsonElement.getAsJsonArray(), Field[].class);
        return FieldList.of(fields);
    };
    return new GsonBuilder()
        .registerTypeAdapter(LegacySQLTypeName.class, typeDeserializer)
        .registerTypeAdapter(FieldList.class, subFieldsDeserializer)
        .create();
}
```
index: 1.0
text_combine: title + " - " + body (verbatim repeat of the two fields above)
label: non_defect
text: bigquery create new table using schema from json file i have an app engine standard maven project where a procedure need to create a table on bigquery com google cloud google cloud bigquery beta i m using the sample code provided here tableid tableid tableid of datasetid my table id table field definition field stringfield field of stringfield legacysqltypename string table schema definition schema schema schema of stringfield create a table standardtabledefinition tabledefinition standardtabledefinition of schema table createdtable bigquery create tableinfo of tableid tabledefinition the difference with my need compared to this example is that i have a json file which contains the schema here is an example mode required name identifier type string mode required name code type string mode required name description type string i m unable to find an existing method which load the table schema from a json file instead of creating it manually from schema fieldlist field classes something like schema schema schema parsejson jsonschema is there way to load the json file or do i need to build a custom parser while i m waiting for a reply i wrote a custom deserializer based on gson library it is working but if there is an already built in method i ll be more than happy to use it public static void main string args todo load schema from file string jsonschema json schema uses fields com google cloud bigquery field uses subfields fixme unable to use serializedname policy jsonschema jsonschema replace fields subfields deserialize schema with custom gson field fields getgson fromjson jsonschema field class schema schema schema of fields system out println schema tostring public static gson getgson jsondeserializer typedeserializer jsonelement type deserializationcontext return legacysqltypename valueof jsonelement getasstring jsondeserializer subfieldsdeserializer jsonelement type deserializationcontext field fields deserializationcontext deserialize jsonelement getasjsonarray field class return fieldlist of fields return new gsonbuilder registertypeadapter legacysqltypename class typedeserializer registertypeadapter fieldlist class subfieldsdeserializer create
binary_label: 0
---
Unnamed: 0: 78,179
id: 7,622,557,536
type: IssuesEvent
created_at: 2018-05-03 12:37:06
repo: italia/spid
repo_url: https://api.github.com/repos/italia/spid
action: closed
title: Metadata verification request for the Comune di Negrar (original: "Richiesta verifica metadati Comune di Negrar")
labels: metadata nuovo md test
body (translated from Italian):
Good morning,
We request verification of the metadata on behalf of the Comune di Negrar.
https://www.comunenegrar.it/c023052/spid-metadata.xml
The service in question is built with eGovernment Halley, which has already been validated for other clients.
index: 1.0
text_combine: title + " - " + body (verbatim repeat of the two fields above)
label: non_defect
text: richiesta verifica metadati comune di negrar buongiorno si richiede la verifica dei metadati per conto del comune di negrar il servizio in oggetto è realizzato con egovernment halley già validato in precedenza per altri clienti
binary_label: 0
---
Unnamed: 0: 19,705
id: 3,248,216,993
type: IssuesEvent
created_at: 2015-10-17 04:00:21
repo: jimradford/superputty
repo_url: https://api.github.com/repos/jimradford/superputty
action: closed
title: space in password
labels: auto-migrated Priority-Medium Type-Defect
body:
```
What steps will reproduce the problem?
1. start file transfer (right click -> file transfer)
2. type login and password

What is the expected output? What do you see instead?
In status bar: Invalid argument send to pscp ......

What version of the product are you using? On what operating system?
1.4.0.4, windows 7 x64

Please provide any additional information below.
```
Original issue reported on code.google.com by `baba...@gmail.com` on 16 Feb 2014 at 9:48
index: 1.0
text_combine: title + " - " + body (verbatim repeat of the two fields above)
label: defect
text: space in password what steps will reproduce the problem start file transfer right click file transfer type login and password what is the expected output what do you see instead in status bar invalid argument send to pscp what version of the product are you using on what operating system windows please provide any additional information below original issue reported on code google com by baba gmail com on feb at
binary_label: 1
---
Unnamed: 0: 347,013
id: 10,423,161,429
type: IssuesEvent
created_at: 2019-09-16 10:43:36
repo: Juniper/ansible-junos-stdlib
repo_url: https://api.github.com/repos/Juniper/ansible-junos-stdlib
action: closed
title: CI/CD failure as no VMs are deployed
labels: Priority: High Status: On Hold Type: Bug
body:
<!--- Verify first that your issue/request is not already reported on GitHub. -->

Issue Type
------
Test Failure

Module Name
------
All CI/CD tests are failing as we are unable to create the VM.

Summary
------
<!--- Explain the problem briefly -->
Travis tests are broken as it is unable to deploy the Junos VM. We use Ravello to deploy the VM, but since it was bought by Oracle we don't have the access to deploy the VM.
Replace Travis tests with an internal Jenkins pipeline.
index: 1.0
text_combine: title + " - " + body (verbatim repeat of the two fields above)
label: non_defect
text: ci cd failure as no vms are deployed verify first that your issue request is not already reported on github issue type test failure module name all ci cd tests are failing as we are unable to create the vm summary travis tests are broken as it is unable to deploy the junos vm we use ravello to deploy the vm but since it was bought by oracle we don t have the access to deploy the vm replace travis tests with an internal jenkins pipeline
binary_label: 0
---
Unnamed: 0: 104,520
id: 16,616,854,134
type: IssuesEvent
created_at: 2021-06-02 17:51:06
repo: Dima2021/t-vault
repo_url: https://api.github.com/repos/Dima2021/t-vault
action: opened
title: WS-2015-0033 (High) detected in uglify-js-2.2.5.tgz
labels: security vulnerability
body:
## WS-2015-0033 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>uglify-js-2.2.5.tgz</b></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-2.2.5.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-2.2.5.tgz</a></p>
<p>Path to dependency file: t-vault/tvaultui/package.json</p>
<p>Path to vulnerable library: t-vault/tvaultui/node_modules/transformers/node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- jade-1.11.0.tgz (Root Library)
  - transformers-2.1.0.tgz
    - :x: **uglify-js-2.2.5.tgz** (Vulnerable Library)

<p>Found in HEAD commit: <a href="https://github.com/Dima2021/t-vault/commit/259885b704776a5554c5d008b51b19c9b0ea9fd5">259885b704776a5554c5d008b51b19c9b0ea9fd5</a></p>
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
uglifier incorrectly handles non-boolean comparisons during minification. The upstream library for the Ruby uglifier gem, UglifyJS, is affected by a vulnerability that allows a specially crafted Javascript file to have altered functionality after minification. This bug, found in UglifyJS versions 2.4.23 and earlier, was demonstrated to allow potentially malicious code to be hidden within secure code, and activated by the minification process.
<p>Publish Date: 2015-07-22
<p>URL: <a href=https://github.com/mishoo/UglifyJS2/issues/751>WS-2015-0033</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Changed
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: Low
  - Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://hakiri.io/technologies/uglifier/issues/279911d9720338">https://hakiri.io/technologies/uglifier/issues/279911d9720338</a></p>
<p>Release Date: 2020-06-07</p>
<p>Fix Resolution: Uglifier - 2.7.2;uglify-js - v2.4.24</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"uglify-js","packageVersion":"2.2.5","packageFilePaths":["/tvaultui/package.json"],"isTransitiveDependency":true,"dependencyTree":"jade:1.11.0;transformers:2.1.0;uglify-js:2.2.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"Uglifier - 2.7.2;uglify-js - v2.4.24"}],"baseBranches":["dev"],"vulnerabilityIdentifier":"WS-2015-0033","vulnerabilityDetails":"uglifier incorrectly handles non-boolean comparisons during minification.The upstream library for the Ruby uglifier gem, UglifyJS, is affected by a vulnerability that allows a specially crafted Javascript file to have altered functionality after minification. This bug, found in UglifyJS versions 2.4.23 and earlier, was demonstrated to allow potentially malicious code to be hidden within secure code, and activated by the minification process.","vulnerabilityUrl":"https://github.com/mishoo/UglifyJS2/issues/751","cvss3Severity":"high","cvss3Score":"7.2","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
index: True
text_combine: title + " - " + body (verbatim repeat of the two fields above)
label: non_defect
text: ws high detected in uglify js tgz ws high severity vulnerability vulnerable library uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href path to dependency file t vault tvaultui package json path to vulnerable library t vault tvaultui node modules transformers node modules uglify js package json dependency hierarchy jade tgz root library transformers tgz x uglify js tgz vulnerable library found in head commit a href found in base branch dev vulnerability details uglifier incorrectly handles non boolean comparisons during minification the upstream library for the ruby uglifier gem uglifyjs is affected by a vulnerability that allows a specially crafted javascript file to have altered functionality after minification this bug found in uglifyjs versions and earlier was demonstrated to allow potentially malicious code to be hidden within secure code and activated by the minification process publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution uglifier uglify js isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree jade transformers uglify js isminimumfixversionavailable true minimumfixversion uglifier uglify js basebranches vulnerabilityidentifier ws vulnerabilitydetails uglifier incorrectly handles non boolean comparisons during minification the upstream library for the ruby uglifier gem uglifyjs is affected by a vulnerability that allows a specially crafted javascript file to have altered functionality after minification this bug found in uglifyjs versions and earlier was demonstrated to allow potentially malicious code to be hidden within secure code and activated by the minification process vulnerabilityurl
binary_label: 0
---
Unnamed: 0: 38,328
id: 8,759,960,889
type: IssuesEvent
created_at: 2018-12-15 22:08:09
repo: supertuxkart/stk-code
repo_url: https://api.github.com/repos/supertuxkart/stk-code
action: closed
title: Skidding responsiveness issues
labels: C: Race Events P2: major T: defect
body:
This issue is related to the fix of #3168. Simply reverting that fix and getting back the old issue would be a poor outcome, so I want a proper solution which really eliminates this series of issues.
In the majority of situations, skidding will work fine, but when switching from one direction to another, issues still arise.
- If the check to not skid in the wrong direction is triggered, STK will wait until the system sends another key input to finally trigger skidding. Instead of skidding at the earliest possible time, it will have a significant delay which may mess badly with trajectories. To test the delay, just press a key in any text editor/box. The delay you will see between the first appearance of the pressed symbol and its repetition (ddddd...) corresponds to the skidding delay.
- It is also possible to not get an expected skid at all. The easiest way to reproduce it is to quickly switch between left/right. The pattern to make the issue appear (left and right can be exchanged): left/right/skid while the kart still overall steers left/left/right/left/right. The skid key may remain pressed as long as wanted; no skidding will be achieved afterwards, even when doing "normal" controls, until it stops being pressed and is pressed again.
index: 1.0
text_combine: title + " - " + body (verbatim repeat of the two fields above)
label: defect
text: skidding responsiveness issues this issue is related to the fix of simply reverting that fix and getting back the old issue would be a poor outcome so i want a proper solution which really eliminates this serie of issue in the majority of situations skidding will work fine but when switching from one direction to another issues still arise if the check to not skid in the wrong direction is triggered stk will wait until the system sends another key input to finally trigger skidding instead of skidding at the earliest possible time it will have a significant delay which may mess badly with trajectories to test the delay just press a key in any text editor box the delay you will see between the first apparition of the pressed symbol and its repetition ddddd corresponds to the skidding delay it is also possible to not get an expected skid at all the easiest way to reproduce it is to quickly switch between left right the pattern to make the issue appear left and right can be exchanged left right skid while the kart still overall steers left left right left right the skid key may remain pressed as long as wanted no skidding will be achieved afterwards even when doing normal controls until it stops being pressed and is pressed again
binary_label: 1
---
Unnamed: 0: 76,448
id: 26,432,933,014
type: IssuesEvent
created_at: 2023-01-15 02:20:57
repo: primefaces/primefaces
repo_url: https://api.github.com/repos/primefaces/primefaces
action: closed
title: TreeTable: CellEdit event not working for dynamic columns
labels: :lady_beetle: defect Stale
body:
I retested https://github.com/primefaces/primefaces/issues/4724 successfully with 8.0-SNAPSHOT for `p:dataTable`. However, `p:treeTable` does not work. Here is the sample for p:treeTable:
## Bean3.java
``` java
@SessionScoped @Named
public class Bean3 implements Serializable {
    private static final long serialVersionUID = 1L;

    public class Test implements Serializable { // Content class of TreeNodes
        private static final long serialVersionUID = 1L;
        public String value;
        public String getValue() { return value; }
        public void setValue(String value) { this.value = value; }
        public int index;
        public int getIndex() { return index; }
        public void setIndex(int index) { this.index = index; }
        public Test(String value, int index) { this.value = value; this.index = index; }
    }

    TreeNode root;
    public TreeNode getRoot() { return root; }
    public void setRoot(TreeNode root) { this.root = root; }

    public void createTreeNodes() {
        root = new DefaultTreeNode(new Test("Root", 0), null);
        new DefaultTreeNode(new Test("One", 1), root);
        new DefaultTreeNode(new Test("Two", 2), root);
    }

    @PostConstruct public void init() { createTreeNodes(); }

    public void onCellEdit(CellEditEvent<?> e) { info("cellEdit", e); }
    public void onCellEditInit(CellEditEvent<?> e) { info("cellEditInit", e); }
    public void onCellEditCancel(CellEditEvent<?> e) { info("cellEditCancel", e); }

    public static void info(String method, CellEditEvent<?> e) {
        String message = String.format("old=%s, new=%s", e.getOldValue(), e.getNewValue());
        FacesContext.getCurrentInstance().addMessage(null,
            new FacesMessage(FacesMessage.SEVERITY_INFO, method + ":", message));
    }
}
```
## treeTable.xhtml
``` xhtml
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://xmlns.jcp.org/jsf/core" xmlns:p="http://primefaces.org/ui">
  <h:head />
  <h:body>
    <h:form id="form">
      <p:growl id="msgs" showDetail="true"/>
      <p:treeTable value="#{bean3.root}" var="var" editMode="cell" editable="true" tableStyle="width:auto">
        <p:ajax event="cellEditInit" listener="#{bean3.onCellEditInit}" update=":form:msgs"/>
        <p:ajax event="cellEdit" listener="#{bean3.onCellEdit}" update=":form:msgs"/>
        <p:columns var="property" value="#{['value','index']}" headerText="#{property}">
          <p:cellEditor>
            <f:facet name="output"><h:outputText value="#{var[property]}" /></f:facet>
            <f:facet name="input"><p:inputText value="#{var[property]}" /></f:facet>
          </p:cellEditor>
        </p:columns>
      </p:treeTable>
    </h:form>
  </h:body>
</html>
```
## Reproduce
- Click in the top/left cell and change the value. It will not change (bug).
- The message shows `cellEdit: old=One, new=One`, but `new` should be changed.
- Also, cellEditInit is not fired for treeTable (but it is for dataTable).
- However, in the second row, it works.
index: 1.0
text_combine: title + " - " + body (verbatim repeat of the two fields above; record truncated in the source)
- Also, cellEditInit is not fired for treeTable (though it is for dataTable)
- However, in the second row, editing works
|
defect
|
treetable celledit event not working for dynamic columns i retested successfully with snapshot for p datatable however p treetable does not work here is the sample for p treetable java java sessionscoped named public class implements serializable private static final long serialversionuid public class test implements serializable content class of treenodes private static final long serialversionuid public string value public string getvalue return value public void setvalue string value this value value public int index public int getindex return index public void setindex int index this index index public test string value int index this value value this index index treenode root public treenode getroot return root public void setroot treenode root this root root public void createtreenodes root new defaulttreenode new test root null new defaulttreenode new test one root new defaulttreenode new test two root postconstruct public void init createtreenodes public void oncelledit celleditevent e info celledit e public void oncelleditinit celleditevent e info celleditinit e public void oncelleditcancel celleditevent e info celleditcancel e public static void info string method celleditevent e string message string format old s new s e getoldvalue e getnewvalue facescontext getcurrentinstance addmessage null new facesmessage facesmessage severity info method message treetable xhtml xhtml html xmlns xmlns h xmlns f xmlns p reproduce click in the top left cell and change the value it will not change bug the message shows celledit old one new one but new should be changed also celleditinit is not fired for treetable but for datatable hovever in the second row it works
| 1
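Each record above pairs a `title` and `body` with a `text_combine` column that reads "title - body" (compare the TreeTable record's title on one row with its combined text a few rows later). A minimal sketch of how such a column might be produced — the function name and the " - " joining rule are assumptions inferred from the rows, not documented by the dataset:

```python
def combine(title: str, body: str) -> str:
    """Join an issue title and body the way the text_combine column
    appears to: "<title> - <body>". Both parts are kept verbatim."""
    return f"{title} - {body}"


# Example mirroring the shape of the TreeTable record above.
print(combine("TreeTable: CellEdit event not working for dynamic columns",
              "I retested the issue with 8.0-SNAPSHOT."))
```

If the real pipeline differs (e.g. strips markdown first), only this joining step would change; the downstream columns still consume a single combined string.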
|
66,754
| 20,618,638,620
|
IssuesEvent
|
2022-03-07 15:26:09
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
closed
|
[🐛 Bug]: Getting ConnectionFailedException while trying to use driver.close() for firefox browser.
|
I-defect needs-triaging
|
### What happened?
I have a sample code written to test normal firefox browser launch then launching web page then closing the browser using Selenium 4.1.2 with Java. It is failing with ConnectionFailedException in line driver.close()
**I am using Firefox Browser** whereas the **Exception starts with chrome**, not sure why.
### How can we reproduce the issue?
```shell
public class CaptureScreenshotExample
{
public static void main(String[] args) throws IOException
{
Format f = new SimpleDateFormat("dd_MM_yy hh_mm");
String strDate = f.format(new Date());
System.out.println("Current Date = "+strDate);
System.setProperty("webdriver.gecko.driver", "./drivers/geckodriver.exe");
WebDriver driver = new FirefoxDriver();
//Launch the web page
driver.get("https://demo.actitime.com/login.do");
//To take screen shot
TakesScreenshot ts = (TakesScreenshot)driver;
File screenshot = ts.getScreenshotAs(OutputType.FILE);
//Creating an external file
File output_ss = new File("./screenshots/" + driver.getTitle() + "_" + strDate + ".jpg");
//Copying the captured screen shot from source file to destination file
FileHandler.copy(screenshot, output_ss);
driver.close();
}
}
```
### Relevant log output
```shell
JavaScript error: **chrome://remote/content/server/WebSocketHandshake.jsm**, line 117: Error: The handshake request has incorrect Origin header http://localhost:52540
Mar 07, 2022 5:36:38 PM org.openqa.selenium.remote.http.WebSocket$Listener onError
WARNING: Invalid Status code=400 text=Bad Request
java.io.IOException: Invalid Status code=400 text=Bad Request
at org.asynchttpclient.netty.handler.WebSocketHandler.abort(WebSocketHandler.java:92)
at org.asynchttpclient.netty.handler.WebSocketHandler.handleRead(WebSocketHandler.java:118)
at org.asynchttpclient.netty.handler.AsyncHttpClientHandler.channelRead(AsyncHttpClientHandler.java:78)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:327)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:314)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:435)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Exception in thread "main" org.openqa.selenium.remote.http.ConnectionFailedException: Unable to establish websocket connection to http://localhost:52540/devtools/browser/25f0debb-b795-4375-9ba6-83c9dcd9c461
Build info: version: '4.1.2', revision: '9a5a329c5a'
System info: host: '', ip: '', os.name: 'Windows 10', os.arch: '', os.version: '10.0', java.version: '1.8.0_201'
Driver info: driver.version: RemoteWebDriver
at org.openqa.selenium.remote.http.netty.NettyWebSocket.<init>(NettyWebSocket.java:104)
at org.openqa.selenium.remote.http.netty.NettyWebSocket.lambda$create$3(NettyWebSocket.java:137)
at org.openqa.selenium.remote.http.netty.NettyClient.openSocket(NettyClient.java:118)
at org.openqa.selenium.devtools.Connection.<init>(Connection.java:77)
at org.openqa.selenium.firefox.FirefoxDriver.maybeGetDevTools(FirefoxDriver.java:310)
at org.openqa.selenium.remote.RemoteWebDriver.close(RemoteWebDriver.java:440)
at com.practice.tests.CaptureScreenshotExample.main(CaptureScreenshotExample.java:45)
```
### Operating System
Windows 10
### Selenium version
Java 4.1.2
### What are the browser(s) and version(s) where you see this issue?
Firefox V97.0.2
### What are the browser driver(s) and version(s) where you see this issue?
gecko driver v0.30.0
### Are you using Selenium Grid?
No
|
1.0
|
[🐛 Bug]: Getting ConnectionFailedException while trying to use driver.close() for firefox browser. - ### What happened?
I have a sample code written to test normal firefox browser launch then launching web page then closing the browser using Selenium 4.1.2 with Java. It is failing with ConnectionFailedException in line driver.close()
**I am using Firefox Browser** whereas the **Exception starts with chrome**, not sure why.
### How can we reproduce the issue?
```shell
public class CaptureScreenshotExample
{
public static void main(String[] args) throws IOException
{
Format f = new SimpleDateFormat("dd_MM_yy hh_mm");
String strDate = f.format(new Date());
System.out.println("Current Date = "+strDate);
System.setProperty("webdriver.gecko.driver", "./drivers/geckodriver.exe");
WebDriver driver = new FirefoxDriver();
//Launch the web page
driver.get("https://demo.actitime.com/login.do");
//To take screen shot
TakesScreenshot ts = (TakesScreenshot)driver;
File screenshot = ts.getScreenshotAs(OutputType.FILE);
//Creating an external file
File output_ss = new File("./screenshots/" + driver.getTitle() + "_" + strDate + ".jpg");
//Copying the captured screen shot from source file to destination file
FileHandler.copy(screenshot, output_ss);
driver.close();
}
}
```
### Relevant log output
```shell
JavaScript error: **chrome://remote/content/server/WebSocketHandshake.jsm**, line 117: Error: The handshake request has incorrect Origin header http://localhost:52540
Mar 07, 2022 5:36:38 PM org.openqa.selenium.remote.http.WebSocket$Listener onError
WARNING: Invalid Status code=400 text=Bad Request
java.io.IOException: Invalid Status code=400 text=Bad Request
at org.asynchttpclient.netty.handler.WebSocketHandler.abort(WebSocketHandler.java:92)
at org.asynchttpclient.netty.handler.WebSocketHandler.handleRead(WebSocketHandler.java:118)
at org.asynchttpclient.netty.handler.AsyncHttpClientHandler.channelRead(AsyncHttpClientHandler.java:78)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:327)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:314)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:435)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Exception in thread "main" org.openqa.selenium.remote.http.ConnectionFailedException: Unable to establish websocket connection to http://localhost:52540/devtools/browser/25f0debb-b795-4375-9ba6-83c9dcd9c461
Build info: version: '4.1.2', revision: '9a5a329c5a'
System info: host: '', ip: '', os.name: 'Windows 10', os.arch: '', os.version: '10.0', java.version: '1.8.0_201'
Driver info: driver.version: RemoteWebDriver
at org.openqa.selenium.remote.http.netty.NettyWebSocket.<init>(NettyWebSocket.java:104)
at org.openqa.selenium.remote.http.netty.NettyWebSocket.lambda$create$3(NettyWebSocket.java:137)
at org.openqa.selenium.remote.http.netty.NettyClient.openSocket(NettyClient.java:118)
at org.openqa.selenium.devtools.Connection.<init>(Connection.java:77)
at org.openqa.selenium.firefox.FirefoxDriver.maybeGetDevTools(FirefoxDriver.java:310)
at org.openqa.selenium.remote.RemoteWebDriver.close(RemoteWebDriver.java:440)
at com.practice.tests.CaptureScreenshotExample.main(CaptureScreenshotExample.java:45)
```
### Operating System
Windows 10
### Selenium version
Java 4.1.2
### What are the browser(s) and version(s) where you see this issue?
Firefox V97.0.2
### What are the browser driver(s) and version(s) where you see this issue?
gecko driver v0.30.0
### Are you using Selenium Grid?
No
|
defect
|
getting connectionfailedexception while trying to use driver close for firefox browser what happened i have a sample code written to test normal firefox browser launch then launching web page then closing the browser using selenium with java it is failing with connectionfailedexception in line driver close i am using firefox browser whereas the exception starts with chrome not sure why how can we reproduce the issue shell public class capturescreenshotexample public static void main string args throws ioexception format f new simpledateformat dd mm yy hh mm string strdate f format new date system out println current date strdate system setproperty webdriver gecko driver drivers geckodriver exe webdriver driver new firefoxdriver launch the web page driver get to take screen shot takesscreenshot ts takesscreenshot driver file screenshot ts getscreenshotas outputtype file creating an external file file output ss new file screenshots driver gettitle strdate jpg copying the captured screen shot from source file to destination file filehandler copy screenshot output ss driver close relevant log output shell javascript error chrome remote content server websockethandshake jsm line error the handshake request has incorrect origin header mar pm org openqa selenium remote http websocket listener onerror warning invalid status code text bad request java io ioexception invalid status code text bad request at org asynchttpclient netty handler websockethandler abort websockethandler java at org asynchttpclient netty handler websockethandler handleread websockethandler java at org asynchttpclient netty handler asynchttpclienthandler channelread asynchttpclienthandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io 
netty channel combinedchannelduplexhandler delegatingchannelhandlercontext firechannelread combinedchannelduplexhandler java at io netty handler codec bytetomessagedecoder firechannelread bytetomessagedecoder java at io netty handler codec bytetomessagedecoder firechannelread bytetomessagedecoder java at io netty handler codec bytetomessagedecoder calldecode bytetomessagedecoder java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel combinedchannelduplexhandler channelread combinedchannelduplexhandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java at io netty channel nio nioeventloop processselectedkey nioeventloop java at io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java at io netty channel nio nioeventloop processselectedkeys nioeventloop java at io netty channel nio nioeventloop run nioeventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util internal threadexecutormap run threadexecutormap java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java exception in thread main org openqa selenium remote http 
connectionfailedexception unable to establish websocket connection to build info version revision system info host ip os name windows os arch os version java version driver info driver version remotewebdriver at org openqa selenium remote http netty nettywebsocket nettywebsocket java at org openqa selenium remote http netty nettywebsocket lambda create nettywebsocket java at org openqa selenium remote http netty nettyclient opensocket nettyclient java at org openqa selenium devtools connection connection java at org openqa selenium firefox firefoxdriver maybegetdevtools firefoxdriver java at org openqa selenium remote remotewebdriver close remotewebdriver java at com practice tests capturescreenshotexample main capturescreenshotexample java operating system windows selenium version java what are the browser s and version s where you see this issue firefox what are the browser driver s and version s where you see this issue gecko driver are you using selenium grid no
| 1
|
20,047
| 3,293,393,107
|
IssuesEvent
|
2015-10-30 18:43:23
|
mehlon/acme-sac
|
https://api.github.com/repos/mehlon/acme-sac
|
closed
|
host window resize is a "disgusting kludge"
|
auto-migrated Priority-Medium Type-Defect
|
```
host window resize is a "disgusting kludge"
devwmsz.c is copy and paste of devpointer.c with lots of redundant code.
screen image isn't actually resized. screen.image doesn't seem to know the
maximum size of available display device.
take a look at the 9vx implementation. re-implement host window resize in
acme-sac.
```
Original issue reported on code.google.com by `caerw...@gmail.com` on 17 Jul 2008 at 5:06
|
1.0
|
host window resize is a "disgusting kludge" - ```
host window resize is a "disgusting kludge"
devwmsz.c is copy and paste of devpointer.c with lots of redundant code.
screen image isn't actually resized. screen.image doesn't seem to know the
maximum size of available display device.
take a look at the 9vx implementation. re-implement host window resize in
acme-sac.
```
Original issue reported on code.google.com by `caerw...@gmail.com` on 17 Jul 2008 at 5:06
|
defect
|
host window resize is a disgusting kludge host window resize is a disgusting kludge devwmsz c is copy and paste of devpointer c with lots of redundant code screen image isn t actually resized screen image doesn t seem to know the maximum size of available display device take a look at the implementation re implement host window resize in acme sac original issue reported on code google com by caerw gmail com on jul at
| 1
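Each record also carries a lowercased `text` column with punctuation, digits, and underscores stripped (the Selenium record's "Selenium 4.1.2 with Java" becomes "selenium with java", while accented letters such as in the French record survive). A sketch of one plausible normalization matching those observations — the exact rules used by the dataset are an assumption:

```python
import re


def normalize(raw: str) -> str:
    """Approximate the dataset's `text` column: lowercase, drop
    punctuation, digits, and underscores, then collapse whitespace.
    Unicode letters (e.g. accented characters) are kept, matching the
    French record's normalized row."""
    lowered = raw.lower()
    no_punct = re.sub(r"[^\w\s]", " ", lowered)    # drop punctuation
    no_digits = re.sub(r"[\d_]+", " ", no_punct)   # drop digits/underscores
    return " ".join(no_digits.split())


print(normalize("old=One, new=One"))  # old one new one
```

This is a reading of the visible rows, not the dataset's documented preprocessing; a real pipeline might additionally drop stop words or very short tokens.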
|
32,172
| 6,731,545,047
|
IssuesEvent
|
2017-10-18 08:04:30
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
DataTable: issue with editable paginated datatable
|
defect invalid
|
# IMPORTANT
- !!! If you open an issue, fill every item. Otherwise the issue might be closed as invalid. !!!
- Before you open an issue, test it with the current/newest version.
- Try to find an explanation to your problem by yourself, by simply debugging. This will help us to solve your issue 10x faster
- Clone this repository https://github.com/primefaces/primefaces-test.git in order to reproduce your problem, you'll have better chance to receive an answer and a solution.
- Feel free to provide a PR (Primefaces is an open-source project, any fixes or improvements are welcome.)
## 1) Environment
- PrimeFaces version: 5.1
- Does it work on the newest released PrimeFaces version? Version? no
- Does it work on the newest sources in GitHub? (Build by source -> https://github.com/primefaces/primefaces/wiki/Building-From-Source)
- Application server + version:
- Affected browsers: chrome, firefox, ie
## 2) Expected behavior
When creating a datatable with paginator="true" you can initialize the number of rows with rows="5". Then you can also edit cells in that table with cellEditor or just plain input text for any row.
...
## 3) Actual behavior
When you try to update a managed bean for a row in a dataTable if the rowIndex is greater than rows (for example, if rows="5" and you are editing row 6) it seems that ajax calls are not triggered.
..
## 4) Steps to reproduce
create a dataTable with paginator and rows attributes. Make some cells filled with inputText elements. You will see that the second page worth of rows cannot be changed. However, once you remove the 'rows' attribute you can update any row.
..
## 5) Sample XHTML
<p:dataTable id="elementTable"
value="#{controller.elements}" widgetVar="elementTable"
tableStyle="width:auto"
border="1"
var="e"
paginator="true"
paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}"
rowsPerPageTemplate=",5,10,25,50,100,200"
editable="true"
editMode="cell"
rowIndexVar="rowIndex"
filteredValue="#{dafController.daf.filteredElements}">
<p:column headerText="#">
<h:outputText value="#{rowIndex+1}" />
</p:column>
<p:column headerText="Technical Element Name"
sortBy="#{e.technichalElementName}"
filterBy="#{e.technichalElementName}" filterMatchMode="contains">
<p:inputText value="#{e.name}" />
</p:column>
</p:dataTable>
..
## 6) Sample bean
@ManagedBean(eager = true)
@SessionScoped
@Controller
public class Controller implements Serializable, ApplicationContextAware{
private List<Element> elements;
public List<Element> getElements() {
return elements;
}
public void setDafProjects(List<Element> elements) {
this.elements= elements;
}
..
|
1.0
|
DataTable: issue with editable paginated datatable - # IMPORTANT
- !!! If you open an issue, fill every item. Otherwise the issue might be closed as invalid. !!!
- Before you open an issue, test it with the current/newest version.
- Try to find an explanation to your problem by yourself, by simply debugging. This will help us to solve your issue 10x faster
- Clone this repository https://github.com/primefaces/primefaces-test.git in order to reproduce your problem, you'll have better chance to receive an answer and a solution.
- Feel free to provide a PR (Primefaces is an open-source project, any fixes or improvements are welcome.)
## 1) Environment
- PrimeFaces version: 5.1
- Does it work on the newest released PrimeFaces version? Version? no
- Does it work on the newest sources in GitHub? (Build by source -> https://github.com/primefaces/primefaces/wiki/Building-From-Source)
- Application server + version:
- Affected browsers: chrome, firefox, ie
## 2) Expected behavior
When creating a datatable with paginator="true" you can initialize the number of rows with rows="5". Then you can also edit cells in that table with cellEditor or just plain input text for any row.
...
## 3) Actual behavior
When you try to update a managed bean for a row in a dataTable if the rowIndex is greater than rows (for example, if rows="5" and you are editing row 6) it seems that ajax calls are not triggered.
..
## 4) Steps to reproduce
create a dataTable with paginator and rows attributes. Make some cells filled with inputText elements. You will see that the second page worth of rows cannot be changed. However, once you remove the 'rows' attribute you can update any row.
..
## 5) Sample XHTML
<p:dataTable id="elementTable"
value="#{controller.elements}" widgetVar="elementTable"
tableStyle="width:auto"
border="1"
var="e"
paginator="true"
paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}"
rowsPerPageTemplate=",5,10,25,50,100,200"
editable="true"
editMode="cell"
rowIndexVar="rowIndex"
filteredValue="#{dafController.daf.filteredElements}">
<p:column headerText="#">
<h:outputText value="#{rowIndex+1}" />
</p:column>
<p:column headerText="Technical Element Name"
sortBy="#{e.technichalElementName}"
filterBy="#{e.technichalElementName}" filterMatchMode="contains">
<p:inputText value="#{e.name}" />
</p:column>
</p:dataTable>
..
## 6) Sample bean
@ManagedBean(eager = true)
@SessionScoped
@Controller
public class Controller implements Serializable, ApplicationContextAware{
private List<Element> elements;
public List<Element> getElements() {
return elements;
}
public void setDafProjects(List<Element> elements) {
this.elements= elements;
}
..
|
defect
|
datatable issue with editable paginated datatable important if you open an issue fill every item otherwise the issue might be closed as invalid before you open an issue test it with the current newest version try to find an explanation to your problem by yourself by simply debugging this will help us to solve your issue faster clone this repository in order to reproduce your problem you ll have better chance to receive an answer and a solution feel free to provide a pr primefaces is an open source project any fixes or improvements are welcome environment primefaces version does it work on the newest released primefaces version version no does it work on the newest sources in github build by source application server version affected browsers chrome firefox ie expected behavior when creating a datatable with paginator true you can initialize the number of rows with rows then you can also edit cells in that table with celleditor or just plain input text for any row actual behavior when you try to update a managed bean for a row in a datatable if the rowindex is greater than rows for example if rows and you are editing row it seems that ajax calls are not triggered steps to reproduce create a datatable with paginator and rows attributes make some cells filled with inputtext elements you will see that the second page worth of rows cannot be changed however once you remove the rows attribute you can update any row sample xhtml p datatable id elementtable value controller elements widgetvar elementtable tablestyle width auto border var e paginator true paginatortemplate currentpagereport firstpagelink previouspagelink pagelinks nextpagelink lastpagelink rowsperpagedropdown rowsperpagetemplate editable true editmode cell rowindexvar rowindex filteredvalue dafcontroller daf filteredelements p column headertext technical element name sortby e technichalelementname filterby e technichalelementname filtermatchmode contains sample bean managedbean eager true sessionscoped 
controller public class controller implements serializable applicationcontextaware private list elements public list getelements return elements public void setdafprojects list elements this elements elements
| 1
|
14,222
| 2,793,820,619
|
IssuesEvent
|
2015-05-11 13:37:31
|
elecoest/allevents-3-2
|
https://api.github.com/repos/elecoest/allevents-3-2
|
closed
|
FrontEnd - New event creation - fields overflow their position
|
auto-migrated Priority-Low Type-Defect
|
```
On the front end, while creating an event, I get an overflow: the two selection
fields for the thumbnail and the poster are pushed out of position.
```
Original issue reported on code.google.com by `jjacquesh` on 24 Dec 2014 at 2:23
Attachments:
* [20141224_front_ajoutèevenement_debordement.jpg](https://storage.googleapis.com/google-code-attachments/allevents-3-2/issue-334/comment-0/20141224_front_ajoutèevenement_debordement.jpg)
|
1.0
|
FrontEnd - New event creation - fields overflow their position - ```
On the front end, while creating an event, I get an overflow: the two selection
fields for the thumbnail and the poster are pushed out of position.
```
Original issue reported on code.google.com by `jjacquesh` on 24 Dec 2014 at 2:23
Attachments:
* [20141224_front_ajoutèevenement_debordement.jpg](https://storage.googleapis.com/google-code-attachments/allevents-3-2/issue-334/comment-0/20141224_front_ajoutèevenement_debordement.jpg)
|
defect
|
frontend création nouvel événement débordement de position en front alors que je suis en création d un événement j ai un débordement hors position des champs sélections vignette et affiche original issue reported on code google com by jjacquesh on dec at attachments
| 1
|
299,823
| 9,205,888,198
|
IssuesEvent
|
2019-03-08 12:00:15
|
qissue-bot/QGIS
|
https://api.github.com/repos/qissue-bot/QGIS
|
closed
|
if symbol scale < 1, points are not rendered for some symbols
|
Category: Map Canvas Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report
|
---
Author Name: **Maciej Sieczka -** (Maciej Sieczka -)
Original Redmine Issue: 1185, https://issues.qgis.org/issues/1185
Original Assignee: nobody -
---
1. Open the attached shapefile in QGIS. 5 points show up.
2. In layer properties set scale to field 'scl' and choose rhombus or filled triangle symbol.
3. Apply - the point at the very bottom disappears.
For other symbols it always remains visible though, as a single pixel at worst.
|
1.0
|
if symbol scale < 1, points are not rendered for some symbols - ---
Author Name: **Maciej Sieczka -** (Maciej Sieczka -)
Original Redmine Issue: 1185, https://issues.qgis.org/issues/1185
Original Assignee: nobody -
---
1. Open the attached shapefile in QGIS. 5 points show up.
2. In layer properties set scale to field 'scl' and choose rhombus or filled triangle symbol.
3. Apply - the point in the very bottom disappears.
For other symbols it always remains visible though, as a single pixel at worst.
|
non_defect
|
if symbol scale points are not rendered for some symbols author name maciej sieczka maciej sieczka original redmine issue original assignee nobody open the attached shapefile in qgis points show up in layer properties set scale to field scl and choose rhombus or filled triangle symbol apply the point in the very bottom disappears for other symbols it always remains visible though as a single pixel at worst
| 0
|
55,796
| 14,690,042,241
|
IssuesEvent
|
2021-01-02 13:16:57
|
SublimeText/PackageDev
|
https://api.github.com/repos/SublimeText/PackageDev
|
closed
|
Fix syntax tests for new ST4 builds
|
defect
|
The CSS syntax has different scope names in the latest build(s) that we need to adjust our selectors for. Ideally using a union match.
https://github.com/SublimeText/PackageDev/runs/1339690271#step:3:244
|
1.0
|
Fix syntax tests for new ST4 builds - The CSS syntax has different scope names in the latest build(s) that we need to adjust our selectors for. Ideally using a union match.
https://github.com/SublimeText/PackageDev/runs/1339690271#step:3:244
|
defect
|
fix syntax tests for new builds the css syntax has different scope names in the latest build s that we need to adjust our selectors for ideally using a union match
| 1
|
347,106
| 10,424,847,916
|
IssuesEvent
|
2019-09-16 14:22:19
|
wherebyus/general-tasks
|
https://api.github.com/repos/wherebyus/general-tasks
|
closed
|
In the events dashboard, the 140 character limit on the Description is too restrictive for existing events
|
Priority: Medium Product: Events Team: Product Type: Enhancement UX: Validated
|
## Feature or problem
We should bump it to 280, Twitter's minimum.
## UX Validation
Validated
### Suggested priority
Medium
### Stakeholders
*Submitted:* michael
### Definition of done
How will we know when this feature is complete?
### Subtasks
A detailed list of changes that need to be made or subtasks. One checkbox per.
- [ ] Brew the coffee
## Developer estimate
To help the team accurately estimate the complexity of this task,
take a moment to walk through this list and estimate each item. At the end, you can total
the estimates and round to the nearest prime number.
If any of these are at a `5` or higher, or if the total is above a `5`, consider breaking
this issue into multiple smaller issues.
- [ ] Changes to the database ()
- [ ] Changes to the API ()
- [ ] Testing Changes to the API ()
- [ ] Changes to Application Code ()
- [ ] Adding or updating unit tests ()
- [ ] Local developer testing ()
### Total developer estimate: 0
## Additional estimate
- [ ] Code review ()
- [ ] QA Testing ()
- [ ] Stakeholder Sign-off ()
- [ ] Deploy to Production ()
### Total additional estimate: 1
## QA Notes
Detailed instructions for testing, one checkbox per test to be completed.
### Contextual tests
- [ ] Accessibility check
- [ ] Cross-browser check (Edge, Chrome, Firefox)
- [ ] Responsive check
|
1.0
|
In the events dashboard, the 140 character limit on the Description is too restrictive for existing events - ## Feature or problem
We should bump it to 280, Twitter's minimum.
## UX Validation
Validated
### Suggested priority
Medium
### Stakeholders
*Submitted:* michael
### Definition of done
How will we know when this feature is complete?
### Subtasks
A detailed list of changes that need to be made or subtasks. One checkbox per.
- [ ] Brew the coffee
## Developer estimate
To help the team accurately estimate the complexity of this task,
take a moment to walk through this list and estimate each item. At the end, you can total
the estimates and round to the nearest prime number.
If any of these are at a `5` or higher, or if the total is above a `5`, consider breaking
this issue into multiple smaller issues.
- [ ] Changes to the database ()
- [ ] Changes to the API ()
- [ ] Testing Changes to the API ()
- [ ] Changes to Application Code ()
- [ ] Adding or updating unit tests ()
- [ ] Local developer testing ()
### Total developer estimate: 0
## Additional estimate
- [ ] Code review ()
- [ ] QA Testing ()
- [ ] Stakeholder Sign-off ()
- [ ] Deploy to Production ()
### Total additional estimate: 1
## QA Notes
Detailed instructions for testing, one checkbox per test to be completed.
### Contextual tests
- [ ] Accessibility check
- [ ] Cross-browser check (Edge, Chrome, Firefox)
- [ ] Responsive check
|
non_defect
|
in the events dashboard the character limit on the description is too restrictive for existing events feature or problem we should bump it to twitter s minimum ux validation validated suggested priority medium stakeholders submitted michael definition of done how will we know when this feature is complete subtasks a detailed list of changes that need to be made or subtasks one checkbox per brew the coffee developer estimate to help the team accurately estimate the complexity of this task take a moment to walk through this list and estimate each item at the end you can total the estimates and round to the nearest prime number if any of these are at a or higher or if the total is above a consider breaking this issue into multiple smaller issues changes to the database changes to the api testing changes to the api changes to application code adding or updating unit tests local developer testing total developer estimate additional estimate code review qa testing stakeholder sign off deploy to production total additional estimate qa notes detailed instructions for testing one checkbox per test to be completed contextual tests accessibility check cross browser check edge chrome firefox responsive check
| 0
|
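The limit bump described in the record above is a one-line validation change. A minimal, hypothetical Python sketch (the 280 limit comes from the request; the function and constant names are illustrative, not WhereByUs production code):

```python
DESCRIPTION_LIMIT = 280  # bumped from 140, per the feature request

def validate_description(text):
    """Return the text unchanged if it fits the limit, else raise ValueError."""
    if len(text) > DESCRIPTION_LIMIT:
        raise ValueError(
            "Description is %d characters; limit is %d"
            % (len(text), DESCRIPTION_LIMIT)
        )
    return text
```

A form handler would call this before persisting the event, surfacing the ValueError message to the editor.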
98,474
| 29,926,647,897
|
IssuesEvent
|
2023-06-22 06:17:52
|
eclipse-edc/Connector
|
https://api.github.com/repos/eclipse-edc/Connector
|
closed
|
Build: feed the release version into the version-bump job
|
bug build github_actions
|
# Feature Request
Currently, when we release a version, the version-bump job takes the "old version" from `gradle.properties` and then bumps the "patch" version and appends `-SNAPSHOT`. Since the release version is never committed back to `gradle.properties`, the bump job would bump an old version.
## Which Areas Would Be Affected?
Release, the bump action in every repo
## Why Is the Feature Desired?
In [decision record "2023-05-17-release-process"](https://github.com/eclipse-edc/Connector/tree/main/docs/developer/decision-records/2023-05-17-release-process) it was decided that the semVer "minor" version gets bumped every time, and only bugfixes will bump the "patch" version.
## Solution Proposal
Instead of taking the "oldVersion" from `gradle.properties`, just use the workflow input `edc_version` and forward it to the `bump-version/action.yml`. This has to be done in every repo.
|
1.0
|
Build: feed the release version into the version-bump job - # Feature Request
Currently, when we release a version, the version-bump job takes the "old version" from `gradle.properties` and then bumps the "patch" version and appends `-SNAPSHOT`. Since the release version is never committed back to `gradle.properties`, the bump job would bump an old version.
## Which Areas Would Be Affected?
Release, the bump action in every repo
## Why Is the Feature Desired?
In [decision record "2023-05-17-release-process"](https://github.com/eclipse-edc/Connector/tree/main/docs/developer/decision-records/2023-05-17-release-process) it was decided that the semVer "minor" version gets bumped every time, and only bugfixes will bump the "patch" version.
## Solution Proposal
Instead of taking the "oldVersion" from `gradle.properties`, just use the workflow input `edc_version` and forward it to the `bump-version/action.yml`. This has to be done in every repo.
|
non_defect
|
build feed the release version into the version bump job feature request currently when we release a version the version bump job takes the old version from gradle properties and then bumps the patch version and appends snapshot since the release version is never committed back to gradle properties the bump job would bump an old version which areas would be affected release the bump action in every repo why is the feature desired in it was decided that the semver minor version gets bumped everytime and only bugfixes will bump the patch version solution proposal instead of taking the oldversion from gradle properties just use the workflow input edc version and forward it to the bump version action yml this has to be done in every repo
| 0
|
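The proposed fix — feed the workflow input `edc_version` into the bump job instead of re-reading a stale `gradle.properties` — amounts to a small version transform. A hedged Python sketch of the minor-bump rule from the decision record (the function name is illustrative, not the actual `bump-version/action.yml` logic):

```python
def next_snapshot(edc_version):
    """Given the just-released version (e.g. the workflow input
    `edc_version`), bump the minor component, reset the patch, and
    append -SNAPSHOT, per the 2023-05-17 release-process decision."""
    major, minor, _patch = edc_version.split(".")
    return "%s.%d.0-SNAPSHOT" % (major, int(minor) + 1)
```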
13,473
| 15,982,109,143
|
IssuesEvent
|
2021-04-18 02:02:13
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
opened
|
Change term - occurrenceStatus
|
Class - Occurrence Process - ready for public comment Term - change
|
## Change term
* Submitter: John Wieczorek (following discussion initiated by Steve Baskauf @baskaufs - see below)
* Justification (why is this change necessary?): Clarity
* Proponents (who needs this change): Everyone
Proposed new attributes of the term:
* Term name (in lowerCamelCase): occurrenceStatus
* Organized in Class (e.g. Location, Taxon): Occurrence
* Definition of the term: A statement about the presence or absence of a Taxon within a bounded place and time.
* Usage comments (recommendations regarding content, etc.): Recommended best practice is to use a controlled vocabulary consisting of the two distinct concepts "present" and "absent". This term is not apt for breeding status, for which the term reproductiveCondition should be used. This term is not apt for threat status, for which one might consider using the Species Distribution Extension (http://rs.gbif.org/extension/gbif/1.0/distribution.xml - not part of the Darwin Core standard).
* Examples: `present`, `absent`
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/occurrenceStatus-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): not in ABCD
Discussion leading up to this change proposal can be found in Issue #238.
|
1.0
|
Change term - occurrenceStatus - ## Change term
* Submitter: John Wieczorek (following discussion initiated by Steve Baskauf @baskaufs - see below)
* Justification (why is this change necessary?): Clarity
* Proponents (who needs this change): Everyone
Proposed new attributes of the term:
* Term name (in lowerCamelCase): occurrenceStatus
* Organized in Class (e.g. Location, Taxon): Occurrence
* Definition of the term: A statement about the presence or absence of a Taxon within a bounded place and time.
* Usage comments (recommendations regarding content, etc.): Recommended best practice is to use a controlled vocabulary consisting of the two distinct concepts "present" and "absent". This term is not apt for breeding status, for which the term reproductiveCondition should be used. This term is not apt for threat status, for which one might consider using the Species Distribution Extension (http://rs.gbif.org/extension/gbif/1.0/distribution.xml - not part of the Darwin Core standard).
* Examples: `present`, `absent`
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/occurrenceStatus-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): not in ABCD
Discussion leading up to this change proposal can be found in Issue #238.
|
non_defect
|
change term occurrencestatus change term submitter john wieczorek following discussion initiated by steve baskauf baskaufs see below justification why is this change necessary clarity proponents who needs this change everyone proposed new attributes of the term term name in lowercamelcase occurrencestatus organized in class e g location taxon occurrence definition of the term a statement about the presence or absence of a taxon within a bounded place and time usage comments recommendations regarding content etc recommended best practice is to use a controlled vocabulary consisting of the two distinct concepts present and absent this term is not apt for breeding status for which the term reproductivecondition should be used this term is not apt for threat status for which one might consider using the species distribution extension not part of the darwin core standard examples present absent refines identifier of the broader term this term refines if applicable none replaces identifier of the existing term that would be deprecated and replaced by this term if applicable abcd xpath of the equivalent term in abcd or efg if applicable not in abcd discussion leading up to this change proposal can be found in issue
| 0
|
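The recommended controlled vocabulary for `occurrenceStatus` has exactly two concepts, so a validator is trivial. An illustrative Python sketch (not part of any TDWG tooling):

```python
OCCURRENCE_STATUS = {"present", "absent"}  # the two recommended concepts

def is_valid_occurrence_status(value):
    """Check a dwc:occurrenceStatus value against the recommended
    controlled vocabulary, ignoring case and surrounding whitespace."""
    return value.strip().lower() in OCCURRENCE_STATUS
```

Values such as breeding status would fail this check, matching the usage comment that `reproductiveCondition` is the apt term for those.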
31,474
| 6,535,995,050
|
IssuesEvent
|
2017-08-31 16:24:19
|
BALL-Project/ball
|
https://api.github.com/repos/BALL-Project/ball
|
closed
|
Heap-use-after-free in reducedSurface
|
C: BALL Core T: defect
|
`RSComputer::extendComponent` iterates over a local copy of `RSComputer::new_vertices_` which has been introduced for performance reasons (cf. 6575f49f514c381b68872263d3b9c76fbfc75e95). This might cause heap-use-after-free errors due to dangling pointers (element removal from `RSComputer::new_vertices_` is not correctly propagated to the local copy).
```
=================================================================
==5402==ERROR: AddressSanitizer: heap-use-after-free on address 0x6080006635fc at pc 0x7efc066827bc bp 0x7efbce9092e0 sp 0x7efbce9092d0
READ of size 4 at 0x6080006635fc thread T4 (QThread)
#0 0x7efc066827bb in BALL::RSComputer::extendComponent() /home/thomas/git/apps/ball/source/STRUCTURE/reducedSurface.C:996
#1 0x7efc0667f478 in BALL::RSComputer::getRSComponent() /home/thomas/git/apps/ball/source/STRUCTURE/reducedSurface.C:722
0x6080006635fc is located 92 bytes inside of 96-byte region [0x6080006635a0,0x608000663600)
freed by thread T4 (QThread) here:
#0 0x7efc0d0d0de0 in operator delete(void*) (/usr/lib/gcc/x86_64-pc-linux-gnu/6.4.0/libasan.so.3+0xc8de0)
#1 0x7efc066f7e49 in BALL::RSVertex::~RSVertex() /home/thomas/git/apps/ball/source/STRUCTURE/RSVertex.C:39
#2 0x7efc066805e2 in BALL::RSComputer::treatEdge(BALL::RSEdge*) /home/thomas/git/apps/ball/source/STRUCTURE/reducedSurface.C:868
#3 0x7efc0667f5b2 in BALL::RSComputer::treatFace(BALL::RSFace*)
previously allocated by thread T4 (QThread) here:
#0 0x7efc0d0d0760 in operator new(unsigned long) (/usr/lib/gcc/x86_64-pc-linux-gnu/6.4.0/libasan.so.3+0xc8760)
#1 0x7efc06682d99 in BALL::RSComputer::extendComponent() /home/thomas/git/apps/ball/source/STRUCTURE/reducedSurface.C:1041
#2 0x7efc0667f478 in BALL::RSComputer::getRSComponent()
SUMMARY: AddressSanitizer: heap-use-after-free /home/thomas/git/apps/ball/source/STRUCTURE/reducedSurface.C:996 in BALL::RSComputer::extendComponent()
==5402==ABORTING
```
|
1.0
|
Heap-use-after-free in reducedSurface - `RSComputer::extendComponent` iterates over a local copy of `RSComputer::new_vertices_` which has been introduced for performance reasons (cf. 6575f49f514c381b68872263d3b9c76fbfc75e95). This might cause heap-use-after-free errors due to dangling pointers (element removal from `RSComputer::new_vertices_` is not correctly propagated to the local copy).
```
=================================================================
==5402==ERROR: AddressSanitizer: heap-use-after-free on address 0x6080006635fc at pc 0x7efc066827bc bp 0x7efbce9092e0 sp 0x7efbce9092d0
READ of size 4 at 0x6080006635fc thread T4 (QThread)
#0 0x7efc066827bb in BALL::RSComputer::extendComponent() /home/thomas/git/apps/ball/source/STRUCTURE/reducedSurface.C:996
#1 0x7efc0667f478 in BALL::RSComputer::getRSComponent() /home/thomas/git/apps/ball/source/STRUCTURE/reducedSurface.C:722
0x6080006635fc is located 92 bytes inside of 96-byte region [0x6080006635a0,0x608000663600)
freed by thread T4 (QThread) here:
#0 0x7efc0d0d0de0 in operator delete(void*) (/usr/lib/gcc/x86_64-pc-linux-gnu/6.4.0/libasan.so.3+0xc8de0)
#1 0x7efc066f7e49 in BALL::RSVertex::~RSVertex() /home/thomas/git/apps/ball/source/STRUCTURE/RSVertex.C:39
#2 0x7efc066805e2 in BALL::RSComputer::treatEdge(BALL::RSEdge*) /home/thomas/git/apps/ball/source/STRUCTURE/reducedSurface.C:868
#3 0x7efc0667f5b2 in BALL::RSComputer::treatFace(BALL::RSFace*)
previously allocated by thread T4 (QThread) here:
#0 0x7efc0d0d0760 in operator new(unsigned long) (/usr/lib/gcc/x86_64-pc-linux-gnu/6.4.0/libasan.so.3+0xc8760)
#1 0x7efc06682d99 in BALL::RSComputer::extendComponent() /home/thomas/git/apps/ball/source/STRUCTURE/reducedSurface.C:1041
#2 0x7efc0667f478 in BALL::RSComputer::getRSComponent()
SUMMARY: AddressSanitizer: heap-use-after-free /home/thomas/git/apps/ball/source/STRUCTURE/reducedSurface.C:996 in BALL::RSComputer::extendComponent()
==5402==ABORTING
```
|
defect
|
heap use after free in reducedsurface rscomputer extendcomponent iterates over a local copy of rscomputer new vertices which has been introduced for performance reasons cf this might cause heap use after free errors due to dangling pointers element removal from rscomputer new vertices is not correctly propagated to the local copy error addresssanitizer heap use after free on address at pc bp sp read of size at thread qthread in ball rscomputer extendcomponent home thomas git apps ball source structure reducedsurface c in ball rscomputer getrscomponent home thomas git apps ball source structure reducedsurface c is located bytes inside of byte region freed by thread qthread here in operator delete void usr lib gcc pc linux gnu libasan so in ball rsvertex rsvertex home thomas git apps ball source structure rsvertex c in ball rscomputer treatedge ball rsedge home thomas git apps ball source structure reducedsurface c in ball rscomputer treatface ball rsface previously allocated by thread qthread here in operator new unsigned long usr lib gcc pc linux gnu libasan so in ball rscomputer extendcomponent home thomas git apps ball source structure reducedsurface c in ball rscomputer getrscomponent summary addresssanitizer heap use after free home thomas git apps ball source structure reducedsurface c in ball rscomputer extendcomponent aborting
| 1
|
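The bug pattern in the record above — iterating a stale local copy while elements are removed from the live container — can be reproduced in any language. A Python sketch of the hazard (names are illustrative, not BALL's actual API; where Python merely observes a stale entry, the C++ code dereferences freed memory):

```python
def extend_component_buggy(new_vertices, treat):
    """Mimic the bug: iterate a snapshot of new_vertices while `treat`
    may remove entries from the live container. The snapshot still
    references entries whose removal was never propagated."""
    snapshot = list(new_vertices)          # local copy, added for speed
    touched = []
    for v in snapshot:
        if v not in new_vertices:          # removal not propagated to snapshot
            touched.append(("stale", v))   # C++ would read freed memory here
            continue
        treat(v, new_vertices)
        touched.append(("ok", v))
    return touched
```

The fix is to re-check liveness against the authoritative container (as above) or to drop the snapshot entirely and iterate a structure that tolerates concurrent removal.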
22,761
| 3,697,395,069
|
IssuesEvent
|
2016-02-27 17:04:28
|
garglk/garglk
|
https://api.github.com/repos/garglk/garglk
|
closed
|
Parameter declarations of function my_malloc differ in signedness
|
auto-migrated Priority-Medium Type-Defect
|
```
Debian package version: 2011.1a-2
During a rebuild of all Debian packages in a clean sid chroot (using cowbuilder
and pbuilder) the build failed with the following error. Please note that we
use our research compiler tool-chain (using tools from the cbmc package), which
permits extended reporting on type inconsistencies at link time.
[...]
Link build/linux.release/geas/geas
build/linux.release/garglk/libgarglk.a(sndsdl.o): In function
`glk_schannel_play_ext':
sndsdl.c:(.text+0xd21): warning: the use of `tempnam' is dangerous, better use
`mkstemp'
error: conflicting function declarations "my_malloc"
old definition in module babel_handler file support/babel/babel_handler.c line
186
void * (signed int, char *)
new definition in module misc file support/babel/misc.c line 12
void * (unsigned int size, char *rs)
Observe the difference in the first parameter; this will only be safe as long
as all values remain sufficiently small. Even if it is, this should be fixed to
ensure the compiler can generate appropriate diagnostics.
Best,
Michael
```
Original issue reported on code.google.com by `michael....@gmail.com` on 13 Jun 2014 at 2:12
|
1.0
|
Parameter declarations of function my_malloc differ in signedness - ```
Debian package version: 2011.1a-2
During a rebuild of all Debian packages in a clean sid chroot (using cowbuilder
and pbuilder) the build failed with the following error. Please note that we
use our research compiler tool-chain (using tools from the cbmc package), which
permits extended reporting on type inconsistencies at link time.
[...]
Link build/linux.release/geas/geas
build/linux.release/garglk/libgarglk.a(sndsdl.o): In function
`glk_schannel_play_ext':
sndsdl.c:(.text+0xd21): warning: the use of `tempnam' is dangerous, better use
`mkstemp'
error: conflicting function declarations "my_malloc"
old definition in module babel_handler file support/babel/babel_handler.c line
186
void * (signed int, char *)
new definition in module misc file support/babel/misc.c line 12
void * (unsigned int size, char *rs)
Observe the difference in the first parameter; this will only be safe as long
as all values remain sufficiently small. Even if it is, this should be fixed to
ensure the compiler can generate appropriate diagnostics.
Best,
Michael
```
Original issue reported on code.google.com by `michael....@gmail.com` on 13 Jun 2014 at 2:12
|
defect
|
parameter declarations of function my malloc differ in signedness debian package version during a rebuild of all debian packages in a clean sid chroot using cowbuilder and pbuilder the build failed with the following error please note that we use our research compiler tool chain using tools from the cbmc package which permits extended reporting on type inconsistencies at link time link build linux release geas geas build linux release garglk libgarglk a sndsdl o in function glk schannel play ext sndsdl c text warning the use of tempnam is dangerous better use mkstemp error conflicting function declarations my malloc old definition in module babel handler file support babel babel handler c line void signed int char new definition in module misc file support babel misc c line void unsigned int size char rs observe the difference in the first parameter this will only be safe as long as all values remain sufficiently small even if it is this should be fixed to ensure the compiler can generate appropriate diagnostics best michael original issue reported on code google com by michael gmail com on jun at
| 1
|
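The signedness mismatch flagged above is only safe while values stay small; reinterpreting a negative signed 32-bit value as unsigned changes it drastically. A small Python demonstration using `struct` (illustrative, unrelated to the babel sources themselves):

```python
import struct

def as_unsigned_32(signed_value):
    """Reinterpret a signed 32-bit int's bit pattern as unsigned --
    effectively what happens when one my_malloc declaration takes
    `signed int` and the other takes `unsigned int`."""
    return struct.unpack("<I", struct.pack("<i", signed_value))[0]
```

Small positive sizes round-trip unchanged, which is why the mismatch goes unnoticed until a negative or very large value reaches the allocator.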
56,093
| 3,078,173,729
|
IssuesEvent
|
2015-08-21 08:25:00
|
OCHA-DAP/hdx-ckan
|
https://api.github.com/repos/OCHA-DAP/hdx-ckan
|
closed
|
disk gets full after some time on the rqworker machine
|
bug CODmigration GeoPreview Priority-High
|
After a batch about 200 geo-enabled datasets were updated I see this error in some of them.
```
"shape_info": "{\"error_class\": \"FolderCreationException\", \"layer_id\": \"pre_2e893679_f802_4c5d_a245_0045d1105cfd\", \"error_type\": \"None\", \"state\": \"failure\", \"message\": \"A problem occured while creating folder /tmp/pre_2e893679_f802_4c5d_a245_0045d1105cfd: [Errno 28] No space left on device: '/tmp/pre_2e893679_f802_4c5d_a245_0045d1105cfd'\", \"type\": \"folder-creation-problem\"}",
```
The problem is that all these files get downloaded to /tmp but are not deleted and we run out of disk space. Either @teodorescuserban should create a script to regularly delete them or I can change the gislayer to delete them after they are being processed.
Pinging @danmihaila and @cjhendrix so that they are aware of the problem
|
1.0
|
disk gets full after some time on the rqworker machine - After a batch about 200 geo-enabled datasets were updated I see this error in some of them.
```
"shape_info": "{\"error_class\": \"FolderCreationException\", \"layer_id\": \"pre_2e893679_f802_4c5d_a245_0045d1105cfd\", \"error_type\": \"None\", \"state\": \"failure\", \"message\": \"A problem occured while creating folder /tmp/pre_2e893679_f802_4c5d_a245_0045d1105cfd: [Errno 28] No space left on device: '/tmp/pre_2e893679_f802_4c5d_a245_0045d1105cfd'\", \"type\": \"folder-creation-problem\"}",
```
The problem is that all these files get downloaded to /tmp but are not deleted and we run out of disk space. Either @teodorescuserban should create a script to regularly delete them or I can change the gislayer to delete them after they are being processed.
Pinging @danmihaila and @cjhendrix so that they are aware of the problem
|
non_defect
|
disk gets full after some time on the rqworker machine after a batch about geo enabled datasets were updated i see this error in some of them shape info error class foldercreationexception layer id pre error type none state failure message a problem occured while creating folder tmp pre no space left on device tmp pre type folder creation problem the problem is that all these files get downloaded to tmp but are not deleted and we run out of disk space either teodorescuserban should create a script to regularly delete them or i can change the gislayer to delete them after they are being processed pinging danmihaila and cjhendrix so that they are aware of the problem
| 0
|
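The fix suggested in the record above — delete each layer's scratch folder once processing finishes — is a classic try/finally cleanup. A hypothetical Python sketch (function and prefix names are assumptions, not the actual gislayer code):

```python
import os
import shutil
import tempfile

def process_layer(layer_id, process):
    """Run `process` in a per-layer scratch folder and always delete the
    folder afterwards, so /tmp cannot fill up even when processing fails."""
    folder = tempfile.mkdtemp(prefix=layer_id + "_")
    try:
        return process(folder)
    finally:
        shutil.rmtree(folder, ignore_errors=True)
```

A periodic cron cleanup (the other option mentioned) would also work, but in-process cleanup bounds disk usage per job rather than per sweep interval.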
5,240
| 3,908,884,520
|
IssuesEvent
|
2016-04-19 17:20:20
|
opendatatrentino/OpenDataRise
|
https://api.github.com/repos/opendatatrentino/OpenDataRise
|
opened
|
Implement better alert dialog
|
0.3.x series enhancement P1 toreport UI Usability
|
We need a better alert dialog in the ui to display and debug exceptions.
|
True
|
Implement better alert dialog -
We need a better alert dialog in the ui to display and debug exceptions.
|
non_defect
|
implement better alert dialog we need a better alert dialog in the ui to display and debug exceptions
| 0
|
72,553
| 24,180,879,206
|
IssuesEvent
|
2022-09-23 08:46:57
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
opened
|
Carousel: Has no paginator link limit
|
:lady_beetle: defect :bangbang: needs-triage
|
### Describe the bug
Since PrimeFaces 11.0.0 the carousel component is missing the attribute "pageLinks", which was used to limit the displayed page links in the paginator area. So now there is a page link for every item in the carousel. For example, if you have 20 items in it, it will show 20 page links and so on. We need at least an option to disable the display of the page links to prevent an overflow of those.
### Reproducer
```
<div class="card">
<p:carousel circular="true">
<f:facet name="header">
<h5>Tabs</h5>
</f:facet>
<p:tab>
<p class="m-0 p-3">A</p>
</p:tab>
<p:tab>
<p class="m-0 p-3">A</p>
</p:tab>
<p:tab>
<p class="m-0 p-3">A</p>
</p:tab>
<p:tab>
<p class="m-0 p-3">A</p>
</p:tab>
<p:tab>
<p class="m-0 p-3">A</p>
</p:tab>
</p:carousel>
</div>
```
### Expected behavior
_No response_
### PrimeFaces edition
_No response_
### PrimeFaces version
11.0.0
### Theme
Diamond
### JSF implementation
MyFaces
### JSF version
2.3
### Java version
Java 8
### Browser(s)
_No response_
|
1.0
|
Carousel: Has no paginator link limit - ### Describe the bug
Since PrimeFaces 11.0.0 the carousel component is missing the attribute "pageLinks", which was used to limit the displayed page links in the paginator area. So now there is a page link for every item in the carousel. For example, if you have 20 items in it, it will show 20 page links and so on. We need at least an option to disable the display of the page links to prevent an overflow of those.
### Reproducer
```
<div class="card">
<p:carousel circular="true">
<f:facet name="header">
<h5>Tabs</h5>
</f:facet>
<p:tab>
<p class="m-0 p-3">A</p>
</p:tab>
<p:tab>
<p class="m-0 p-3">A</p>
</p:tab>
<p:tab>
<p class="m-0 p-3">A</p>
</p:tab>
<p:tab>
<p class="m-0 p-3">A</p>
</p:tab>
<p:tab>
<p class="m-0 p-3">A</p>
</p:tab>
</p:carousel>
</div>
```
### Expected behavior
_No response_
### PrimeFaces edition
_No response_
### PrimeFaces version
11.0.0
### Theme
Diamond
### JSF implementation
MyFaces
### JSF version
2.3
### Java version
Java 8
### Browser(s)
_No response_
|
defect
|
carousel has no paginator link limit describe the bug since primefaces the carousel component is missing the of the attribute pagelinks which was used to limit the displayed page links in the paginator area so now there is a page link for every item in the carousel for example if you have items in it it will show page links and so on we need at least an option to disable the display of the page links to prevent an overflow of those reproducer tabs a a a a a expected behavior no response primefaces edition no response primefaces version theme diamond jsf implementation myfaces jsf version java version java browser s no response
| 1
|
381,899
| 11,297,505,099
|
IssuesEvent
|
2020-01-17 06:15:56
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
opened
|
Wrong behavior on SCIM2 delete a user from the read only Ldap.
|
Affected/5.10.0-Alpha2 Component/SCIM Priority/High Severity/Major
|
When an admin user attempts to delete a user who does not exist in the unique-ID read-only LDAP user store, the API returns a 404 error. This behavior is acceptable if we have write access to the user store.
Therefore this behavior should change: the response should be 406, since the delete method is not allowed in a read-only user store.
**Request:** (user is invalid)
curl -v -k --user admin:admin -X DELETE https://localhost:9443/scim2/Users/2acbee42-3561-4ae6-bf89-23fc43da6353 -H "Accept: application/scim+json"
**Response:**
curl -v -k --user admin:admin -X DELETE https://localhost:9443/scim2/Users/b228b59d-db19-4064-b637-d33c31209fae -H "Accept: application/scim+json"
{"schemas":["urn:ietf:params:scim:api:messages:2.0:Error"],"detail":"Specified resource (e.g., User) or endpoint does not exist.","status":"404"}
|
1.0
|
Wrong behavior on SCIM2 delete a user from the read only Ldap. - When an admin user attempt to delete a user who is not existing the unique id read only Ldap user store, gets 404 error. This behavior is acceptable if we have write access to the user store.
Therefore this scenario behavior should change and response should be 406 since the delete methods is not allowed in read only user store.
**Request:** (user is invalid)
curl -v -k --user admin:admin -X DELETE https://localhost:9443/scim2/Users/2acbee42-3561-4ae6-bf89-23fc43da6353 -H "Accept: application/scim+json"
**Response:**
curl -v -k --user admin:admin -X DELETE https://localhost:9443/scim2/Users/b228b59d-db19-4064-b637-d33c31209fae -H "Accept: application/scim+json"
{"schemas":["urn:ietf:params:scim:api:messages:2.0:Error"],"detail":"Specified resource (e.g., User) or endpoint does not exist.","status":"404"}
|
non_defect
|
wrong behavior on delete a user from the read only ldap when an admin user attempt to delete a user who is not existing the unique id read only ldap user store gets error this behavior is acceptable if we have write access to the user store therefore this scenario behavior should change and response should be since the delete methods is not allowed in read only user store request user is invalid curl v k user admin admin x delete h accept application scim json response curl v k user admin admin x delete h accept application scim json schemas detail specified resource e g user or endpoint does not exist status
| 0
|
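The status-code change proposed above can be expressed as a small decision function. An illustrative Python sketch (not WSO2 Identity Server code — the 406 value is taken from the issue as filed, even though HTTP's conventional choice for a disallowed method is 405):

```python
def delete_user_status(user_exists, store_read_only):
    """Proposed SCIM2 DELETE status mapping: answer for the read-only
    store before the user lookup, so unknown users no longer leak 404."""
    if store_read_only:
        return 406   # delete not allowed in a read-only user store
    if not user_exists:
        return 404   # resource does not exist
    return 204       # deleted
```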
100,076
| 16,481,413,150
|
IssuesEvent
|
2021-05-24 12:13:02
|
yoswein/WebGoat_2.0
|
https://api.github.com/repos/yoswein/WebGoat_2.0
|
opened
|
CVE-2018-20834 (High) detected in tar-2.2.1.tgz, tar-4.4.1.tgz
|
security vulnerability
|
## CVE-2018-20834 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tar-2.2.1.tgz</b>, <b>tar-4.4.1.tgz</b></p></summary>
<p>
<details><summary><b>tar-2.2.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.1.tgz">https://registry.npmjs.org/tar/-/tar-2.2.1.tgz</a></p>
<p>
Dependency Hierarchy:
- gulp-sass-4.0.2.tgz (Root Library)
- node-sass-4.11.0.tgz
- node-gyp-3.8.0.tgz
- :x: **tar-2.2.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>tar-4.4.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.1.tgz">https://registry.npmjs.org/tar/-/tar-4.4.1.tgz</a></p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.3.tgz (Root Library)
- chokidar-2.0.4.tgz
- fsevents-1.2.4.tgz
- node-pre-gyp-0.10.0.tgz
- :x: **tar-4.4.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/yoswein/WebGoat_2.0/commit/4d038f6521e1205e037211e7c3dcc92a82448d22">4d038f6521e1205e037211e7c3dcc92a82448d22</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in node-tar before version 4.4.2 (excluding version 2.2.2). An Arbitrary File Overwrite issue exists when extracting a tarball containing a hardlink to a file that already exists on the system, in conjunction with a later plain file with the same name as the hardlink. This plain file content replaces the existing file content. A patch has been applied to node-tar v2.2.2).
<p>Publish Date: 2019-04-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20834>CVE-2018-20834</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20834">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20834</a></p>
<p>Release Date: 2019-04-30</p>
<p>Fix Resolution: tar - 2.2.2,4.4.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tar","packageVersion":"2.2.1","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"gulp-sass:4.0.2;node-sass:4.11.0;node-gyp:3.8.0;tar:2.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"tar - 2.2.2,4.4.2"},{"packageType":"javascript/Node.js","packageName":"tar","packageVersion":"4.4.1","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"browser-sync:2.26.3;chokidar:2.0.4;fsevents:1.2.4;node-pre-gyp:0.10.0;tar:4.4.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"tar - 2.2.2,4.4.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-20834","vulnerabilityDetails":"A vulnerability was found in node-tar before version 4.4.2 (excluding version 2.2.2). An Arbitrary File Overwrite issue exists when extracting a tarball containing a hardlink to a file that already exists on the system, in conjunction with a later plain file with the same name as the hardlink. This plain file content replaces the existing file content. A patch has been applied to node-tar v2.2.2).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20834","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2018-20834 (High) detected in tar-2.2.1.tgz, tar-4.4.1.tgz - ## CVE-2018-20834 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tar-2.2.1.tgz</b>, <b>tar-4.4.1.tgz</b></p></summary>
<p>
<details><summary><b>tar-2.2.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.1.tgz">https://registry.npmjs.org/tar/-/tar-2.2.1.tgz</a></p>
<p>
Dependency Hierarchy:
- gulp-sass-4.0.2.tgz (Root Library)
- node-sass-4.11.0.tgz
- node-gyp-3.8.0.tgz
- :x: **tar-2.2.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>tar-4.4.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.1.tgz">https://registry.npmjs.org/tar/-/tar-4.4.1.tgz</a></p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.3.tgz (Root Library)
- chokidar-2.0.4.tgz
- fsevents-1.2.4.tgz
- node-pre-gyp-0.10.0.tgz
- :x: **tar-4.4.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/yoswein/WebGoat_2.0/commit/4d038f6521e1205e037211e7c3dcc92a82448d22">4d038f6521e1205e037211e7c3dcc92a82448d22</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in node-tar before version 4.4.2 (excluding version 2.2.2). An Arbitrary File Overwrite issue exists when extracting a tarball containing a hardlink to a file that already exists on the system, in conjunction with a later plain file with the same name as the hardlink. This plain file content replaces the existing file content. A patch has been applied to node-tar v2.2.2).
<p>Publish Date: 2019-04-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20834>CVE-2018-20834</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20834">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20834</a></p>
<p>Release Date: 2019-04-30</p>
<p>Fix Resolution: tar - 2.2.2,4.4.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tar","packageVersion":"2.2.1","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"gulp-sass:4.0.2;node-sass:4.11.0;node-gyp:3.8.0;tar:2.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"tar - 2.2.2,4.4.2"},{"packageType":"javascript/Node.js","packageName":"tar","packageVersion":"4.4.1","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"browser-sync:2.26.3;chokidar:2.0.4;fsevents:1.2.4;node-pre-gyp:0.10.0;tar:4.4.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"tar - 2.2.2,4.4.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-20834","vulnerabilityDetails":"A vulnerability was found in node-tar before version 4.4.2 (excluding version 2.2.2). An Arbitrary File Overwrite issue exists when extracting a tarball containing a hardlink to a file that already exists on the system, in conjunction with a later plain file with the same name as the hardlink. This plain file content replaces the existing file content. A patch has been applied to node-tar v2.2.2).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20834","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
cve high detected in tar tgz tar tgz cve high severity vulnerability vulnerable libraries tar tgz tar tgz tar tgz tar for node library home page a href dependency hierarchy gulp sass tgz root library node sass tgz node gyp tgz x tar tgz vulnerable library tar tgz tar for node library home page a href dependency hierarchy browser sync tgz root library chokidar tgz fsevents tgz node pre gyp tgz x tar tgz vulnerable library found in head commit a href found in base branch master vulnerability details a vulnerability was found in node tar before version excluding version an arbitrary file overwrite issue exists when extracting a tarball containing a hardlink to a file that already exists on the system in conjunction with a later plain file with the same name as the hardlink this plain file content replaces the existing file content a patch has been applied to node tar publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree gulp sass node sass node gyp tar isminimumfixversionavailable true minimumfixversion tar packagetype javascript node js packagename tar packageversion packagefilepaths istransitivedependency true dependencytree browser sync chokidar fsevents node pre gyp tar isminimumfixversionavailable true minimumfixversion tar basebranches vulnerabilityidentifier cve vulnerabilitydetails a vulnerability was found in node tar before version excluding version an arbitrary file overwrite issue exists when extracting a tarball containing a hardlink to a file that already exists on the system in 
conjunction with a later plain file with the same name as the hardlink this plain file content replaces the existing file content a patch has been applied to node tar vulnerabilityurl
| 0
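The fix resolution in the CVE record above (tar 2.2.2 / 4.4.2) implies a branch-aware version check: a 2.x install needs at least 2.2.2, and everything else needs at least 4.4.2. A sketch of that check — illustrative only; a real audit should rely on `npm audit` or the advisory data directly:

```python
# Check whether a node-tar version is patched for CVE-2018-20834.
# Fixed releases per the advisory: 2.2.2 on the 2.x line, 4.4.2 otherwise
# ("before version 4.4.2, excluding version 2.2.2" is vulnerable).

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def is_patched(version: str) -> bool:
    v = parse(version)
    if v[0] == 2:
        return v >= (2, 2, 2)
    return v >= (4, 4, 2)

print(is_patched("2.2.1"))  # False -> vulnerable (first library in the report)
print(is_patched("4.4.1"))  # False -> vulnerable (second library in the report)
print(is_patched("4.4.2"))  # True
```

Both dependency trees in the record pull tar in transitively, so the practical fix is upgrading the root libraries (gulp-sass / browser-sync) until their resolved tar versions pass this check.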
|
5,832
| 2,610,216,291
|
IssuesEvent
|
2015-02-26 19:08:57
|
chrsmith/somefinders
|
https://api.github.com/repos/chrsmith/somefinders
|
opened
|
Malvina and the brute Lena Miro
|
auto-migrated Priority-Medium Type-Defect
|
```
'''Arseny Grishin'''
Hi everyone, could anyone tell me where I can find
"Malvina and the brute Lena Miro"? I've seen it
somewhere already
'''William Terentyev'''
Here, take the link http://bit.ly/HxcObC
'''Vasilko Bobylyov'''
It asks you to enter a mobile number! Isn't that dangerous?
'''Willy Lapin'''
No, it doesn't affect your balance
'''Albin Kulikov'''
No, it doesn't affect your balance
File information: Malvina and the brute Lena
Miro
Uploaded: this month
Times downloaded: 1160
Rating: 310
Average download speed: 1441
Similar files: 30
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 16 Dec 2013 at 6:51
|
1.0
|
Malvina and the brute Lena Miro - ```
'''Arseny Grishin'''
Hi everyone, could anyone tell me where I can find
"Malvina and the brute Lena Miro"? I've seen it
somewhere already
'''William Terentyev'''
Here, take the link http://bit.ly/HxcObC
'''Vasilko Bobylyov'''
It asks you to enter a mobile number! Isn't that dangerous?
'''Willy Lapin'''
No, it doesn't affect your balance
'''Albin Kulikov'''
No, it doesn't affect your balance
File information: Malvina and the brute Lena
Miro
Uploaded: this month
Times downloaded: 1160
Rating: 310
Average download speed: 1441
Similar files: 30
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 16 Dec 2013 at 6:51
|
defect
|
malvina and the brute lena miro arseny grishin hi everyone could anyone tell me where i can find malvina and the brute lena miro i ve seen it somewhere already william terentyev here take the link vasilko bobylyov it asks you to enter a mobile number isn t that dangerous willy lapin no it doesn t affect your balance albin kulikov no it doesn t affect your balance file information malvina and the brute lena miro uploaded this month times downloaded rating average download speed similar files original issue reported on code google com by kondense gmail com on dec at
| 1
|
20,097
| 15,007,848,104
|
IssuesEvent
|
2021-01-31 07:09:01
|
php-coder/mystamps
|
https://api.github.com/repos/php-coder/mystamps
|
opened
|
Don't show a link to someone else's estimation collection page
|
area/usability kind/bug
|
A paid user shouldn't see a link to the estimation collection page of other users. At this moment, it's shown but it gives a 403 error. It seems like this check should have been added in #889
See also #1024 that should add tests for the link visibility.
|
True
|
Don't show a link to someone else's estimation collection page - A paid user shouldn't see a link to the estimation collection page of other users. At this moment, it's shown but it gives a 403 error. It seems like this check should have been added in #889
See also #1024 that should add tests for the link visibility.
|
non_defect
|
don t show a link to the someone else estimation collection page one paid user shouldn t see a link to the estimation collection page of other users at this moment it s shown but it gives error it seems like this check should have been added in see also that should add tests for the link visibility
| 0
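The visibility rule that record describes — render the link only when the viewer is authorized for the target page, so it never 403s on click — can be sketched as follows. Names and URL shape are hypothetical, not the actual mystamps template logic:

```python
# Hide a nav link from users who would get a 403 on the target page.
# Hypothetical sketch of the check the report asks for: a paid user may
# view only their own estimation collection page.

def can_view_estimation(viewer: dict, collection_owner: str) -> bool:
    return viewer.get("paid", False) and viewer.get("name") == collection_owner

def estimation_link(viewer: dict, collection_owner: str) -> str:
    if can_view_estimation(viewer, collection_owner):
        # Illustrative URL; the real route belongs to the application.
        return f'<a href="/collection/{collection_owner}/estimation">Estimation</a>'
    return ""  # render nothing instead of a link that will 403

print(estimation_link({"name": "alice", "paid": True}, "alice"))  # link shown
print(estimation_link({"name": "alice", "paid": True}, "bob"))    # hidden
```

Doing the check at render time complements, but does not replace, the server-side 403 on the endpoint itself.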
|
11,216
| 2,641,929,882
|
IssuesEvent
|
2015-03-11 20:35:00
|
chrsmith/html5rocks
|
https://api.github.com/repos/chrsmith/html5rocks
|
closed
|
transparent google logo
|
Priority-Medium Type-Defect
|
Original [issue 123](https://code.google.com/p/html5rocks/issues/detail?id=123) created by chrsmith on 2010-07-31T01:35:02.000Z:
lets drop the matted google logo and use a transparent 8bit png one.
|
1.0
|
transparent google logo - Original [issue 123](https://code.google.com/p/html5rocks/issues/detail?id=123) created by chrsmith on 2010-07-31T01:35:02.000Z:
lets drop the matted google logo and use a transparent 8bit png one.
|
defect
|
transparent google logo original created by chrsmith on lets drop the matted google logo and use a transparent png one
| 1
|
78,357
| 22,204,096,679
|
IssuesEvent
|
2022-06-07 13:37:49
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
test_provider_eptprovider occasionally aborts
|
Build/Install Bug testsuite
|
### What is the bug or the crash?
the test_provider_eptprovider test occasionally crashes.
This is a spin-off of issue #47395
### Steps to reproduce the issue
Run: `xvfb-run ctest --output-on-failure -R eptprovider` from the build dir, in a loop. It sometimes aborts.
Here's a live session from within a docker container from the same image as used by github-actions (started with .cir/run_tests.sh --interactive):
```
root@d21cf98f835e:~/QGIS/build-ci# xvfb-run ctest --output-on-failure -R eptprovider
Test project /home/src/qgis/qgis/src/ci/build-ci
Start 321: test_provider_eptprovider
1/1 Test #321: test_provider_eptprovider ........ Passed 1.12 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 1.15 sec
root@d21cf98f835e:~/QGIS/build-ci# xvfb-run ctest --output-on-failure -R eptprovider
Test project /home/src/qgis/qgis/src/ci/build-ci
Start 321: test_provider_eptprovider
1/1 Test #321: test_provider_eptprovider ........Child aborted***Exception: 0.78 sec
********* Start testing of TestQgsEptProvider *********
Config: Using QtTest library 5.12.8, Qt 5.12.8 (x86_64-little_endian-lp64 shared (dynamic) release build; by GCC 9.3.0)
PASS : TestQgsEptProvider::initTestCase()
PASS : TestQgsEptProvider::filters()
PASS : TestQgsEptProvider::encodeUri()
PASS : TestQgsEptProvider::decodeUri()
PASS : TestQgsEptProvider::preferredUri()
PASS : TestQgsEptProvider::layerTypesForUri()
PASS : TestQgsEptProvider::uriIsBlocklisted()
PASS : TestQgsEptProvider::querySublayers()
PASS : TestQgsEptProvider::brokenPath()
PASS : TestQgsEptProvider::testLazInfo()
PASS : TestQgsEptProvider::validLayer()
PASS : TestQgsEptProvider::validLayerWithEptHierarchy()
=== Received signal at function time: 0ms, total time: 352ms, dumping stack ===
=== End of stack trace ===
QFATAL : TestQgsEptProvider::attributes() Received signal 11
Function time: 0ms Total time: 352ms
FAIL! : TestQgsEptProvider::attributes() Received a fatal error.
Loc: [Unknown file(0)]
Totals: 12 passed, 1 failed, 0 skipped, 0 blacklisted, 353ms
********* Finished testing of TestQgsEptProvider *********
0% tests passed, 1 tests failed out of 1
Total Test time (real) = 0.81 sec
The following tests FAILED:
321 - test_provider_eptprovider (Child aborted)
Errors while running CTest
root@d21cf98f835e:~/QGIS/build-ci# xvfb-run ctest --output-on-failure -R eptprovider
Test project /home/src/qgis/qgis/src/ci/build-ci
Start 321: test_provider_eptprovider
1/1 Test #321: test_provider_eptprovider ........ Passed 1.05 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 1.09 sec
```
You can see the test pass on first call, abort on second call, pass again on third call.
### Versions
Current master branch as of May 27 2022 ( 8389b2eac6e41c8092c4e7f00ac5019147faf067 )
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
test_provider_eptprovider occasionally aborts - ### What is the bug or the crash?
the test_provider_eptprovider test occasionally crashes.
This is a spin-off of issue #47395
### Steps to reproduce the issue
Run: `xvfb-run ctest --output-on-failure -R eptprovider` from the build dir, in a loop. It sometimes aborts.
Here's a live session from within a docker container from the same image as used by github-actions (started with .cir/run_tests.sh --interactive):
```
root@d21cf98f835e:~/QGIS/build-ci# xvfb-run ctest --output-on-failure -R eptprovider
Test project /home/src/qgis/qgis/src/ci/build-ci
Start 321: test_provider_eptprovider
1/1 Test #321: test_provider_eptprovider ........ Passed 1.12 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 1.15 sec
root@d21cf98f835e:~/QGIS/build-ci# xvfb-run ctest --output-on-failure -R eptprovider
Test project /home/src/qgis/qgis/src/ci/build-ci
Start 321: test_provider_eptprovider
1/1 Test #321: test_provider_eptprovider ........Child aborted***Exception: 0.78 sec
********* Start testing of TestQgsEptProvider *********
Config: Using QtTest library 5.12.8, Qt 5.12.8 (x86_64-little_endian-lp64 shared (dynamic) release build; by GCC 9.3.0)
PASS : TestQgsEptProvider::initTestCase()
PASS : TestQgsEptProvider::filters()
PASS : TestQgsEptProvider::encodeUri()
PASS : TestQgsEptProvider::decodeUri()
PASS : TestQgsEptProvider::preferredUri()
PASS : TestQgsEptProvider::layerTypesForUri()
PASS : TestQgsEptProvider::uriIsBlocklisted()
PASS : TestQgsEptProvider::querySublayers()
PASS : TestQgsEptProvider::brokenPath()
PASS : TestQgsEptProvider::testLazInfo()
PASS : TestQgsEptProvider::validLayer()
PASS : TestQgsEptProvider::validLayerWithEptHierarchy()
=== Received signal at function time: 0ms, total time: 352ms, dumping stack ===
=== End of stack trace ===
QFATAL : TestQgsEptProvider::attributes() Received signal 11
Function time: 0ms Total time: 352ms
FAIL! : TestQgsEptProvider::attributes() Received a fatal error.
Loc: [Unknown file(0)]
Totals: 12 passed, 1 failed, 0 skipped, 0 blacklisted, 353ms
********* Finished testing of TestQgsEptProvider *********
0% tests passed, 1 tests failed out of 1
Total Test time (real) = 0.81 sec
The following tests FAILED:
321 - test_provider_eptprovider (Child aborted)
Errors while running CTest
root@d21cf98f835e:~/QGIS/build-ci# xvfb-run ctest --output-on-failure -R eptprovider
Test project /home/src/qgis/qgis/src/ci/build-ci
Start 321: test_provider_eptprovider
1/1 Test #321: test_provider_eptprovider ........ Passed 1.05 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 1.09 sec
```
You can see the test pass on first call, abort on second call, pass again on third call.
### Versions
Current master branch as of May 27 2022 ( 8389b2eac6e41c8092c4e7f00ac5019147faf067 )
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
non_defect
|
test provider eptprovider occasionally aborts what is the bug or the crash the test provider eptprovider test occasionally crashes this is a spin off of issue steps to reproduce the issue run xvfb run ctest output on failure r eptprovider from the build dir in a loop it sometimes aborts here s a live session from within a docker container from the same image as used by github actions started with cir run tests sh interactive root qgis build ci xvfb run ctest output on failure r eptprovider test project home src qgis qgis src ci build ci start test provider eptprovider test test provider eptprovider passed sec tests passed tests failed out of total test time real sec root qgis build ci xvfb run ctest output on failure r eptprovider test project home src qgis qgis src ci build ci start test provider eptprovider test test provider eptprovider child aborted exception sec start testing of testqgseptprovider config using qttest library qt little endian shared dynamic release build by gcc pass testqgseptprovider inittestcase pass testqgseptprovider filters pass testqgseptprovider encodeuri pass testqgseptprovider decodeuri pass testqgseptprovider preferreduri pass testqgseptprovider layertypesforuri pass testqgseptprovider uriisblocklisted pass testqgseptprovider querysublayers pass testqgseptprovider brokenpath pass testqgseptprovider testlazinfo pass testqgseptprovider validlayer pass testqgseptprovider validlayerwithepthierarchy received signal at function time total time dumping stack end of stack trace qfatal testqgseptprovider attributes received signal function time total time fail testqgseptprovider attributes received a fatal error loc totals passed failed skipped blacklisted finished testing of testqgseptprovider tests passed tests failed out of total test time real sec the following tests failed test provider eptprovider child aborted errors while running ctest root qgis build ci xvfb run ctest output on failure r eptprovider test project home src qgis qgis src 
ci build ci start test provider eptprovider test test provider eptprovider passed sec tests passed tests failed out of total test time real sec you can see the test pass on first call abort on second call pass again on third call versions current master branch as of may supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 0
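The reproduction step in the QGIS record — run the test in a loop until it aborts — is a generic flaky-test harness. A small sketch in Python standing in for the shell loop; the demo command is a placeholder, and in a real QGIS build tree you would substitute `["xvfb-run", "ctest", "--output-on-failure", "-R", "eptprovider"]`:

```python
# Run a command repeatedly and collect the iterations that failed --
# the same loop used in the record to catch the intermittent ctest abort.
import subprocess
import sys

def run_until_flaky(cmd, runs=10):
    failures = []
    for i in range(runs):
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            failures.append(i)
    return failures

# Demo with a command that always succeeds: no failures recorded.
print(run_until_flaky([sys.executable, "-c", "raise SystemExit(0)"], runs=3))  # []
```

An empty list across many runs suggests the flake is environment-dependent (e.g. only under Xvfb or inside the CI container), which matches the pass/abort/pass pattern shown in the session log.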
|
42,523
| 11,100,301,158
|
IssuesEvent
|
2019-12-16 18:53:39
|
ascott18/TellMeWhen
|
https://api.github.com/repos/ascott18/TellMeWhen
|
closed
|
[Bug] PlayerNames.lua:96: attempt to concatenate field '?' (a nil value)
|
classic defect resolved
|
**What version of TellMeWhen are you using? **
<!-- Found in-game at the top of TMW's configuration window. "The latest" is not a version. -->
v8.7.1
**What steps will reproduce the problem?**
1. Enter or exit the AV BG
<!-- Add more steps if needed -->
**What do you expect to happen? What happens instead?**
**Screenshots and Export Strings**
<!-- If your issue pertains to a specific icon or group, please post the relevant export string(s).
To get an export string, open the icon editor, and click the button labeled "Import/Export/Backup". Select the "To String" option for the appropriate export type (icon, group, or profile), and then press CTRL+C to copy it to your clipboard.
Additionally, if applicable, add screenshots to help explain your problem. You can paste images directly into GitHub issues, or you can upload files as well. -->
^1^T^SGroups^T ^N1^T ^SPoint^T ^Sy^F6296178853412875 ^f-45^Sx ^F7630567975732163^f-53 ^Spoint^SBOTTOM ^SrelativePoint^SBOTTOM ^t^SScale^F8403021910245374 ^f-53^SRole ^N5^SEnabledSpecs ^T^N102^b ^N105^b ^t^SGUID^STMW:group:1TcwMa9PNfOR ^SColumns^N7 ^SName^SCat ^SIcons^T ^N1^T ^SBuffOrDebuff^SHARMFUL ^SUnit^Starget ^SType^Sbuffcheck ^SName^SRake ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SRake ^t^N3^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N4^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^t^Sn^N4 ^t^SEnabled^B ^t^N2^T ^SType^Scooldown ^SGCDAsUnusable^B ^SName^SClaw ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SClaw ^t^N3^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N4^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^t^Sn^N4 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^t^N3^T ^SType^Scooldown ^SGCDAsUnusable^B ^SName^SFerocious~`Bite ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^N2^T ^SType^SCOMBO ^SLevel^N5 ^t^N3^T ^SType^SMANAUSABLE ^SName^SFerocious~`Bite ^t^N4^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N5^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^t^Sn^N5 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^t^N4^T ^SType^Scooldown ^SGCDAsUnusable^B ^SName^SRip ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^N2^T ^SType^SCOMBO ^SLevel^N5 ^t^N3^T ^SType^SMANAUSABLE ^SName^SRip ^t^N4^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N5^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SLevel^N5 ^t^Sn^N5 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^t^N5^T ^SType^Scooldown ^SGCDAsUnusable^B ^SName^SRavage ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^N2^T ^SType^SREACTIVE ^t^N3^T ^SType^SMANAUSABLE ^SName^SRavage ^t^N4^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N5^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^SName^SRavage ^t^Sn^N5 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^t^N6^T 
^SType^Scooldown ^SGCDAsUnusable^B ^SName^SShred ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^N2^T ^SType^SREACTIVE ^SName^SShred ^t^N3^T ^SType^SMANAUSABLE ^SName^SShred ^t^N4^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N5^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^SName^SShred ^t^Sn^N5 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^t^N7^T ^SType^Scooldown ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^N2^T ^SType^SCOMBAT ^SLevel^N1 ^t^Sn^N2 ^t^SName^SProwl ^SGCDAsUnusable^B ^SEnabled^B ^SManaCheck^B ^t^t^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^Sn^N1 ^t^t^N2^T ^SPoint^T ^Sy^F6296169726607373 ^f-45^Sx ^F7626741045551576^f-52 ^Spoint^SBOTTOM ^SrelativePoint^SBOTTOM ^t^SScale^F8403021910245374 ^f-53^SRole ^N5^SEnabledSpecs ^T^N102^b ^N105^b ^t^SGUID^STMW:group:1TcwEi81P6iI ^SColumns^N6 ^SName^SBear ^SIcons^T ^N1^T ^SType^Scooldown ^SGCDAsUnusable^B ^SName^SMaul ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SBear~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SMaul ^t^N3^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N4^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^t^Sn^N4 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^t^N2^T ^SType^Scooldown ^SGCDAsUnusable^B ^SName^SSwipe ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SBear~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SSwipe ^t^N3^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N4^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^t^Sn^N4 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^t^N3^T ^SBuffOrDebuff^SHARMFUL ^SUnit^Starget ^SType^Sbuffcheck ^SName^SDemoralizing~`Roar ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SBear~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SDemoralizing~`Roar ^t^Sn^N2 ^t^SEnabled^B ^t^N4^T ^SType^Scooldown ^SShowTimerText^B ^SGCDAsUnusable^B ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SBear~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SFeral~`Charge ^t^N3^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N4^T ^SType^SLIBRANGECHECK 
^SOperator^S>= ^SUnit^Starget ^SLevel^N8 ^t^N5^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N25 ^t^Sn^N5 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^SName^SFeral~`Charge ^t^N5^T ^SType^Scooldown ^SShowTimerText^B ^SGCDAsUnusable^B ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SBear~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SGrowl ^t^N3^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N4^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^t^Sn^N4 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^SName^SGrowl ^t^N6^T ^SType^Scooldown ^SShowTimerText^B ^SGCDAsUnusable^B ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SBear~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SBash ^t^N3^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^t^Sn^N3 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^SName^SBash ^t^t^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SBear~`Form ^t^Sn^N1 ^t^t^N3^T ^SGUID^STMW:group:1TynPLOsliC7 ^SEnabled^b ^t^t^SNumGroups^N3 ^SVersion^N87101 ^SLocked^B ^t^N87101^S~`~| ^Sprofile^STalaran~`-~`Myzrael ^^
**Additional Info**
<!-- Please add any additional information you think will be useful in reproducing and/or solving the issue. -->
4x ...ns\TellMeWhen\Components\Core\Common\PlayerNames.lua:96: attempt to concatenate field '?' (a nil value)
...ns\TellMeWhen\Components\Core\Common\PlayerNames.lua:96: in function `?'
...las\Libs\CallbackHandler-1.0\CallbackHandler-1.0-7.lua:119: in function <...las\Libs\CallbackHandler-1.0\CallbackHandler-1.0.lua:119>
[C]: ?
...las\Libs\CallbackHandler-1.0\CallbackHandler-1.0-7.lua:29: in function <...las\Libs\CallbackHandler-1.0\CallbackHandler-1.0.lua:25>
...las\Libs\CallbackHandler-1.0\CallbackHandler-1.0-7.lua:64: in function `Fire'
...\common\Wildpants\libs\AceEvent-3.0\AceEvent-3.0-4.lua:120: in function <...\common\Wildpants\libs\AceEvent-3.0\AceEvent-3.0.lua:119>
[C]: in function `SetBattlefieldScoreFaction'
FrameXML\WorldStateFrame.lua:379: in function `WorldStateScoreFrameTab_OnClick'
FrameXML\WorldStateFrame.lua:49: in function <FrameXML\WorldStateFrame.lua:46>
[C]: in function `Show'
FrameXML\UIParent.lua:2087: in function `SetUIPanel'
FrameXML\UIParent.lua:1893: in function `ShowUIPanel'
FrameXML\UIParent.lua:1793: in function <FrameXML\UIParent.lua:1789>
[C]: in function `SetAttribute'
FrameXML\UIParent.lua:2535: in function `ShowUIPanel'
FrameXML\WorldStateFrame.lua:80: in function `WorldStateScoreFrame_Update'
FrameXML\WorldStateFrame.lua:36: in function <FrameXML\WorldStateFrame.lua:32>
|
1.0
|
[Bug] PlayerNames.lua:96: attempt to concatenate field '?' (a nil value) - **What version of TellMeWhen are you using? **
<!-- Found in-game at the top of TMW's configuration window. "The latest" is not a version. -->
v8.7.1
**What steps will reproduce the problem?**
1. Enter or exit the AV BG
<!-- Add more steps if needed -->
**What do you expect to happen? What happens instead?**
**Screenshots and Export Strings**
<!-- If your issue pertains to a specific icon or group, please post the relevant export string(s).
To get an export string, open the icon editor, and click the button labeled "Import/Export/Backup". Select the "To String" option for the appropriate export type (icon, group, or profile), and then press CTRL+C to copy it to your clipboard.
Additionally, if applicable, add screenshots to help explain your problem. You can paste images directly into GitHub issues, or you can upload files as well. -->
^1^T^SGroups^T ^N1^T ^SPoint^T ^Sy^F6296178853412875 ^f-45^Sx ^F7630567975732163^f-53 ^Spoint^SBOTTOM ^SrelativePoint^SBOTTOM ^t^SScale^F8403021910245374 ^f-53^SRole ^N5^SEnabledSpecs ^T^N102^b ^N105^b ^t^SGUID^STMW:group:1TcwMa9PNfOR ^SColumns^N7 ^SName^SCat ^SIcons^T ^N1^T ^SBuffOrDebuff^SHARMFUL ^SUnit^Starget ^SType^Sbuffcheck ^SName^SRake ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SRake ^t^N3^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N4^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^t^Sn^N4 ^t^SEnabled^B ^t^N2^T ^SType^Scooldown ^SGCDAsUnusable^B ^SName^SClaw ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SClaw ^t^N3^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N4^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^t^Sn^N4 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^t^N3^T ^SType^Scooldown ^SGCDAsUnusable^B ^SName^SFerocious~`Bite ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^N2^T ^SType^SCOMBO ^SLevel^N5 ^t^N3^T ^SType^SMANAUSABLE ^SName^SFerocious~`Bite ^t^N4^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N5^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^t^Sn^N5 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^t^N4^T ^SType^Scooldown ^SGCDAsUnusable^B ^SName^SRip ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^N2^T ^SType^SCOMBO ^SLevel^N5 ^t^N3^T ^SType^SMANAUSABLE ^SName^SRip ^t^N4^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N5^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SLevel^N5 ^t^Sn^N5 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^t^N5^T ^SType^Scooldown ^SGCDAsUnusable^B ^SName^SRavage ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^N2^T ^SType^SREACTIVE ^t^N3^T ^SType^SMANAUSABLE ^SName^SRavage ^t^N4^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N5^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^SName^SRavage ^t^Sn^N5 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^t^N6^T 
^SType^Scooldown ^SGCDAsUnusable^B ^SName^SShred ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^N2^T ^SType^SREACTIVE ^SName^SShred ^t^N3^T ^SType^SMANAUSABLE ^SName^SShred ^t^N4^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N5^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^SName^SShred ^t^Sn^N5 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^t^N7^T ^SType^Scooldown ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^N2^T ^SType^SCOMBAT ^SLevel^N1 ^t^Sn^N2 ^t^SName^SProwl ^SGCDAsUnusable^B ^SEnabled^B ^SManaCheck^B ^t^t^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SCat~`Form ^t^Sn^N1 ^t^t^N2^T ^SPoint^T ^Sy^F6296169726607373 ^f-45^Sx ^F7626741045551576^f-52 ^Spoint^SBOTTOM ^SrelativePoint^SBOTTOM ^t^SScale^F8403021910245374 ^f-53^SRole ^N5^SEnabledSpecs ^T^N102^b ^N105^b ^t^SGUID^STMW:group:1TcwEi81P6iI ^SColumns^N6 ^SName^SBear ^SIcons^T ^N1^T ^SType^Scooldown ^SGCDAsUnusable^B ^SName^SMaul ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SBear~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SMaul ^t^N3^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N4^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^t^Sn^N4 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^t^N2^T ^SType^Scooldown ^SGCDAsUnusable^B ^SName^SSwipe ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SBear~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SSwipe ^t^N3^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N4^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^t^Sn^N4 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^t^N3^T ^SBuffOrDebuff^SHARMFUL ^SUnit^Starget ^SType^Sbuffcheck ^SName^SDemoralizing~`Roar ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SBear~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SDemoralizing~`Roar ^t^Sn^N2 ^t^SEnabled^B ^t^N4^T ^SType^Scooldown ^SShowTimerText^B ^SGCDAsUnusable^B ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SBear~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SFeral~`Charge ^t^N3^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N4^T ^SType^SLIBRANGECHECK 
^SOperator^S>= ^SUnit^Starget ^SLevel^N8 ^t^N5^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N25 ^t^Sn^N5 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^SName^SFeral~`Charge ^t^N5^T ^SType^Scooldown ^SShowTimerText^B ^SGCDAsUnusable^B ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SBear~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SGrowl ^t^N3^T ^SType^SREACT ^SUnit^Starget ^SLevel^N1 ^t^N4^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^t^Sn^N4 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^SName^SGrowl ^t^N6^T ^SType^Scooldown ^SShowTimerText^B ^SGCDAsUnusable^B ^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SBear~`Form ^t^N2^T ^SType^SMANAUSABLE ^SName^SBash ^t^N3^T ^SType^SLIBRANGECHECK ^SOperator^S<= ^SUnit^Starget ^SLevel^N5 ^t^Sn^N3 ^t^SRangeCheck^B ^SEnabled^B ^SManaCheck^B ^SName^SBash ^t^t^SConditions^T ^N1^T ^SType^SSTANCE ^SName^SBear~`Form ^t^Sn^N1 ^t^t^N3^T ^SGUID^STMW:group:1TynPLOsliC7 ^SEnabled^b ^t^t^SNumGroups^N3 ^SVersion^N87101 ^SLocked^B ^t^N87101^S~`~| ^Sprofile^STalaran~`-~`Myzrael ^^
**Additional Info**
<!-- Please add any additional information you think will be useful in reproducing and/or solving the issue. -->
4x ...ns\TellMeWhen\Components\Core\Common\PlayerNames.lua:96: attempt to concatenate field '?' (a nil value)
...ns\TellMeWhen\Components\Core\Common\PlayerNames.lua:96: in function `?'
...las\Libs\CallbackHandler-1.0\CallbackHandler-1.0-7.lua:119: in function <...las\Libs\CallbackHandler-1.0\CallbackHandler-1.0.lua:119>
[C]: ?
...las\Libs\CallbackHandler-1.0\CallbackHandler-1.0-7.lua:29: in function <...las\Libs\CallbackHandler-1.0\CallbackHandler-1.0.lua:25>
...las\Libs\CallbackHandler-1.0\CallbackHandler-1.0-7.lua:64: in function `Fire'
...\common\Wildpants\libs\AceEvent-3.0\AceEvent-3.0-4.lua:120: in function <...\common\Wildpants\libs\AceEvent-3.0\AceEvent-3.0.lua:119>
[C]: in function `SetBattlefieldScoreFaction'
FrameXML\WorldStateFrame.lua:379: in function `WorldStateScoreFrameTab_OnClick'
FrameXML\WorldStateFrame.lua:49: in function <FrameXML\WorldStateFrame.lua:46>
[C]: in function `Show'
FrameXML\UIParent.lua:2087: in function `SetUIPanel'
FrameXML\UIParent.lua:1893: in function `ShowUIPanel'
FrameXML\UIParent.lua:1793: in function <FrameXML\UIParent.lua:1789>
[C]: in function `SetAttribute'
FrameXML\UIParent.lua:2535: in function `ShowUIPanel'
FrameXML\WorldStateFrame.lua:80: in function `WorldStateScoreFrame_Update'
FrameXML\WorldStateFrame.lua:36: in function <FrameXML\WorldStateFrame.lua:32>
|
defect
|
playernames lua attempt to concatenate field a nil value what version of tellmewhen are you using what steps will reproduce the problem enter or exit the av bg what do you expect to happen what happens instead screenshots and export strings if your issue pertains to a specific icon or group please post the relevant export string s to get an export string open the icon editor and click the button labeled import export backup select the to string option for the appropriate export type icon group or profile and then press ctrl c to copy it to your clipboard additionally if applicable add screenshots to help explain your problem you can paste images directly into github issues or you can upload files as well t sgroups t t spoint t sy f sx f spoint sbottom srelativepoint sbottom t sscale f srole senabledspecs t b b t sguid stmw group scolumns sname scat sicons t t sbuffordebuff sharmful sunit starget stype sbuffcheck sname srake sconditions t t stype sstance sname scat form t t stype smanausable sname srake t t stype sreact sunit starget slevel t t stype slibrangecheck soperator s sunit starget slevel t t stype slibrangecheck soperator s sunit starget slevel t sn t srangecheck b senabled b smanacheck b sname sferal charge t t stype scooldown sshowtimertext b sgcdasunusable b sconditions t t stype sstance sname sbear form t t stype smanausable sname sgrowl t t stype sreact sunit starget slevel t t stype slibrangecheck soperator s sunit starget slevel t sn t srangecheck b senabled b smanacheck b sname sgrowl t t stype scooldown sshowtimertext b sgcdasunusable b sconditions t t stype sstance sname sbear form t t stype smanausable sname sbash t t stype slibrangecheck soperator s sunit starget slevel t sn t srangecheck b senabled b smanacheck b sname sbash t t sconditions t t stype sstance sname sbear form t sn t t t sguid stmw group senabled b t t snumgroups sversion slocked b t s sprofile stalaran myzrael additional info ns tellmewhen components core common playernames lua 
attempt to concatenate field a nil value ns tellmewhen components core common playernames lua in function las libs callbackhandler callbackhandler lua in function las libs callbackhandler callbackhandler lua in function las libs callbackhandler callbackhandler lua in function fire common wildpants libs aceevent aceevent lua in function in function setbattlefieldscorefaction framexml worldstateframe lua in function worldstatescoreframetab onclick framexml worldstateframe lua in function in function show framexml uiparent lua in function setuipanel framexml uiparent lua in function showuipanel framexml uiparent lua in function in function setattribute framexml uiparent lua in function showuipanel framexml worldstateframe lua in function worldstatescoreframe update framexml worldstateframe lua in function
| 1
|
34,945
| 7,473,392,414
|
IssuesEvent
|
2018-04-03 15:15:01
|
contao/manager-bundle
|
https://api.github.com/repos/contao/manager-bundle
|
closed
|
Installation of legacy modules under symfony 3.4.7 fails
|
defect
|
Symfony 3.4.7 was released 3 hours ago. When using legacy modules we get the following error on running composer update:
```
...
Compiling component files
> Contao\ManagerBundle\Composer\ScriptHandler::initializeApplication
In ContaoModuleBundle.php line 36:
The module folder "system/modules/contao-filecredits" does not exist.
Script Contao\ManagerBundle\Composer\ScriptHandler::initializeApplication handling the post-update-cmd event terminated with an exception
[RuntimeException]
An error occurred while executing the "contao:install-web-dir" command:
In ContaoModuleBundle.php line 36:
The module folder "system/modules/contao-filecredits" does not exist.
```
With Symfony 3.4.6 everything works fine.
The module name seems to be wrong and changed from 3.4.6 to 3.4.7:
- 3.4.6: system/modules/filecredits
- 3.4.7: system/modules/contao-filecredits
|
1.0
|
Installation of legacy modules under symfony 3.4.7 fails - Symfony 3.4.7 was released 3 hours ago. When using legacy modules we get the following error on running composer update:
```
...
Compiling component files
> Contao\ManagerBundle\Composer\ScriptHandler::initializeApplication
In ContaoModuleBundle.php line 36:
The module folder "system/modules/contao-filecredits" does not exist.
Script Contao\ManagerBundle\Composer\ScriptHandler::initializeApplication handling the post-update-cmd event terminated with an exception
[RuntimeException]
An error occurred while executing the "contao:install-web-dir" command:
In ContaoModuleBundle.php line 36:
The module folder "system/modules/contao-filecredits" does not exist.
```
With Symfony 3.4.6 everything works fine.
The module name seems to be wrong and changed from 3.4.6 to 3.4.7:
- 3.4.6: system/modules/filecredits
- 3.4.7: system/modules/contao-filecredits
|
defect
|
installation of legacy modules under symfony fails symfony was released hours ago when using legacy modules we get the following error on running composer update compiling component files contao managerbundle composer scripthandler initializeapplication in contaomodulebundle php line the module folder system modules contao filecredits does not exist script contao managerbundle composer scripthandler initializeapplication handling the post update cmd event terminated with an exception an error occurred while executing the contao install web dir command in contaomodulebundle php line the module folder system modules contao filecredits does not exist with symfony everything works fine the module name seems to be wrong and changed from to system modules filecredits system modules contao filecredits
| 1
|
46,148
| 5,790,602,119
|
IssuesEvent
|
2017-05-02 01:19:52
|
chartjs/Chart.js
|
https://api.github.com/repos/chartjs/Chart.js
|
closed
|
v2.5.0 performance issue?
|
Category: Bug Needs Investigation Needs test case Performance
|
Hi,
My simple app (angular2) is noticeably slower (loading? rendering? not sure) after updating to 2.5.0. Anyone experiencing the same issue?
I'll try to put together a jsfiddle.
|
1.0
|
v2.5.0 performance issue? - Hi,
My simple app (angular2) is noticeably slower (loading? rendering? not sure) after updating to 2.5.0. Anyone experiencing the same issue?
I'll try to put together a jsfiddle.
|
non_defect
|
performance issue hi my simple app is noticeable slower loading rendering not sure after updating to anyone experiencing the same issue i ll try to put together a jsfiddle
| 0
|
75,391
| 14,445,145,330
|
IssuesEvent
|
2020-12-07 22:25:38
|
SecretFoundation/SecretWebsite
|
https://api.github.com/repos/SecretFoundation/SecretWebsite
|
closed
|
The headers are too close in size here on the blog posts
|
bug dev / code
|
Adjust the subheader to be even smaller.

|
1.0
|
The headers are too close in size here on the blog posts - Adjust the subheader to be even smaller.

|
non_defect
|
the headers are too close in size here on the blog posts adjust the subheader to be even smaller
| 0
|
73,816
| 24,812,384,101
|
IssuesEvent
|
2022-10-25 10:26:33
|
jpcsp/jpcsp
|
https://api.github.com/repos/jpcsp/jpcsp
|
closed
|
jpcsp not working in win7 with java7 update 5(jre)
|
Type-Defect Priority-Medium auto-migrated
|
```
when i start jpcsp in my new win7 system it close automatically without showing
error(in console window) i update "java 7 updated version 5"
it not working.(by my basic knowledge i found that error JVM creation and java
not found)but i install java and check in many site that conform java install.
i think it can't found path of jre and hot key registry store in win7.thanks
for creating and developing jpcsp.
```
Original issue reported on code.google.com by `srihari2...@gmail.com` on 31 Jul 2012 at 5:11
|
1.0
|
jpcsp not working in win7 with java7 update 5(jre) - ```
when i start jpcsp in my new win7 system it close automatically without showing
error(in console window) i update "java 7 updated version 5"
it not working.(by my basic knowledge i found that error JVM creation and java
not found)but i install java and check in many site that conform java install.
i think it can't found path of jre and hot key registry store in win7.thanks
for creating and developing jpcsp.
```
Original issue reported on code.google.com by `srihari2...@gmail.com` on 31 Jul 2012 at 5:11
|
defect
|
jpcsp not working in with update jre when i start jpcsp in my new system it close automatically without showing error in console window i update java updated version it not working by my basic knowledge i found that error jvm creation and java not found but i install java and check in many site that conform java install i think it can t found path of jre and hot key registry store in thanks for creating and developing jpcsp original issue reported on code google com by gmail com on jul at
| 1
|
75,041
| 25,497,019,882
|
IssuesEvent
|
2022-11-27 20:02:53
|
ascott18/TellMeWhen
|
https://api.github.com/repos/ascott18/TellMeWhen
|
closed
|
[Bug]:
|
T: defect S: more-info-needed
|
### WoW Version
Retail
### TellMeWhen Version
10.0.1
### Describe the bug
I tracked Windfury Totem through TMW on my shaman, but since prepatch I can no longer click on the Windfury Totem icon on screen when opening TMW options. The icon doesn't even disappear after disabling or deleting the whole addon. It does still work, though, and disappears whenever I drop the totem and reappears when out of range or after the 2-minute CD window. I don't have this problem on any other character and can't access the icon, nor move it or anything else. It is just there, although it is nowhere shown in the options.
I also can't export a string as it literally doesn't show up on the addon.
### Export Strings
```text
N/A
```
|
1.0
|
[Bug]: - ### WoW Version
Retail
### TellMeWhen Version
10.0.1
### Describe the bug
I tracked Windfury Totem through TMW on my shaman, but since prepatch I can no longer click on the Windfury Totem icon on screen when opening TMW options. The icon doesn't even disappear after disabling or deleting the whole addon. It does still work, though, and disappears whenever I drop the totem and reappears when out of range or after the 2-minute CD window. I don't have this problem on any other character and can't access the icon, nor move it or anything else. It is just there, although it is nowhere shown in the options.
I also can't export a string as it literally doesn't show up on the addon.
### Export Strings
```text
N/A
```
|
defect
|
wow version retail tellmewhen version describe the bug i tracked windfury totem through tmw on my shaman but since prepatch i can no longer click on the windury totem icon on screen when opening tmw options the icon doesn t even disappear after disabling nor deleting the whole addon it does still work though and disappears whenever i drop the totem and reappears when out of range or after the minuten cd window i don t have this problem on any other character and can t access the icon nor move it or anything else it is just there altough it is nowhere shown in the options i also can t export a string as it literally doesn t show up on the addon export strings text n a
| 1
|
79,309
| 28,097,110,866
|
IssuesEvent
|
2023-03-30 16:33:34
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
opened
|
[Other] No VA.gov Experience Standard for the issue found. (00.00.1)
|
508/Accessibility 508-defect-2 Supplemental Claims collab-cycle-feedback Staging CCIssue00.00 CC-Dashboard benefits-team-1
|
### General Information
#### VFS team name
Benefits Team 1
#### VFS product name
Supplemental Claims Form 20-0995
#### VFS feature name
Supplemental Claims Form 20-0995
#### Point of Contact/Reviewers
Brian DeConinck - @briandeconinck - Accessibility
*For more information on how to interpret this ticket, please refer to the [Anatomy of a Staging Review issue ticket](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/Anatomy-of-a-Staging-Review-Issue-ticket.2060320997.html) guidance on Platform Website.
---
### Platform Issue
No VA.gov Experience Standard for the issue found.
### Issue Details
On [Step 2 of 4: Issues for review - Select the issues you'd like us to review], when a user adds a new issue, and on [Step 3 of 4: New and relevant evidence - Confirm or edit your evidence], the Remove button is a single-click destructive action. When clicking Remove, the item is removed without any user confirmation. Additionally, there's nothing announced by screen readers to indicate that a change has occurred; focus just moves to the next focusable element.
It's easy to imagine some scenarios where a user might accidentally trigger the Remove button and not realize that they did it. Screen reader users may hit the wrong key but think they hit [Tab]. A low-vision user on a touch device might pinch-to-zoom and accidentally tap the Remove button in the process, and not see the issue disappear.
Since users get an opportunity to review everything prior to submission, this is probably not a violation of experience standard 03.09 or [WCAG 3.3.4](https://www.w3.org/WAI/WCAG21/Understanding/error-prevention-legal-financial-data.html). But I do think it's in the spirit of WCAG 3.3.4.
A similar issue exists on Step 3 of 4: New and relevant evidence - Upload your supporting evidence, which follows the [design system Files pattern](https://design.va.gov/patterns/ask-users-for/files). I'll be filing an issue with the DST to address this in that pattern as well.
### Link, screenshot or steps to recreate
### VA.gov Experience Standard
[Category Number 00, Issue Number 00](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/VA.gov-experience-standards.1683980311.html)
### Other References
### Platform Recommendation
Two recommended changes:
1. Some kind of status message that receives focus when an issue has been removed, similar to the background-only alerts in use when changing contact information. If a visible status message isn't practical, at least some `aria-live` update should be announced to screen reader users.
2. Some kind of intermediate confirmation step between a user clicking Remove and the issue actually being removed. I hate to recommend a modal for anything ever, but maybe a modal?
---
### VFS Guidance
- Close the ticket when the issue has been resolved or validated by your Product Owner
- If your team has additional questions or needs Platform help validating the issue, please comment on the ticket
- Some feedback provided may be out of scope for your iteration of the product, however, Platform's OCTO leadership has stated that all identified issues need to be documented and it is still your responsibility to resolve the issue.
- If you do not believe that this Staging Review issue ticket is the responsibility of your team, comment below providing an explanation and who you believe is responsible. Please tag the Point of Contact/Reviewers. Governance team will research and will follow up.
|
1.0
|
[Other] No VA.gov Experience Standard for the issue found. (00.00.1) - ### General Information
#### VFS team name
Benefits Team 1
#### VFS product name
Supplemental Claims Form 20-0995
#### VFS feature name
Supplemental Claims Form 20-0995
#### Point of Contact/Reviewers
Brian DeConinck - @briandeconinck - Accessibility
*For more information on how to interpret this ticket, please refer to the [Anatomy of a Staging Review issue ticket](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/Anatomy-of-a-Staging-Review-Issue-ticket.2060320997.html) guidance on Platform Website.
---
### Platform Issue
No VA.gov Experience Standard for the issue found.
### Issue Details
On [Step 2 of 4: Issues for review - Select the issues you'd like us to review], when a user adds a new issue, and on [Step 3 of 4: New and relevant evidence - Confirm or edit your evidence], the Remove button is a single-click destructive action. When clicking Remove, the item is removed without any user confirmation. Additionally, there's nothing announced by screen readers to indicate that a change has occurred; focus just moves to the next focusable element.
It's easy to imagine some scenarios where a user might accidentally trigger the Remove button and not realize that they did it. Screen reader users may hit the wrong key but think they hit [Tab]. A low-vision user on a touch device might pinch-to-zoom and accidentally tap the Remove button in the process, and not see the issue disappear.
Since users get an opportunity to review everything prior to submission, this is probably not a violation of experience standard 03.09 or [WCAG 3.3.4](https://www.w3.org/WAI/WCAG21/Understanding/error-prevention-legal-financial-data.html). But I do think it's in the spirit of WCAG 3.3.4.
A similar issue exists on Step 3 of 4: New and relevant evidence - Upload your supporting evidence, which follows the [design system Files pattern](https://design.va.gov/patterns/ask-users-for/files). I'll be filing an issue with the DST to address this in that pattern as well.
### Link, screenshot or steps to recreate
### VA.gov Experience Standard
[Category Number 00, Issue Number 00](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/VA.gov-experience-standards.1683980311.html)
### Other References
### Platform Recommendation
Two recommended changes:
1. Some kind of status message that receives focus when an issue has been removed, similar to the background-only alerts in use when changing contact information. If a visible status message isn't practical, at least some `aria-live` update should be announced to screen reader users.
2. Some kind of intermediate confirmation step between a user clicking Remove and the issue actually being removed. I hate to recommend a modal for anything ever, but maybe a modal?
---
### VFS Guidance
- Close the ticket when the issue has been resolved or validated by your Product Owner
- If your team has additional questions or needs Platform help validating the issue, please comment on the ticket
- Some feedback provided may be out of scope for your iteration of the product, however, Platform's OCTO leadership has stated that all identified issues need to be documented and it is still your responsibility to resolve the issue.
- If you do not believe that this Staging Review issue ticket is the responsibility of your team, comment below providing an explanation and who you believe is responsible. Please tag the Point of Contact/Reviewers. Governance team will research and will follow up.
|
defect
|
no va gov experience standard for the issue found general information vfs team name benefits team vfs product name supplemental claims form vfs feature name supplemental claims form point of contact reviewers brian deconinck briandeconinck accessibility for more information on how to interpret this ticket please refer to the guidance on platform website platform issue no va gov experience standard for the issue found issue details on the remove button is a single click destructive action when clicking remove the item is removed without any user confirmation additionally there s nothing announced by screen readers to indicate that a change has occurred focus just moves to the next focusable element it s easy to imagine some scenarios where a user might accidentally trigger the remove button and not realize that they did it screen reader users may hit the wrong key but think they hit a low vision user on a touch device might pinch to zoom and accidentally tap the remove button in the process and not see the issue disappear since users get an opportunity to review everything prior to submission this is probably not a violation of experience standard or but i do think it s in the spirit of wcag a similar issue exists on step of new and relevant evidence upload your supporting evidence which follows the i ll be filing an issue with the dst to address this in that pattern as well link screenshot or steps to recreate va gov experience standard other references platform recommendation two recommended changes some kind of status message that receives focus when an issue has been removed similar to the background only alerts in use when changing contact information if a visible status message isn t practical at least some aria live update should be announced to screen reader users some kind of intermediate confirmation step between a user clicking remove and the issue actually being removed i hate to recommend a modal for anything ever but maybe a modal vfs guidance close 
the ticket when the issue has been resolved or validated by your product owner if your team has additional questions or needs platform help validating the issue please comment on the ticket some feedback provided may be out of scope for your iteration of the product however platform s octo leadership has stated that all identified issues need to be documented and it is still your responsibility to resolve the issue if you do not believe that this staging review issue ticket is the responsibility of your team comment below providing an explanation and who you believe is responsible please tag the point of contact reviewers governance team will research and will follow up
| 1
|
80,336
| 30,239,720,340
|
IssuesEvent
|
2023-07-06 12:46:45
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
Hazelcast Kubernetes clustering failure after upgrade from 5.1.3 to 5.1.6
|
Type: Defect
|
After 5.1.6, peer to peer clustering stopped working.
Looks like port 8080 is being used to reach out to peer node
**Sending master question to [peer_ip_address]:8080
Connection to: [peer_ip_address]:8080 streamId:-1 is not yet in progress
[peer_ip_address]:8080 is added to the blacklist.**
With version 5.1.3, correct port 5701 was being used
**Sending master question to [peer_ip_address]:5701
Connection to: [peer_ip_address]:5701 streamId:-1 is not yet in progress
Established socket connection between /my_ip_address:51745 and /peer_ip_address:5701**
What changed ?
How do I change the config to use the correct port ?
|
1.0
|
Hazelcast Kubernetes clustering failure after upgrade from 5.1.3 to 5.1.6 - After 5.1.6, peer to peer clustering stopped working.
Looks like port 8080 is being used to reach out to peer node
**Sending master question to [peer_ip_address]:8080
Connection to: [peer_ip_address]:8080 streamId:-1 is not yet in progress
[peer_ip_address]:8080 is added to the blacklist.**
With version 5.1.3, correct port 5701 was being used
**Sending master question to [peer_ip_address]:5701
Connection to: [peer_ip_address]:5701 streamId:-1 is not yet in progress
Established socket connection between /my_ip_address:51745 and /peer_ip_address:5701**
What changed ?
How do I change the config to use the correct port ?
|
defect
|
hazelcast kubernetes clustering failure after upgrade form to after peer to peer clustering stopped working looks like port is being used to reach out to peer node sending master question to connection to streamid is not yet in progress is added to the blacklist with version correct port was being used sending master question to connection to streamid is not yet in progress established socket connection between my ip address and peer ip address what changed how do i change the config to use the correct port
| 1
|
72,989
| 19,541,238,181
|
IssuesEvent
|
2022-01-01 00:11:38
|
apache/camel-k
|
https://api.github.com/repos/apache/camel-k
|
closed
|
Kaniko in builder pod failed to push image to docker registry
|
area/build-system status/stale
|
I am getting below error in builder pod, where Kaniko failed to push the images to local registry. please help to solve this issue.
camel-k installation command:
kamel install --registry-insecure true --registry 10.35.110.195:5000 --http-proxy-secret kamel-proxy --maven-settings=configmap:maven-settings/settings.xml --kaniko-build-cache=false -n service-integration
creating simple Hello world java camel integration.
kamel run -n service-integration Sample.java
Error in pod:-
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "10.35.110.195:5000/service-integration/camel-k-kit-bp62q43fet3g2cld1sgg:44220091": creating push check transport for 10.35.110.195:5000 failed: Get http://10.35.110.195:5000/v2/: dial tcp 10.35.110.195:5000: connect: connection refused
Br,
Tanmoy
|
1.0
|
Kaniko in builder pod failed to push image to docker registry - I am getting below error in builder pod, where Kaniko failed to push the images to local registry. please help to solve this issue.
camel-k installation command:
kamel install --registry-insecure true --registry 10.35.110.195:5000 --http-proxy-secret kamel-proxy --maven-settings=configmap:maven-settings/settings.xml --kaniko-build-cache=false -n service-integration
creating simple Hello world java camel integration.
kamel run -n service-integration Sample.java
Error in pod:-
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "10.35.110.195:5000/service-integration/camel-k-kit-bp62q43fet3g2cld1sgg:44220091": creating push check transport for 10.35.110.195:5000 failed: Get http://10.35.110.195:5000/v2/: dial tcp 10.35.110.195:5000: connect: connection refused
Br,
Tanmoy
|
non_defect
|
kaniko in builder pod failed to push image to docker registry i am getting below error in builder pod where kaniko failed to push the images to local registry please help to solve this issue camel k installation command kamel install registry insecure true registry http proxy secret kamel proxy maven settings configmap maven settings settings xml kaniko build cache false n service integration creating simple hello world java camel integration kamel run n service integration sample java error in pod error checking push permissions make sure you entered the correct tag name and that you are authenticated correctly and try again checking push permission for service integration camel k kit creating push check transport for failed get dial tcp connect connection refused br tanmoy
| 0
|
619,893
| 19,539,066,748
|
IssuesEvent
|
2021-12-31 15:21:27
|
tinkerbell/osie
|
https://api.github.com/repos/tinkerbell/osie
|
closed
|
Debug aarch64 vm based power-on/phone-home tests
|
kind/bug priority/backlog
|
The qemu-system-aarch64 user-mode emulation based boot/phone-home tests from #227 don't work. There's no serial output so its hard to debug atm. Is the test giving up too early? Is something done wrong with qemu? Maybe the aarch64 uefi can't boot from virtio-disks (:thinking:). Need to try the vnc graphical output, maybe there's data there...
## Expected Behaviour
test_boot_and_phone_home works for aarch64 VMs
## Current Behaviour
test_boot_and_phone_home always fails
|
1.0
|
Debug aarch64 vm based power-on/phone-home tests - The qemu-system-aarch64 user-mode emulation based boot/phone-home tests from #227 don't work. There's no serial output so its hard to debug atm. Is the test giving up too early? Is something done wrong with qemu? Maybe the aarch64 uefi can't boot from virtio-disks (:thinking:). Need to try the vnc graphical output, maybe there's data there...
## Expected Behaviour
test_boot_and_phone_home works for aarch64 VMs
## Current Behaviour
test_boot_and_phone_home always fails
|
non_defect
|
debug vm based power on phone home tests the qemu system user mode emulation based boot phone home tests from don t work there s no serial output so its hard to debug atm is the test giving up too early is something done wrong with qemu maybe the uefi can t boot from virtio disks thinking need to try the vnc graphical output maybe there s data there expected behaviour test boot and phone home works for vms current behaviour test boot and phone home always fails
| 0
|
399,236
| 27,233,171,658
|
IssuesEvent
|
2023-02-21 14:37:04
|
reliatec-gmbh/LibreClinica
|
https://api.github.com/repos/reliatec-gmbh/LibreClinica
|
closed
|
Update installation manual
|
documentation
|
**Description:**
It was reported #362 that there is one configuration step missing in installation manual. I am creating ticket for this so that we do not forget to update the manual in next release.
|
1.0
|
Update installation manual - **Description:**
It was reported #362 that there is one configuration step missing in installation manual. I am creating ticket for this so that we do not forget to update the manual in next release.
|
non_defect
|
update installation manual description it was reported that there is one configuration step missing in installation manual i am creating ticket for this so that we do not forget to update the manual in next release
| 0
|
44,174
| 9,547,581,728
|
IssuesEvent
|
2019-05-02 00:07:24
|
toumangg/problemsolving
|
https://api.github.com/repos/toumangg/problemsolving
|
closed
|
LeetCode problem 208 Implement Trie (Prefix Tree)
|
leetcode
|
Implement a trie with insert, search, and startsWith methods.
Example:
PrefixTree tree = new PrefixTree();
tree.insert("apple");
tree.search("apple"); // returns true
tree.search("app"); // returns false
tree.startsWith("app"); // returns true
tree.insert("app");
tree.search("app"); // returns true
Note:
You may assume that all inputs are consist of lowercase letters a-z.
All inputs are guaranteed to be non-empty strings.
|
1.0
|
LeetCode problem 208 Implement Trie (Prefix Tree) - Implement a trie with insert, search, and startsWith methods.
Example:
PrefixTree tree = new PrefixTree();
tree.insert("apple");
tree.search("apple"); // returns true
tree.search("app"); // returns false
tree.startsWith("app"); // returns true
tree.insert("app");
tree.search("app"); // returns true
Note:
You may assume that all inputs are consist of lowercase letters a-z.
All inputs are guaranteed to be non-empty strings.
|
non_defect
|
leetcode problem implement trie prefix tree implement a trie with insert search and startswith methods example prefixtree tree new prefixtree tree insert apple tree search apple returns true tree search app returns false tree startswith app returns true tree insert app tree search app returns true note you may assume that all inputs are consist of lowercase letters a z all inputs are guaranteed to be non empty strings
| 0
|
187,532
| 6,758,748,688
|
IssuesEvent
|
2017-10-24 15:03:10
|
marty30/Drones-Simulator
|
https://api.github.com/repos/marty30/Drones-Simulator
|
closed
|
Add game mode (TEAMPLAY)
|
major priority
|
In this issue you should be able to pick a game mode and this should be reflected in the game visualisation
|
1.0
|
Add game mode (TEAMPLAY) - In this issue you should be able to pick a game mode and this should be reflected in the game visualisation
|
non_defect
|
add game mode teamplay in this issue you should be able to pick a game mode and this should be reflected in the game visualisation
| 0
|
68,841
| 21,921,189,782
|
IssuesEvent
|
2022-05-22 15:44:43
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Large space hierarchies get badly horizontally truncated by not being able to resize the SpacePanel
|
T-Defect
|
### Steps to reproduce
1. Create a large space hierarchy, 4-5 levels deep
2. Join the spaces & subspaces
3. Observe that the names of the subspaces get badly truncated by the SpacePanel only being 250px wide
4. Have a claustrophobic panic attack
5. Manually override `.mx_SpacePanel .mx_SpaceTreeLevel { max-width }` to be 350px wide
6. Wish that you could resize the SpacePanel by dragging, like you can the LeftPanel & RightPanel.
### Outcome
#### What did you expect?
Resizable SpacePanel
#### What happened instead?
claustrophobia
### Operating system
macOS
### Browser information
nightly
### URL for webapp
_No response_
### Application version
nightly
### Homeserver
matrix.org
### Will you send logs?
No
|
1.0
|
Large space hierarchies get badly horizontally truncated by not being able to resize the SpacePanel - ### Steps to reproduce
1. Create a large space hierarchy, 4-5 levels deep
2. Join the spaces & subspaces
3. Observe that the names of the subspaces get badly truncated by the SpacePanel only being 250px wide
4. Have a claustrophobic panic attack
5. Manually override `.mx_SpacePanel .mx_SpaceTreeLevel { max-width }` to be 350px wide
6. Wish that you could resize the SpacePanel by dragging, like you can the LeftPanel & RightPanel.
### Outcome
#### What did you expect?
Resizable SpacePanel
#### What happened instead?
claustrophobia
### Operating system
macOS
### Browser information
nightly
### URL for webapp
_No response_
### Application version
nightly
### Homeserver
matrix.org
### Will you send logs?
No
|
defect
|
large space hierarchies get badly horizontally truncated by not being able to resize the spacepanel steps to reproduce create a large space hierarchy levels deep join the spaces subspaces observe that the names of the subspaces get badly truncated by the spacepanel only being wide have a claustrophobic panic attack manually override mx spacepanel mx spacetreelevel max width to be wide wish that you could resize the spacepanel by dragging like you can the leftpanel rightpanel outcome what did you expect resizable spacepanel what happened instead claustrophobia operating system macos browser information nightly url for webapp no response application version nightly homeserver matrix org will you send logs no
| 1
|
393,010
| 26,968,010,585
|
IssuesEvent
|
2023-02-09 00:46:58
|
MSU-CS4360-WhiteHat/ecommerce-scam-detector
|
https://api.github.com/repos/MSU-CS4360-WhiteHat/ecommerce-scam-detector
|
closed
|
Finish filling out Risks on the project proposal
|
documentation enhancement
|
We need to finish filling out the Risks on the project proposal.
|
1.0
|
Finish filling out Risks on the project proposal - We need to finish filling out the Risks on the project proposal.
|
non_defect
|
finish filling out risks on the project proposal we need to finish filling out the risks on the project proposal
| 0
|
740,771
| 25,766,820,188
|
IssuesEvent
|
2022-12-09 03:01:09
|
steedos/steedos-platform
|
https://api.github.com/repos/steedos/steedos-platform
|
closed
|
小铃铛通知中用户头像是坏的
|
done priority: High
|
<img width="1920" alt="image" src="https://user-images.githubusercontent.com/26241897/205594598-f12e3867-d235-432f-b64e-f3ac86466774.png">
ldx排查结果:
<img width="1077" alt="image" src="https://user-images.githubusercontent.com/41402189/205827021-39a97a42-5ec0-40a8-8530-39c8dd09080a.png">
此外
<img width="856" alt="image" src="https://user-images.githubusercontent.com/41402189/205826775-7453d569-8a81-47f5-bb16-4f14a3c0c30c.png">
|
1.0
|
小铃铛通知中用户头像是坏的 - <img width="1920" alt="image" src="https://user-images.githubusercontent.com/26241897/205594598-f12e3867-d235-432f-b64e-f3ac86466774.png">
ldx排查结果:
<img width="1077" alt="image" src="https://user-images.githubusercontent.com/41402189/205827021-39a97a42-5ec0-40a8-8530-39c8dd09080a.png">
此外
<img width="856" alt="image" src="https://user-images.githubusercontent.com/41402189/205826775-7453d569-8a81-47f5-bb16-4f14a3c0c30c.png">
|
non_defect
|
小铃铛通知中用户头像是坏的 img width alt image src ldx排查结果: img width alt image src 此外 img width alt image src
| 0
|
46,436
| 13,055,911,853
|
IssuesEvent
|
2020-07-30 03:05:54
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
opened
|
[wavereform] no tests (Trac #1196)
|
Incomplete Migration Migrated from Trac combo reconstruction defect
|
Migrated from https://code.icecube.wisc.edu/ticket/1196
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "No tests. Add some in `resources/test`.",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo reconstruction",
"summary": "[wavereform] no tests",
"priority": "critical",
"keywords": "",
"time": "2015-08-19T18:07:37",
"milestone": "",
"owner": "jbraun",
"type": "defect"
}
```
|
1.0
|
[wavereform] no tests (Trac #1196) - Migrated from https://code.icecube.wisc.edu/ticket/1196
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "No tests. Add some in `resources/test`.",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo reconstruction",
"summary": "[wavereform] no tests",
"priority": "critical",
"keywords": "",
"time": "2015-08-19T18:07:37",
"milestone": "",
"owner": "jbraun",
"type": "defect"
}
```
|
defect
|
no tests trac migrated from json status closed changetime description no tests add some in resources test reporter david schultz cc resolution fixed ts component combo reconstruction summary no tests priority critical keywords time milestone owner jbraun type defect
| 1
|
8,678
| 5,914,731,366
|
IssuesEvent
|
2017-05-22 04:39:51
|
broesamle/clip_8
|
https://api.github.com/repos/broesamle/clip_8
|
closed
|
larger location indicators for error feedback
|
high-prio usability
|
The square around critical locations should always appear in an appropriate size.
Currently it is fixed pixel counts but it could be derived in proportion to of the `viewBox` width.
|
True
|
larger location indicators for error feedback - The square around critical locations should always appear in an appropriate size.
Currently it is fixed pixel counts but it could be derived in proportion to of the `viewBox` width.
|
non_defect
|
larger location indicators for error feedback the square around critical locations should always appear in an appropriate size currently it is fixed pixel counts but it could be derived in proportion to of the viewbox width
| 0
|
6,735
| 2,610,274,441
|
IssuesEvent
|
2015-02-26 19:27:50
|
chrsmith/scribefire-chrome
|
https://api.github.com/repos/chrsmith/scribefire-chrome
|
closed
|
Cannot configure BLOGGER account.....
|
auto-migrated Priority-Medium Type-Defect
|
```
What's the problem?
I get this error message when attempting to configure a BLOGGER account, just
after entering blog link like "name.blogspot.com":
Well, this is embarrassing...
While trying to determine your blog's settings, ScribeFire tripped over this
code: UNKNOWN_BLOG_TYPE
You're welcome to try and configure your blog manually.
What browser are you using?
Google CHROME 9.0.597.98
What version of ScribeFire are you running?
ScribeFire - Versão: 1.4.3.0
```
-----
Original issue reported on code.google.com by `constant...@gmail.com` on 26 Feb 2011 at 12:04
* Merged into: #160
|
1.0
|
Cannot configure BLOGGER account..... - ```
What's the problem?
I get this error message when attempting to configure a BLOGGER account, just
after entering blog link like "name.blogspot.com":
Well, this is embarrassing...
While trying to determine your blog's settings, ScribeFire tripped over this
code: UNKNOWN_BLOG_TYPE
You're welcome to try and configure your blog manually.
What browser are you using?
Google CHROME 9.0.597.98
What version of ScribeFire are you running?
ScribeFire - Versão: 1.4.3.0
```
-----
Original issue reported on code.google.com by `constant...@gmail.com` on 26 Feb 2011 at 12:04
* Merged into: #160
|
defect
|
cannot configure blogger account what s the problem i get this error message when attempting to configure a blogger account just after entering blog link like name blogspot com well this is embarrassing while trying to determine your blog s settings scribefire tripped over this code unknown blog type you re welcome to try and configure your blog manually what browser are you using google chrome what version of scribefire are you running scribefire versão original issue reported on code google com by constant gmail com on feb at merged into
| 1
|
10,864
| 12,841,917,259
|
IssuesEvent
|
2020-07-08 00:34:13
|
libass/libass
|
https://api.github.com/repos/libass/libass
|
closed
|
Lines with shadows don't rotate the same way as xy-vsfilter
|
compatibility
|
I thought about submitting this as a bug to xy-vsfilter because its behavior is completely ludicrously insane on this matter, but I don't think anyone else would agree to changing rendering behavior because _compatibility_. And also development on that front is dead or something?
Let's say we have a line like the following: `{\fnArial\fs50\an5\bord4\xshad100\frz0}Testing`, and we increment `\frz` by 15 degrees each frame.
With libass, the above looks like the following:

Ignoring the weird, glitchy shadow rendering, which is a different issue, this is how I'd expect it to behave.
But, expected behavior isn't good enough for xy-vsfilter. It has to do things in the stupidest way possible:

It seems that xy-vsfilter is doing the following:
1. Take the origin of rotation distance to be that of the distance between the shadow's position (at the line's current alignment), and the origin (by default, the line's position).
2. Rotate the shadow using this origin distance.
3. Calculate the position where the corresponding run should be relative to the shadow's position.
4. Draw the run and the shadow.
As proof of this, in xy-vsfilter, the line `{\an5\shad50\org(370,230)\pos(320,180)\frz0\p1}m 0 0 l 100 0 100 100 0 100` will rotate in place. As will `{\an2\shad50\org(370,180)\pos(320,180)\frz0\p1}m 0 0 l 100 0 100 100 0 100`. Note the origin change with the alignment.
Ignoring my repetition that this behavior is downright fucking idiotic will get you results like this:

Now, I haven't checked to see if old vsfilter behaves this way, because I don't have a copy of it and I don't care to track one down. If it indeed exhibits this behavior, my suggestion is that everyone say "what the fuck, vsfilter" and continue on their merry way.
edit: it has been reported that old vsfilter does exhibit this behavior. Fabulous.
|
True
|
Lines with shadows don't rotate the same way as xy-vsfilter - I thought about submitting this as a bug to xy-vsfilter because its behavior is completely ludicrously insane on this matter, but I don't think anyone else would agree to changing rendering behavior because _compatibility_. And also development on that front is dead or something?
Let's say we have a line like the following: `{\fnArial\fs50\an5\bord4\xshad100\frz0}Testing`, and we increment `\frz` by 15 degrees each frame.
With libass, the above looks like the following:

Ignoring the weird, glitchy shadow rendering, which is a different issue, this is how I'd expect it to behave.
But, expected behavior isn't good enough for xy-vsfilter. It has to do things in the stupidest way possible:

It seems that xy-vsfilter is doing the following:
1. Take the origin of rotation distance to be that of the distance between the shadow's position (at the line's current alignment), and the origin (by default, the line's position).
2. Rotate the shadow using this origin distance.
3. Calculate the position where the corresponding run should be relative to the shadow's position.
4. Draw the run and the shadow.
As proof of this, in xy-vsfilter, the line `{\an5\shad50\org(370,230)\pos(320,180)\frz0\p1}m 0 0 l 100 0 100 100 0 100` will rotate in place. As will `{\an2\shad50\org(370,180)\pos(320,180)\frz0\p1}m 0 0 l 100 0 100 100 0 100`. Note the origin change with the alignment.
Ignoring my repetition that this behavior is downright fucking idiotic will get you results like this:

Now, I haven't checked to see if old vsfilter behaves this way, because I don't have a copy of it and I don't care to track one down. If it indeed exhibits this behavior, my suggestion is that everyone say "what the fuck, vsfilter" and continue on their merry way.
edit: it has been reported that old vsfilter does exhibit this behavior. Fabulous.
|
non_defect
|
lines with shadows don t rotate the same way as xy vsfilter i thought about submitting this as a bug to xy vsfilter because its behavior is completely ludicrously insane on this matter but i don t think anyone else would agree to changing rendering behavior because compatibility and also development on that front is dead or something let s say we have a line like the following fnarial testing and we increment frz by degrees each frame with libass the above looks like the following ignoring the weird glitchy shadow rendering which is a different issue this is how i d expect it to behave but expected behavior isn t good enough for xy vsfilter it has to do things in the stupidest way possible it seems that xy vsfilter is doing the following take the origin of rotation distance to be that of the distance between the shadow s position at the line s current alignment and the origin by default the line s position rotate the shadow using this origin distance calculate the position where the corresponding run should be relative to the shadow s position draw the run and the shadow as proof of this in xy vsfilter the line org pos m l will rotate in place as will org pos m l note the origin change with the alignment ignoring my repetition that this behavior is downright fucking idiotic will get you results like this now i haven t checked to see if old vsfilter behaves this way because i don t have a copy of it and i don t care to track one down if it indeed exhibits this behavior my suggestion is that everyone say what the fuck vsfilter and continue on their merry way edit it has been reported that old vsfilter does exhibit this behavior fabulous
| 0
|
1,628
| 3,814,655,965
|
IssuesEvent
|
2016-03-28 14:30:42
|
ga4gh/dockstore
|
https://api.github.com/repos/ga4gh/dockstore
|
closed
|
Experiment with and store etag to make refresh more efficient
|
enhancement web service
|
The GitHub api appears to have a way of retrieving whether a file has changed or not without using up a rate limit. https://developer.github.com/v3/#conditional-requests
Use this to make refresh more efficient (skip updating files with no updates) and use less rate limit.
|
1.0
|
Experiment with and store etag to make refresh more efficient - The GitHub api appears to have a way of retrieving whether a file has changed or not without using up a rate limit. https://developer.github.com/v3/#conditional-requests
Use this to make refresh more efficient (skip updating files with no updates) and use less rate limit.
|
non_defect
|
experiment with and store etag to make refresh more efficient the github api appears to have a way of retrieving whether a file has changed or not without using up a rate limit use this to make refresh more efficient skip updating files with no updates and use less rate limit
| 0
|
284,016
| 8,729,516,891
|
IssuesEvent
|
2018-12-10 20:30:35
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
opened
|
[studio] Don't fail if DB initialization fails to connect
|
enhancement priority: low
|
Currently, if the DB init script fails to connect to the database, Studio won't come up. Let's change that and assume that perhaps the DB is ready for us and the failure to connect/init isn't indicative of total DB failure.
The situation comes up when there is a cluster and one of the nodes already initialized the DB or when connecting to DB as a service where DB init is not necessary.
|
1.0
|
[studio] Don't fail if DB initialization fails to connect - Currently, if the DB init script fails to connect to the database, Studio won't come up. Let's change that and assume that perhaps the DB is ready for us and the failure to connect/init isn't indicative of total DB failure.
The situation comes up when there is a cluster and one of the nodes already initialized the DB or when connecting to DB as a service where DB init is not necessary.
|
non_defect
|
don t fail if db initialization fails to connect currently if the db init script fails to connect to the database studio won t come up let s change that and assume that perhaps the db is ready for us and the failure to connect init isn t indicative of total db failure the situation comes up when there is a cluster and one of the nodes already initialized the db or when connecting to db as a service where db init is not necessary
| 0
|
182,098
| 14,102,772,320
|
IssuesEvent
|
2020-11-06 09:16:33
|
ytorg/Yotter
|
https://api.github.com/repos/ytorg/Yotter
|
closed
|
Migrate to youtube-dlc
|
please test youtube
|
If you aren't aware, youtube-dl was hit with a DMCA takedown request at https://github.com/github/dmca/blob/master/2020/10/2020-10-23-RIAA.md
youtube-dlc is a fork of youtube-dl which is updated a lot more frequently and has a lot of features that youtube-dl doesn't have (for example, geo-blocking bypassing)
Repo Links:
https://pypi.org/project/youtube-dlc/
https://github.com/blackjack4494/yt-dlc
|
1.0
|
Migrate to youtube-dlc - If you aren't aware, youtube-dl was hit with a DMCA takedown request at https://github.com/github/dmca/blob/master/2020/10/2020-10-23-RIAA.md
youtube-dlc is a fork of youtube-dl which is updated a lot more frequently and has a lot of features that youtube-dl doesn't have (for example, geo-blocking bypassing)
Repo Links:
https://pypi.org/project/youtube-dlc/
https://github.com/blackjack4494/yt-dlc
|
non_defect
|
migrate to youtube dlc if you aren t aware youtube dl was hit with a dmca takedown request at youtube dlc is a fork of youtube dl which is updated a lot more frequently and has a lot of features that youtube dl doesn t have for example geo blocking bypassing repo links
| 0
|
26,278
| 4,650,470,522
|
IssuesEvent
|
2016-10-03 04:37:43
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
_kmeans chokes on large thresholds (Trac #1247)
|
defect Migrated from Trac prio-normal scipy.cluster
|
_Original ticket http://projects.scipy.org/scipy/ticket/1247 on 2010-07-24 by @kwgoodman, assigned to unknown._
_kmeans chokes on large thresholds:
>> from scipy import cluster
>> v = np.array([1,2,3,4,10], dtype=float)
>> cluster.vq.kmeans(v, 1, thresh=1e15)
(array([ 4.]), 2.3999999999999999)
>> cluster.vq.kmeans(v, 1, thresh=1e16)
<snip>
IndexError: list index out of range
The problem is in these lines:
diff = thresh+1.
while diff > thresh:
<snip>
if(diff > thresh):
If thresh is large then (thresh + 1) > thresh is False:
>> thresh = 1e16
>> diff = thresh + 1.0
>> diff > thresh
False
What's a use case for a large threshold? You might want to study the algorithm by seeing the result after one iteration (not to be confused with the iter input which is something else).
One fix is to use 2*thresh instead for thresh + 1. But that just pushes the problem out to higher thresholds:
>> thresh = 1e16
>> diff = 2 * thresh
>> diff > thresh
True
>> thresh = 1e400
>> diff = 2 * thresh
>> diff > thresh
False
A better fix is to replace:
if dist > thresh
with
if (dist > thresh) or (count = 0)
or
if (dist > thresh) or firstflag
|
1.0
|
_kmeans chokes on large thresholds (Trac #1247) - _Original ticket http://projects.scipy.org/scipy/ticket/1247 on 2010-07-24 by @kwgoodman, assigned to unknown._
_kmeans chokes on large thresholds:
>> from scipy import cluster
>> v = np.array([1,2,3,4,10], dtype=float)
>> cluster.vq.kmeans(v, 1, thresh=1e15)
(array([ 4.]), 2.3999999999999999)
>> cluster.vq.kmeans(v, 1, thresh=1e16)
<snip>
IndexError: list index out of range
The problem is in these lines:
diff = thresh+1.
while diff > thresh:
<snip>
if(diff > thresh):
If thresh is large then (thresh + 1) > thresh is False:
>> thresh = 1e16
>> diff = thresh + 1.0
>> diff > thresh
False
What's a use case for a large threshold? You might want to study the algorithm by seeing the result after one iteration (not to be confused with the iter input which is something else).
One fix is to use 2*thresh instead for thresh + 1. But that just pushes the problem out to higher thresholds:
>> thresh = 1e16
>> diff = 2 * thresh
>> diff > thresh
True
>> thresh = 1e400
>> diff = 2 * thresh
>> diff > thresh
False
A better fix is to replace:
if dist > thresh
with
if (dist > thresh) or (count = 0)
or
if (dist > thresh) or firstflag
|
defect
|
kmeans chokes on large thresholds trac original ticket on by kwgoodman assigned to unknown kmeans chokes on large thresholds from scipy import cluster v np array dtype float cluster vq kmeans v thresh array cluster vq kmeans v thresh indexerror list index out of range the problem is in these lines diff thresh while diff thresh if diff thresh if thresh is large then thresh thresh is false thresh diff thresh diff thresh false what s a use case for a large threshold you might want to study the algorithm by seeing the result after one iteration not to be confused with the iter input which is something else one fix is to use thresh instead for thresh but that just pushes the problem out to higher thresholds thresh diff thresh diff thresh true thresh diff thresh diff thresh false a better fix is to replace if dist thresh with if dist thresh or count or if dist thresh or firstflag
| 1
|
64,579
| 18,754,448,601
|
IssuesEvent
|
2021-11-05 08:54:58
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
opened
|
[🐛 Bug]: Gekodriver no longer works since 4.0.0 RC1
|
I-defect needs-triaging
|
### What happened?
Environment:
Centos 7
Python 3.8
Firefox 91.2
geckodriver 0.30
What happens?
Firefox will refuse to work after RC2 and above, says "No matching capabilities found'".
Error:
`CEx:SessionNotCreatedException('session not created: No matching capabilities found', None, ['#0 0x5602f0959f93 <unknown>', '#1 0x5602f0434908 <unknown>', '#2 0x5602f048cadc <unknown>', '#3 0x5602f048bee2 <unknown>', '#4 0x5602f048d56d <unknown>', '#5 0x5602f0487973 <unknown>', '#6 0x5602f045ddf4 <unknown>', '#7 0x5602f045ede5 <unknown>', '#8 0x5602f09892be <unknown>', '#9 0x5602f099eba0 <unknown>', '#10 0x5602f098a215 <unknown>', '#11 0x5602f099ffe8 <unknown>', '#12 0x5602f097e9db <unknown>', '#13 0x5602f09bb218 <unknown>', '#14 0x5602f09bb398 <unknown>', '#15 0x5602f09d66cd <unknown>', '#16 0x7f201f087ea5 <unknown>', ''])
`
Code used:
`options = FirefoxOptions()
try:
driver = webdriver.Firefox(options=options,log_path=TMP_DIR_LOG+'ff_webdriver-%s.log' % randomString(4))
time.sleep(1)
except TimeoutException:
print("Firefox took more than 60 seconds start!")
print("FFEx:"+repr(e))
except Exception as e:
print("FFEx:"+repr(e))`
### How can we reproduce the issue?
```shell
Try the simple cod above under Selenium 4.0.0 RC2 and above with Python 3.8
The above code works fine with Selenium 4.0.0. RC1
```
### Relevant log output
```shell
CEx:SessionNotCreatedException('session not created: No matching capabilities found', None, ['#0 0x5602f0959f93 <unknown>', '#1 0x5602f0434908 <unknown>', '#2 0x5602f048cadc <unknown>', '#3 0x5602f048bee2 <unknown>', '#4 0x5602f048d56d <unknown>', '#5 0x5602f0487973 <unknown>', '#6 0x5602f045ddf4 <unknown>', '#7 0x5602f045ede5 <unknown>', '#8 0x5602f09892be <unknown>', '#9 0x5602f099eba0 <unknown>', '#10 0x5602f098a215 <unknown>', '#11 0x5602f099ffe8 <unknown>', '#12 0x5602f097e9db <unknown>', '#13 0x5602f09bb218 <unknown>', '#14 0x5602f09bb398 <unknown>', '#15 0x5602f09d66cd <unknown>', '#16 0x7f201f087ea5 <unknown>', ''])
Unable to start driver UnboundLocalError("local variable 'driver' referenced before assignment").
```
### Operating System
Centos 7
### Selenium version
Python 4.0.0
### What are the browser(s) and version(s) where you see this issue?
Firefox 91
### What are the browser driver(s) and version(s) where you see this issue?
GeckoDriver 0.30
### Are you using Selenium Grid?
_No response_
|
1.0
|
[🐛 Bug]: Gekodriver no longer works since 4.0.0 RC1 - ### What happened?
Environment:
Centos 7
Python 3.8
Firefox 91.2
geckodriver 0.30
What happens?
Firefox will refuse to work after RC2 and above, says "No matching capabilities found'".
Error:
`CEx:SessionNotCreatedException('session not created: No matching capabilities found', None, ['#0 0x5602f0959f93 <unknown>', '#1 0x5602f0434908 <unknown>', '#2 0x5602f048cadc <unknown>', '#3 0x5602f048bee2 <unknown>', '#4 0x5602f048d56d <unknown>', '#5 0x5602f0487973 <unknown>', '#6 0x5602f045ddf4 <unknown>', '#7 0x5602f045ede5 <unknown>', '#8 0x5602f09892be <unknown>', '#9 0x5602f099eba0 <unknown>', '#10 0x5602f098a215 <unknown>', '#11 0x5602f099ffe8 <unknown>', '#12 0x5602f097e9db <unknown>', '#13 0x5602f09bb218 <unknown>', '#14 0x5602f09bb398 <unknown>', '#15 0x5602f09d66cd <unknown>', '#16 0x7f201f087ea5 <unknown>', ''])
`
Code used:
`options = FirefoxOptions()
try:
driver = webdriver.Firefox(options=options,log_path=TMP_DIR_LOG+'ff_webdriver-%s.log' % randomString(4))
time.sleep(1)
except TimeoutException:
print("Firefox took more than 60 seconds start!")
print("FFEx:"+repr(e))
except Exception as e:
print("FFEx:"+repr(e))`
### How can we reproduce the issue?
```shell
Try the simple cod above under Selenium 4.0.0 RC2 and above with Python 3.8
The above code works fine with Selenium 4.0.0. RC1
```
### Relevant log output
```shell
CEx:SessionNotCreatedException('session not created: No matching capabilities found', None, ['#0 0x5602f0959f93 <unknown>', '#1 0x5602f0434908 <unknown>', '#2 0x5602f048cadc <unknown>', '#3 0x5602f048bee2 <unknown>', '#4 0x5602f048d56d <unknown>', '#5 0x5602f0487973 <unknown>', '#6 0x5602f045ddf4 <unknown>', '#7 0x5602f045ede5 <unknown>', '#8 0x5602f09892be <unknown>', '#9 0x5602f099eba0 <unknown>', '#10 0x5602f098a215 <unknown>', '#11 0x5602f099ffe8 <unknown>', '#12 0x5602f097e9db <unknown>', '#13 0x5602f09bb218 <unknown>', '#14 0x5602f09bb398 <unknown>', '#15 0x5602f09d66cd <unknown>', '#16 0x7f201f087ea5 <unknown>', ''])
Unable to start driver UnboundLocalError("local variable 'driver' referenced before assignment").
```
### Operating System
Centos 7
### Selenium version
Python 4.0.0
### What are the browser(s) and version(s) where you see this issue?
Firefox 91
### What are the browser driver(s) and version(s) where you see this issue?
GeckoDriver 0.30
### Are you using Selenium Grid?
_No response_
|
defect
|
gekodriver no longer works since what happened environment centos python firefox geckodriver what happens firefox will refuse to work after and above says no matching capabilities found error cex sessionnotcreatedexception session not created no matching capabilities found none code used options firefoxoptions try driver webdriver firefox options options log path tmp dir log ff webdriver s log randomstring time sleep except timeoutexception print firefox took more than seconds start print ffex repr e except exception as e print ffex repr e how can we reproduce the issue shell try the simple cod above under selenium and above with python the above code works fine with selenium relevant log output shell cex sessionnotcreatedexception session not created no matching capabilities found none unable to start driver unboundlocalerror local variable driver referenced before assignment operating system centos selenium version python what are the browser s and version s where you see this issue firefox what are the browser driver s and version s where you see this issue geckodriver are you using selenium grid no response
| 1
|
366,379
| 10,819,922,915
|
IssuesEvent
|
2019-11-08 15:21:39
|
Santa-Polytecha/playroom-web
|
https://api.github.com/repos/Santa-Polytecha/playroom-web
|
closed
|
🔀 Add Vuejs router
|
Priority: Medium Status: Completed Type: Enhancement invalid
|
A router is necessary to navigate in the web app. For now, everything is under the `<App/>` component, and the navigation is counter-intuitive.
|
1.0
|
🔀 Add Vuejs router - A router is necessary to navigate in the web app. For now, everything is under the `<App/>` component, and the navigation is counter-intuitive.
|
non_defect
|
🔀 add vuejs router a router is necessary to navigate in the web app for now everything is under the component and the navigation is counter intuitive
| 0
|
66,921
| 20,763,797,766
|
IssuesEvent
|
2022-03-15 18:36:59
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
TreeTable: JS error `orginTableContent` not found
|
defect
|
**Environment:**
- PF Version: _11.0.0_
- PF Theme: _ALL_
- JSF + version: _ALL_
- Affected browsers: _ALL_
**To Reproduce**
Reproducer with Sticky Header:
[pf-4393.zip](https://github.com/primefaces/primefaces/files/8256060/pf-4393.zip)
"Uncaught ReferenceError: orginTableContent is not defined"

|
1.0
|
TreeTable: JS error `orginTableContent` not found - **Environment:**
- PF Version: _11.0.0_
- PF Theme: _ALL_
- JSF + version: _ALL_
- Affected browsers: _ALL_
**To Reproduce**
Reproducer with Sticky Header:
[pf-4393.zip](https://github.com/primefaces/primefaces/files/8256060/pf-4393.zip)
"Uncaught ReferenceError: orginTableContent is not defined"

|
defect
|
treetable js error orgintablecontent not found environment pf version pf theme all jsf version all affected browsers all to reproduce reproducer with sticky header uncaught referenceerror orgintablecontent is not defined
| 1
|
64,274
| 18,355,037,743
|
IssuesEvent
|
2021-10-08 16:56:05
|
matrix-org/sydent
|
https://api.github.com/repos/matrix-org/sydent
|
closed
|
Sending text messages (SMS) is silently failing on vector.im
|
T-Defect S-Major
|
All look good in the logs:
```
Oct 5 10:14:15 corus sydent-vectoris[16661]: 2021-10-05 10:14:15,659 - sydent.validators.msisdnvalidator - 126 - INFO - Attempting to text code [...] to 44[my phone number] (country 44) with originator {'type': 'alpha', 'text': 'Element'}
Oct 5 10:14:15 corus sydent-vectoris[16661]: 2021-10-05 10:14:15,659 - twisted - 147 - INFO - "::ffff:10.101.0.14" - - [05/Oct/2021:10:14:14 +0000] "POST /_matrix/identity/api/v1/validate/msisdn/requestToken HTTP/1.1" 200 94 "-" "Synapse/1.44.0rc2 (b=matrix-org-hotfixes,ebbd37b66)"
```
But no text message is ever received.
Not sure whether this is due to #410 or if there's an issue with the OpenMarket API causing it to silently fail sending the text message.
A good first step would be fixing #410 so we can tell if the OM API responds with an error.
|
1.0
|
Sending text messages (SMS) is silently failing on vector.im - All look good in the logs:
```
Oct 5 10:14:15 corus sydent-vectoris[16661]: 2021-10-05 10:14:15,659 - sydent.validators.msisdnvalidator - 126 - INFO - Attempting to text code [...] to 44[my phone number] (country 44) with originator {'type': 'alpha', 'text': 'Element'}
Oct 5 10:14:15 corus sydent-vectoris[16661]: 2021-10-05 10:14:15,659 - twisted - 147 - INFO - "::ffff:10.101.0.14" - - [05/Oct/2021:10:14:14 +0000] "POST /_matrix/identity/api/v1/validate/msisdn/requestToken HTTP/1.1" 200 94 "-" "Synapse/1.44.0rc2 (b=matrix-org-hotfixes,ebbd37b66)"
```
But no text message is ever received.
Not sure whether this is due to #410 or if there's an issue with the OpenMarket API causing it to silently fail sending the text message.
A good first step would be fixing #410 so we can tell if the OM API responds with an error.
|
defect
|
sending text messages sms is silently failing on vector im all look good in the logs oct corus sydent vectoris sydent validators msisdnvalidator info attempting to text code to country with originator type alpha text element oct corus sydent vectoris twisted info ffff post matrix identity api validate msisdn requesttoken http synapse b matrix org hotfixes but no text message is ever received not sure whether this is due to or if there s an issue with the openmarket api causing it to silently fail sending the text message a good first step would be fixing so we can tell if the om api responds with an error
| 1
|
29,287
| 5,635,881,030
|
IssuesEvent
|
2017-04-06 02:46:53
|
Right2Drive/ease-web
|
https://api.github.com/repos/Right2Drive/ease-web
|
closed
|
Future Security Issue
|
Defect
|
Currently serving entire file system for ease-web to clients for the sake of webpack dev tools. Should resolve this later to only serve bundle and index (as before) when in prod mode, and root when in dev mode
|
1.0
|
Future Security Issue - Currently serving entire file system for ease-web to clients for the sake of webpack dev tools. Should resolve this later to only serve bundle and index (as before) when in prod mode, and root when in dev mode
|
defect
|
future security issue currently serving entire file system for ease web to clients for the sake of webpack dev tools should resolve this later to only serve bundle and index as before when in prod mode and root when in dev mode
| 1
|
24,023
| 3,899,977,452
|
IssuesEvent
|
2016-04-18 01:42:09
|
opencaching/opencaching-pl
|
https://api.github.com/repos/opencaching/opencaching-pl
|
closed
|
GK not avaliable when log a cache
|
Priority_High Type_Defect
|
When logging a cache as found and pressing the button "Log GeoKrety now" we see the message that there can not connected to geokrety.org.
When running the crontab command "wget -O - -q http://www.opencaching.nl/util.sec/geokrety/geokrety.new.php" it works fine. The website geokrety.org is also online.
Could it be a bug in the code that cannot connect to GK?

|
1.0
|
GK not avaliable when log a cache - When logging a cache as found and pressing the button "Log GeoKrety now" we see the message that there can not connected to geokrety.org.
When running the crontab command "wget -O - -q http://www.opencaching.nl/util.sec/geokrety/geokrety.new.php" it works fine. The website geokrety.org is also online.
Could it be a bug in the code that cannot connect to GK?

|
defect
|
gk not avaliable when log a cache when logging a cache as found and pressing the button log geokrety now we see the message that there can not connected to geokrety org when running the crontab command wget o q it works fine the website geokrety org is also online could it be a bug in the code that cannot connect to gk
| 1
|
39,468
| 9,477,762,244
|
IssuesEvent
|
2019-04-19 19:51:16
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
opened
|
Problem with loading block restricted elemental variables from exodus file
|
T: defect
|
## Bug Description
Problem with loading block restricted elemental variables from exodus file.
A mesh with two blocks is created in file `generate.i`, block ids are 1 and 800. The constant monomial variable is created on block 800 and set to say 4. It is written to an exodus file.
Another input file `load.i` loads the exodus file and uses `initial_from_file_var = var_name` to load the variable; the variable is also defined on block 800 only. The result is that the variable is not set at all and 0 in block 800.
If you turn around and define the variable on block 1 in both inputs `generate.i` and `load.i`,
then the variable is set properly.
## Steps to Reproduce
Will be provided in associated PR.
## Impact
Yes.
|
1.0
|
Problem with loading block restricted elemental variables from exodus file - ## Bug Description
Problem with loading block restricted elemental variables from exodus file.
A mesh with two blocks is created in file `generate.i`, block ids are 1 and 800. The constant monomial variable is created on block 800 and set to say 4. It is written to an exodus file.
Another input file `load.i` loads the exodus file and uses `initial_from_file_var = var_name` to load the variable; the variable is also defined on block 800 only. The result is that the variable is not set at all and 0 in block 800.
If you turn around and define the variable on block 1 in both inputs `generate.i` and `load.i`,
then the variable is set properly.
## Steps to Reproduce
Will be provided in associated PR.
## Impact
Yes.
|
defect
|
problem with loading block restricted elemental variables from exodus file bug description problem with loading block restricted elemental variables from exodus file a mesh with two blocks is created in file generate i block ids are and the constant monomial variable is created on block and set to say it is written to an exodus file another input file load i loads the exodus file and uses initial from file var var name to load the variable the variable is also defined on block only the result is that the variable is not set at all and in block if you turn around and define the variable on block in both inputs generate i and load i then the variable is set properly steps to reproduce will be provided in associated pr impact yes
| 1
|
15,125
| 2,849,462,840
|
IssuesEvent
|
2015-05-30 18:20:51
|
khval/mplayer-amigaos
|
https://api.github.com/repos/khval/mplayer-amigaos
|
closed
|
Window can be resized beyond minimum size, system freezes
|
auto-migrated Priority-Low Type-Defect
|
```
MPlayer Summer Reloaded version, 22 Aug 09
AmigaOS 4.1 update 2, AmigaOne XE 1 GHz, 1GB RAM
While a video is playing, resize its window to the smallest size possible. The
window will be resized beyond the minimum size allowed by a GUI, (i.e. the
gadgets must be still shown). A system freeze occurs.
```
Original issue reported on code.google.com by `varthal...@gmail.com` on 28 Nov 2010 at 9:52
|
1.0
|
Window can be resized beyond minimum size, system freezes - ```
MPlayer Summer Reloaded version, 22 Aug 09
AmigaOS 4.1 update 2, AmigaOne XE 1 GHz, 1GB RAM
While a video is playing, resize its window to the smallest size possible. The
window will be resized beyond the minimum size allowed by a GUI, (i.e. the
gadgets must be still shown). A system freeze occurs.
```
Original issue reported on code.google.com by `varthal...@gmail.com` on 28 Nov 2010 at 9:52
|
defect
|
window can be resized beyond minimum size system freezes mplayer summer reloaded version aug amigaos update amigaone xe ghz ram while a video is playing resize its window to the smallest size possible the window will be resized beyond the minimum size allowed by a gui i e the gadgets must be still shown a system freeze occurs original issue reported on code google com by varthal gmail com on nov at
| 1
|
8,786
| 27,172,247,914
|
IssuesEvent
|
2023-02-17 20:35:40
|
OneDrive/onedrive-api-docs
|
https://api.github.com/repos/OneDrive/onedrive-api-docs
|
closed
|
OneDrive personal: files filter name with startsWith returns odata.nextLink for single result
|
area:OneDrive Personal Needs: Attention :wave: automation:Closed
|
I have replaced actual drive id and folder id with placeholders. You should be able to see the actual
values if you look at the requests (I have put in request ids wherever applicable)
1. GET https://graph.microsoft.com/beta/drives/<drive-id>/items/<folder-id>/children?$filter=startswith(name,'402048677745')
request-id: d8cd752a-9cb6-4fad-a00d-3532ded47f49
2. Actual Result:
{
"@odata.context": "https://graph.microsoft.com/beta/$metadata#drives('<drive-id>')/items('<folder-id>')/children",
"@odata.count": 245,
"@odata.nextLink": "https://graph.microsoft.com/beta/drives/<drive-id>/items/<folder-id>/children?$filter=startswith(name%2c%27402048677745%27)&$skiptoken=MjAx",
"value": []
}
3. When i follow @odata.nextLink I get the expected result
`GET https://graph.microsoft.com/beta/drives/<drive-id>/items/<folder-id>/children?$filter=startswith(name,'402048677745')&$skiptoken=MjAx`
request-id: 46571704-8922-4fcc-88c6-ef6709f6792f
{
"@odata.context": "https://graph.microsoft.com/beta/$metadata#drives('<drive-id>')/items('<folder-id>')/children",
"@odata.count": 245,
"value": [
{
"@microsoft.graph.downloadUrl": "<url>",
"createdDateTime": "2020-01-29T01:03:46.967Z",
<some properties>
"id": "<id>",
"lastModifiedDateTime": "2020-01-29T01:26:02.827Z",
"name": "402048677745_orig.jpg",
"webUrl": "<1drv.ms url>",
"size": 252817,
<bunch of other properties>
}
]
}
**Expected result:**
@odata.count is wrong. Should be 1
Since filter actually returns a single item, the result should have value and no @odata.nextLink
This may be a recent regression, was working last week (I did have less than 200 items in the folder then). I use startsWith filter since equal doesn't seem to work.
|
1.0
|
OneDrive personal: files filter name with startsWith returns odata.nextLink for single result - I have replaced actual drive id and folder id with placeholders. You should be able to see the actual
values if you look at the requests (I have put in request ids wherever applicable)
1. GET https://graph.microsoft.com/beta/drives/<drive-id>/items/<folder-id>/children?$filter=startswith(name,'402048677745')
request-id: d8cd752a-9cb6-4fad-a00d-3532ded47f49
2. Actual Result:
{
"@odata.context": "https://graph.microsoft.com/beta/$metadata#drives('<drive-id>')/items('<folder-id>')/children",
"@odata.count": 245,
"@odata.nextLink": "https://graph.microsoft.com/beta/drives/<drive-id>/items/<folder-id>/children?$filter=startswith(name%2c%27402048677745%27)&$skiptoken=MjAx",
"value": []
}
3. When i follow @odata.nextLink I get the expected result
`GET https://graph.microsoft.com/beta/drives/<drive-id>/items/<folder-id>/children?$filter=startswith(name,'402048677745')&$skiptoken=MjAx`
request-id: 46571704-8922-4fcc-88c6-ef6709f6792f
{
"@odata.context": "https://graph.microsoft.com/beta/$metadata#drives('<drive-id>')/items('<folder-id>')/children",
"@odata.count": 245,
"value": [
{
"@microsoft.graph.downloadUrl": "<url>",
"createdDateTime": "2020-01-29T01:03:46.967Z",
<some properties>
"id": "<id>",
"lastModifiedDateTime": "2020-01-29T01:26:02.827Z",
"name": "402048677745_orig.jpg",
"webUrl": "<1drv.ms url>",
"size": 252817,
<bunch of other properties>
}
]
}
**Expected result:**
@odata.count is wrong. Should be 1
Since filter actually returns a single item, the result should have value and no @odata.nextLink
This may be a recent regression, was working last week (I did have less than 200 items in the folder then). I use startsWith filter since equal doesn't seem to work.
|
non_defect
|
onedrive personal files filter name with startswith returns odata nextlink for single result i have replaced actual drive id and folder id with placeholders you should be able to see the actual values if you look at the requests i have put in request ids wherever applicable get request id actual result odata context odata count odata nextlink value when i follow odata nextlink i get the expected result get request id odata context odata count value microsoft graph downloadurl createddatetime id lastmodifieddatetime name orig jpg weburl size expected result odata count is wrong should be since filter actually returns a single item the result should have value and no odata nextlink this may be a recent regression was working last week i did have less than items in the folder then i use startswith filter since equal doesn t seem to work
| 0
|
78,805
| 27,766,743,404
|
IssuesEvent
|
2023-03-16 11:59:18
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Start automatically after system login not working.
|
T-Defect
|
### Steps to reproduce
It no longer working as automatically system login. I just installed the new version in windows 11 and this option is not working.
### Outcome
#### What did you expect?
I expect it will work automatically after system login
#### What happened instead?
It does not start automatically after system login in new version
### Operating system
Windows 11
### Application version
Element version: 1.11.25 Olm version: 3.2.12
### How did you install the app?
https://element.io/download
### Homeserver
_No response_
### Will you send logs?
Yes
|
1.0
|
Start automatically after system login not working. - ### Steps to reproduce
It no longer working as automatically system login. I just installed the new version in windows 11 and this option is not working.
### Outcome
#### What did you expect?
I expect it will work automatically after system login
#### What happened instead?
It does not start automatically after system login in new version
### Operating system
Windows 11
### Application version
Element version: 1.11.25 Olm version: 3.2.12
### How did you install the app?
https://element.io/download
### Homeserver
_No response_
### Will you send logs?
Yes
|
defect
|
start automatically after system login not working steps to reproduce it no longer working as automatically system login i just installed the new version in windows and this option is not working outcome what did you expect i expect it will work automatically after system login what happened instead it does not start automatically after system login in new version operating system windows application version element version olm version how did you install the app homeserver no response will you send logs yes
| 1
|
67,889
| 21,263,568,442
|
IssuesEvent
|
2022-04-13 07:45:36
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Room leave progress indicator shouldn't be modal
|
T-Defect
|
### Steps to reproduce
1. Leave a room.
### Outcome
#### What did you expect?
The room to disappear from my list, and to be able to continue using the app as normal.
#### What happened instead?
A modal spinner appeared, blocking any further interaction until the leave request was complete(?).
### Operating system
Linux
### Browser information
Firefox 98.0.2
### URL for webapp
app.element.io
### Application version
Element version: 1.10.9 Olm version: 3.2.8
### Homeserver
matrix.org
### Will you send logs?
No
|
1.0
|
Room leave progress indicator shouldn't be modal - ### Steps to reproduce
1. Leave a room.
### Outcome
#### What did you expect?
The room to disappear from my list, and to be able to continue using the app as normal.
#### What happened instead?
A modal spinner appeared, blocking any further interaction until the leave request was complete(?).
### Operating system
Linux
### Browser information
Firefox 98.0.2
### URL for webapp
app.element.io
### Application version
Element version: 1.10.9 Olm version: 3.2.8
### Homeserver
matrix.org
### Will you send logs?
No
|
defect
|
room leave progress indicator shouldn t be modal steps to reproduce leave a room outcome what did you expect the room to disappear from my list and to be able to continue using the app as normal what happened instead a modal spinner appeared blocking any further interaction until the leave request was complete operating system linux browser information firefox url for webapp app element io application version element version olm version homeserver matrix org will you send logs no
| 1
|
79,394
| 28,146,617,477
|
IssuesEvent
|
2023-04-02 15:01:21
|
MarcusWolschon/osmeditor4android
|
https://api.github.com/repos/MarcusWolschon/osmeditor4android
|
closed
|
Relation member not highlighted
|
Defect Minor
|
If a multipolygon building is included in the associatedStreet relation, then when this relation is selected, the building is not highlighted in green.
## Vespucci Version
18.1.4.0.
## Download source
google play store
## Device (Manufacturer and Model)
Redmi Note 10 Pro
## Android Version
12
## How to recreate
Select the associatedStreet relation that has a multipolygon building
|
1.0
|
Relation member not highlighted - If a multipolygon building is included in the associatedStreet relation, then when this relation is selected, the building is not highlighted in green.
## Vespucci Version
18.1.4.0.
## Download source
google play store
## Device (Manufacturer and Model)
Redmi Note 10 Pro
## Android Version
12
## How to recreate
Select the associatedStreet relation that has a multipolygon building
|
defect
|
relation member not highlighted if a multipolygon building is included in the associatedstreet relation then when this relation is selected the building is not highlighted in green vespucci version download source google play store device manufacturer and model redmi note pro android version how to recreate select the associatedstreet relation that has a multipolygon building
| 1
|
37,540
| 8,425,211,067
|
IssuesEvent
|
2018-10-16 01:17:02
|
supertuxkart/stk-code
|
https://api.github.com/repos/supertuxkart/stk-code
|
reopened
|
Animations get reset at the wrong time
|
C:Graphics R:notourbug T: defect
|
I suspect that animations get reset at the wrong time.
You can see it for example in the zipper animation or in the water and lava animations in the overworld.
Video: https://flakebi.de/flakebi/AnimationReset.mp4
|
1.0
|
Animations get reset at the wrong time - I suspect that animations get reset at the wrong time.
You can see it for example in the zipper animation or in the water and lava animations in the overworld.
Video: https://flakebi.de/flakebi/AnimationReset.mp4
|
defect
|
animations get reset at the wrong time i suspect that animations get reset at the wrong time you can see it for example in the zipper animation or in the water and lava animations in the overworld video
| 1
|
104,343
| 4,209,891,778
|
IssuesEvent
|
2016-06-29 07:52:03
|
xcat2/xcat-core
|
https://api.github.com/repos/xcat2/xcat-core
|
closed
|
[FVT]Failed to run makedns in rh7.2 ppc64
|
component:dns priority:normal status:pending type:bug xCAT 2.12.1 Sprint 2
|
Below is the error message after run makedns:
```
Handling c910f02c08p21 iError: Failure encountered updating 10.IN-ADDR.ARPA., error was NOTAUTH. See more details in system log.
n /etc/hosts.
```
Rerun passed.
Test ENV:
```
[root@c910f02c01 result]# cat /etc/*release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.2 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.2"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.2 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.2:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.2
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.2"
Red Hat Enterprise Linux Server release 7.2 (Maipo)
Red Hat Enterprise Linux Server release 7.2 (Maipo)
[root@c910f02c01p result]# rpm -qa | grep -i xCAT
xCAT-genesis-scripts-ppc64-2.13-snap201605230646.noarch
perl-xCAT-2.13-snap201605230645.noarch
xCAT-server-2.13-snap201605230645.noarch
xCAT-genesis-base-ppc64-2.12-snap201605051443.noarch
conserver-xcat-8.1.16-10.ppc64
xCAT-client-2.13-snap201605230645.noarch
xCAT-2.13-snap201605230646.ppc64
ipmitool-xcat-1.8.15-1.ppc64
grub2-xcat-2.02-0.16.el7.snap201506090204.noarch
xCAT-buildkit-2.13-snap201605230646.noarch
xCAT-test-2.13-snap201605230646.noarch
```
|
1.0
|
[FVT]Failed to run makedns in rh7.2 ppc64 - Below is the error message after run makedns:
```
Handling c910f02c08p21 iError: Failure encountered updating 10.IN-ADDR.ARPA., error was NOTAUTH. See more details in system log.
n /etc/hosts.
```
Rerun passed.
Test ENV:
```
[root@c910f02c01 result]# cat /etc/*release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.2 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.2"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.2 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.2:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.2
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.2"
Red Hat Enterprise Linux Server release 7.2 (Maipo)
Red Hat Enterprise Linux Server release 7.2 (Maipo)
[root@c910f02c01p result]# rpm -qa | grep -i xCAT
xCAT-genesis-scripts-ppc64-2.13-snap201605230646.noarch
perl-xCAT-2.13-snap201605230645.noarch
xCAT-server-2.13-snap201605230645.noarch
xCAT-genesis-base-ppc64-2.12-snap201605051443.noarch
conserver-xcat-8.1.16-10.ppc64
xCAT-client-2.13-snap201605230645.noarch
xCAT-2.13-snap201605230646.ppc64
ipmitool-xcat-1.8.15-1.ppc64
grub2-xcat-2.02-0.16.el7.snap201506090204.noarch
xCAT-buildkit-2.13-snap201605230646.noarch
xCAT-test-2.13-snap201605230646.noarch
```
|
non_defect
|
failed to run makedns in below is the error message after run makedns handling ierror failure encountered updating in addr arpa error was notauth see more details in system log n etc hosts rerun passed test env cat etc release name red hat enterprise linux server version maipo id rhel id like fedora version id pretty name red hat enterprise linux server maipo ansi color cpe name cpe o redhat enterprise linux ga server home url bug report url redhat bugzilla product red hat enterprise linux redhat bugzilla product version redhat support product red hat enterprise linux redhat support product version red hat enterprise linux server release maipo red hat enterprise linux server release maipo rpm qa grep i xcat xcat genesis scripts noarch perl xcat noarch xcat server noarch xcat genesis base noarch conserver xcat xcat client noarch xcat ipmitool xcat xcat noarch xcat buildkit noarch xcat test noarch
| 0
|
640,876
| 20,811,101,334
|
IssuesEvent
|
2022-03-18 02:50:03
|
khalidsaadat/soen341
|
https://api.github.com/repos/khalidsaadat/soen341
|
opened
|
[USER STORY] As a normal user I can generate a new share link so that my family and friends can access the link to purchase the items.
|
High Priority user story 5 pts High Risk
|
### Acceptance Test:
1. The user has the ability to get a link that is shareable among their relatives.
2. Once accessed, the user's family and friends can see the items that are shown on the registries.
3. Once seen, the user's family and friends have the ability to purchase those items.
|
1.0
|
[USER STORY] As a normal user I can generate a new share link so that my family and friends can access the link to purchase the items. - ### Acceptance Test:
1. The user has the ability to get a link that is shareable among their relatives.
2. Once accessed, the user's family and friends can see the items that are shown on the registries.
3. Once seen, the user's family and friends have the ability to purchase those items.
|
non_defect
|
as a normal user i can generate a new share link so that my family and friends can access the link to purchase the items acceptance test the user has the ability to get a link that is shareable among their relatives once accessed the user s family and friends can see the items that are shown on the registries once seen the user s family and friends have the ability to purchase those items
| 0
|
379,462
| 11,222,135,547
|
IssuesEvent
|
2020-01-07 19:28:28
|
pulumi/pulumi-kubernetes
|
https://api.github.com/repos/pulumi/pulumi-kubernetes
|
closed
|
helm: apply namespace default transformation
|
area/helm kind/feature priority/P1
|
When I specify `namespace` on a chart via pulumi it seems this only populates the `Release.Namespace` template variable, whereas when running `helm install --namespace <namespace>` tiller seems to actively add an `metadata.namespace` entry before applying the templates to the cluster. Many charts unfortunately don’t have the `Release.Namespace` variable but instead rely on the aforementioned tiller transformation.
Pulumi should apply the same transformation as default, if the `namespace` attribute has been set on the component. This providers better UX, is closer to `helm install` and avoids a lot of surprises for new pulumi users.
The default may be as simple as
```ts
manifest => {
manifest.metadata.namespace = this.args.namespace;
}
```
Corresponding slack conversation: https://pulumi-community.slack.com/archives/C84L4E3N1/p1537573095000100
|
1.0
|
helm: apply namespace default transformation - When I specify `namespace` on a chart via pulumi it seems this only populates the `Release.Namespace` template variable, whereas when running `helm install --namespace <namespace>` tiller seems to actively add an `metadata.namespace` entry before applying the templates to the cluster. Many charts unfortunately don’t have the `Release.Namespace` variable but instead rely on the aforementioned tiller transformation.
Pulumi should apply the same transformation as default, if the `namespace` attribute has been set on the component. This providers better UX, is closer to `helm install` and avoids a lot of surprises for new pulumi users.
The default may be as simple as
```ts
manifest => {
manifest.metadata.namespace = this.args.namespace;
}
```
Corresponding slack conversation: https://pulumi-community.slack.com/archives/C84L4E3N1/p1537573095000100
|
non_defect
|
helm apply namespace default transformation when i specify namespace on a chart via pulumi it seems this only populates the release namespace template variable whereas when running helm install namespace tiller seems to actively add an metadata namespace entry before applying the templates to the cluster many charts unfortunately don’t have the release namespace variable but instead rely on the aforementioned tiller transformation pulumi should apply the same transformation as default if the namespace attribute has been set on the component this providers better ux is closer to helm install and avoids a lot of surprises for new pulumi users the default may be as simple as ts manifest manifest metadata namespace this args namespace corresponding slack conversation
| 0
|
147,722
| 13,213,205,920
|
IssuesEvent
|
2020-08-16 11:34:17
|
ant-design/ant-design
|
https://api.github.com/repos/ant-design/ant-design
|
closed
|
Document Update: formatting number for currencies and other thing in the NumberInput Component
|
Inactive help wanted 📝 Documentation
|
- [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
Hey, I have created this example to figure out how to format the ant.design InputNumber component for currencies using the Intl.NumberFormat API.
You can check out my implementation [here](https://codesandbox.io/s/currency-wrapper-antd-input-3ynzo)
### What does the proposed API look like?
I am just asking for adding that to the docs as it would be helpful for other people. Localization on the number formatting is a pretty common usecase.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
|
1.0
|
Document Update: formatting number for currencies and other thing in the NumberInput Component - - [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
Hey, I have created this example to figure out how to format the ant.design InputNumber component for currencies using the Intl.NumberFormat API.
You can check out my implementation [here](https://codesandbox.io/s/currency-wrapper-antd-input-3ynzo)
### What does the proposed API look like?
I am just asking for adding that to the docs as it would be helpful for other people. Localization on the number formatting is a pretty common usecase.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
|
non_defect
|
document update formatting number for currencies and other thing in the numberinput component i have searched the of this repository and believe that this is not a duplicate what problem does this feature solve hey i have created this example to figure out how to format the ant design inputnumber component for currencies using the intl numberformat api you can check out my implementation what does the proposed api look like i am just asking for adding that to the docs as it would be helpful for other people localization on the number formatting is a pretty common usecase
| 0
|
55,822
| 13,686,548,352
|
IssuesEvent
|
2020-09-30 08:51:34
|
googleapis/java-spanner
|
https://api.github.com/repos/googleapis/java-spanner
|
closed
|
The build failed
|
buildcop: issue priority: p1 type: bug
|
This test failed!
To configure my behavior, see [the Build Cop Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/buildcop).
If I'm commenting on this issue too often, add the `buildcop: quiet` label and
I will stop commenting.
---
commit: c13995faa228e98c9c7bf657a1252b853d24a896
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/aa7c8cd8-1c3d-40e3-a468-90d46a1ed2f2), [Sponge](http://sponge2/aa7c8cd8-1c3d-40e3-a468-90d46a1ed2f2)
status: failed
<details><summary>Test output</summary><br><pre>com.google.cloud.spanner.SpannerException: RESOURCE_EXHAUSTED: com.google.cloud.spanner.SpannerException: RESOURCE_EXHAUSTED: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: The instance projects/gcloud-devel/instances/spanner-testing has too many database splits to complete the operation. Please add more nodes and try again.
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerExceptionPreformatted(SpannerExceptionFactory.java:210)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:56)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:150)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:93)
at com.google.cloud.spanner.testing.RemoteSpannerHelper.createTestDatabase(RemoteSpannerHelper.java:120)
at com.google.cloud.spanner.testing.RemoteSpannerHelper.createTestDatabase(RemoteSpannerHelper.java:93)
at com.google.cloud.spanner.it.ITBatchDmlTest.createDatabase(ITBatchDmlTest.java:63)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
Caused by: java.util.concurrent.ExecutionException: com.google.cloud.spanner.SpannerException: RESOURCE_EXHAUSTED: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: The instance projects/gcloud-devel/instances/spanner-testing has too many database splits to complete the operation. Please add more nodes and try again.
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:564)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:545)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:86)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:62)
at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:127)
at com.google.cloud.spanner.testing.RemoteSpannerHelper.createTestDatabase(RemoteSpannerHelper.java:115)
... 34 more
Caused by: com.google.cloud.spanner.SpannerException: RESOURCE_EXHAUSTED: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: The instance projects/gcloud-devel/instances/spanner-testing has too many database splits to complete the operation. Please add more nodes and try again.
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerExceptionPreformatted(SpannerExceptionFactory.java:210)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:56)
at com.google.cloud.spanner.SpannerExceptionFactory.fromApiException(SpannerExceptionFactory.java:225)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:143)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:93)
at com.google.cloud.spanner.DatabaseAdminClientImpl$9.apply(DatabaseAdminClientImpl.java:290)
at com.google.cloud.spanner.DatabaseAdminClientImpl$9.apply(DatabaseAdminClientImpl.java:287)
at com.google.api.core.ApiFutures$GaxFunctionToGuavaFunction.apply(ApiFutures.java:240)
at com.google.common.util.concurrent.AbstractCatchingFuture$CatchingFuture.doFallback(AbstractCatchingFuture.java:224)
at com.google.common.util.concurrent.AbstractCatchingFuture$CatchingFuture.doFallback(AbstractCatchingFuture.java:212)
at com.google.common.util.concurrent.AbstractCatchingFuture.run(AbstractCatchingFuture.java:124)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1176)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:969)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:760)
at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:100)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1176)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:969)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:760)
at com.google.api.gax.retrying.BasicRetryingFuture.handleAttempt(BasicRetryingFuture.java:197)
at com.google.api.gax.retrying.CallbackChainRetryingFuture$AttemptCompletionListener.handle(CallbackChainRetryingFuture.java:135)
at com.google.api.gax.retrying.CallbackChainRetryingFuture$AttemptCompletionListener.run(CallbackChainRetryingFuture.java:117)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1176)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:969)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:760)
at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:100)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1176)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:969)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:760)
at com.google.api.core.AbstractApiFuture$InternalSettableFuture.setException(AbstractApiFuture.java:95)
at com.google.api.core.AbstractApiFuture.setException(AbstractApiFuture.java:77)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1050)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1176)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:969)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:760)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.cloud.spanner.spi.v1.SpannerErrorInterceptor$1$1.onClose(SpannerErrorInterceptor.java:100)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:413)
at io.grpc.internal.ClientCallImpl.access$500(ClientCallImpl.java:66)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:742)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:721)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: The instance projects/gcloud-devel/instances/spanner-testing has too many database splits to complete the operation. Please add more nodes and try again.
at io.grpc.Status.asRuntimeException(Status.java:533)
... 18 more
</pre></details>
|
1.0
|
The build failed - This test failed!
To configure my behavior, see [the Build Cop Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/buildcop).
If I'm commenting on this issue too often, add the `buildcop: quiet` label and
I will stop commenting.
---
commit: c13995faa228e98c9c7bf657a1252b853d24a896
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/aa7c8cd8-1c3d-40e3-a468-90d46a1ed2f2), [Sponge](http://sponge2/aa7c8cd8-1c3d-40e3-a468-90d46a1ed2f2)
status: failed
<details><summary>Test output</summary><br><pre>com.google.cloud.spanner.SpannerException: RESOURCE_EXHAUSTED: com.google.cloud.spanner.SpannerException: RESOURCE_EXHAUSTED: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: The instance projects/gcloud-devel/instances/spanner-testing has too many database splits to complete the operation. Please add more nodes and try again.
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerExceptionPreformatted(SpannerExceptionFactory.java:210)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:56)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:150)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:93)
at com.google.cloud.spanner.testing.RemoteSpannerHelper.createTestDatabase(RemoteSpannerHelper.java:120)
at com.google.cloud.spanner.testing.RemoteSpannerHelper.createTestDatabase(RemoteSpannerHelper.java:93)
at com.google.cloud.spanner.it.ITBatchDmlTest.createDatabase(ITBatchDmlTest.java:63)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
Caused by: java.util.concurrent.ExecutionException: com.google.cloud.spanner.SpannerException: RESOURCE_EXHAUSTED: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: The instance projects/gcloud-devel/instances/spanner-testing has too many database splits to complete the operation. Please add more nodes and try again.
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:564)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:545)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:86)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:62)
at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:127)
at com.google.cloud.spanner.testing.RemoteSpannerHelper.createTestDatabase(RemoteSpannerHelper.java:115)
... 34 more
Caused by: com.google.cloud.spanner.SpannerException: RESOURCE_EXHAUSTED: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: The instance projects/gcloud-devel/instances/spanner-testing has too many database splits to complete the operation. Please add more nodes and try again.
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerExceptionPreformatted(SpannerExceptionFactory.java:210)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:56)
at com.google.cloud.spanner.SpannerExceptionFactory.fromApiException(SpannerExceptionFactory.java:225)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:143)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:93)
at com.google.cloud.spanner.DatabaseAdminClientImpl$9.apply(DatabaseAdminClientImpl.java:290)
at com.google.cloud.spanner.DatabaseAdminClientImpl$9.apply(DatabaseAdminClientImpl.java:287)
at com.google.api.core.ApiFutures$GaxFunctionToGuavaFunction.apply(ApiFutures.java:240)
at com.google.common.util.concurrent.AbstractCatchingFuture$CatchingFuture.doFallback(AbstractCatchingFuture.java:224)
at com.google.common.util.concurrent.AbstractCatchingFuture$CatchingFuture.doFallback(AbstractCatchingFuture.java:212)
at com.google.common.util.concurrent.AbstractCatchingFuture.run(AbstractCatchingFuture.java:124)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1176)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:969)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:760)
at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:100)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1176)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:969)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:760)
at com.google.api.gax.retrying.BasicRetryingFuture.handleAttempt(BasicRetryingFuture.java:197)
at com.google.api.gax.retrying.CallbackChainRetryingFuture$AttemptCompletionListener.handle(CallbackChainRetryingFuture.java:135)
at com.google.api.gax.retrying.CallbackChainRetryingFuture$AttemptCompletionListener.run(CallbackChainRetryingFuture.java:117)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1176)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:969)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:760)
at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:100)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1176)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:969)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:760)
at com.google.api.core.AbstractApiFuture$InternalSettableFuture.setException(AbstractApiFuture.java:95)
at com.google.api.core.AbstractApiFuture.setException(AbstractApiFuture.java:77)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1050)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1176)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:969)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:760)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.cloud.spanner.spi.v1.SpannerErrorInterceptor$1$1.onClose(SpannerErrorInterceptor.java:100)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:413)
at io.grpc.internal.ClientCallImpl.access$500(ClientCallImpl.java:66)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:742)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:721)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: The instance projects/gcloud-devel/instances/spanner-testing has too many database splits to complete the operation. Please add more nodes and try again.
at io.grpc.Status.asRuntimeException(Status.java:533)
... 18 more
</pre></details>
|
non_defect
|
the build failed this test failed to configure my behavior see if i m commenting on this issue too often add the buildcop quiet label and i will stop commenting commit buildurl status failed test output com google cloud spanner spannerexception resource exhausted com google cloud spanner spannerexception resource exhausted io grpc statusruntimeexception resource exhausted the instance projects gcloud devel instances spanner testing has too many database splits to complete the operation please add more nodes and try again at com google cloud spanner spannerexceptionfactory newspannerexceptionpreformatted spannerexceptionfactory java at com google cloud spanner spannerexceptionfactory newspannerexception spannerexceptionfactory java at com google cloud spanner spannerexceptionfactory newspannerexception spannerexceptionfactory java at com google cloud spanner spannerexceptionfactory newspannerexception spannerexceptionfactory java at com google cloud spanner testing remotespannerhelper createtestdatabase remotespannerhelper java at com google cloud spanner testing remotespannerhelper createtestdatabase remotespannerhelper java at com google cloud spanner it itbatchdmltest createdatabase itbatchdmltest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements runbefores invokemethod runbefores java at org junit internal runners statements runbefores evaluate runbefores java at org junit rules externalresource evaluate externalresource java at org junit rules runrules evaluate runrules java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore junitcore run junitcore java at org apache maven surefire junitcore junitcorewrapper createrequestandrun junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper executelazy junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcoreprovider invoke junitcoreprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java caused by java util concurrent executionexception com google cloud spanner spannerexception resource exhausted io grpc statusruntimeexception resource exhausted the instance projects gcloud devel instances spanner testing has too many database splits to complete the operation please add more nodes and try again at com google common util concurrent abstractfuture getdonevalue abstractfuture java at com google common util concurrent abstractfuture get abstractfuture java at com google common util concurrent fluentfuture trustedfuture get fluentfuture java at com google common util concurrent forwardingfuture get forwardingfuture java at com google api gax longrunning operationfutureimpl get operationfutureimpl java at com google cloud spanner testing remotespannerhelper createtestdatabase remotespannerhelper java more caused by com google cloud spanner spannerexception resource exhausted io grpc statusruntimeexception resource exhausted the instance projects gcloud devel instances spanner testing has too many database splits to complete the operation please add more nodes and try again at com google cloud spanner spannerexceptionfactory newspannerexceptionpreformatted spannerexceptionfactory java at com google cloud spanner spannerexceptionfactory newspannerexception spannerexceptionfactory java at com google cloud spanner spannerexceptionfactory fromapiexception spannerexceptionfactory java at com google cloud spanner spannerexceptionfactory newspannerexception spannerexceptionfactory java at com google cloud spanner spannerexceptionfactory newspannerexception spannerexceptionfactory java at com google cloud spanner databaseadminclientimpl apply databaseadminclientimpl java at com google cloud spanner databaseadminclientimpl apply databaseadminclientimpl java at com google api core apifutures gaxfunctiontoguavafunction apply apifutures java at com google common util concurrent abstractcatchingfuture catchingfuture dofallback abstractcatchingfuture java at com google common util concurrent abstractcatchingfuture catchingfuture dofallback abstractcatchingfuture java at com google common util concurrent abstractcatchingfuture run abstractcatchingfuture java at com google common util concurrent directexecutor execute directexecutor java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at com google common util concurrent abstracttransformfuture run abstracttransformfuture java at com google common util concurrent directexecutor execute directexecutor java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at com google api gax retrying basicretryingfuture handleattempt basicretryingfuture java at com google api gax retrying callbackchainretryingfuture attemptcompletionlistener handle callbackchainretryingfuture java at com google api gax retrying callbackchainretryingfuture attemptcompletionlistener run callbackchainretryingfuture java at com google common util concurrent directexecutor execute directexecutor java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at com google common util concurrent abstracttransformfuture run abstracttransformfuture java at com google common util concurrent directexecutor execute directexecutor java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at com google api core abstractapifuture internalsettablefuture setexception abstractapifuture java at com google api core abstractapifuture setexception abstractapifuture java at com google api gax grpc grpcexceptioncallable exceptiontransformingfuture onfailure grpcexceptioncallable java at com google api core apifutures onfailure apifutures java at com google common util concurrent futures callbacklistener run futures java at com google common util concurrent directexecutor execute directexecutor java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at io grpc stub clientcalls grpcfuture setexception clientcalls java at io grpc stub clientcalls unarystreamtofuture onclose clientcalls java at io grpc partialforwardingclientcalllistener onclose partialforwardingclientcalllistener java at io grpc forwardingclientcalllistener onclose forwardingclientcalllistener java at io grpc forwardingclientcalllistener simpleforwardingclientcalllistener onclose forwardingclientcalllistener java at com google cloud spanner spi spannererrorinterceptor onclose spannererrorinterceptor java at io grpc internal clientcallimpl closeobserver clientcallimpl java at io grpc internal clientcallimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runinternal clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runincontext clientcallimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask access scheduledthreadpoolexecutor java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask run scheduledthreadpoolexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by io grpc statusruntimeexception resource exhausted the instance projects gcloud devel instances spanner testing has too many database splits to complete the operation please add more nodes and try again at io grpc status asruntimeexception status java more
| 0
|
12,535
| 2,704,787,750
|
IssuesEvent
|
2015-04-07 04:52:19
|
google/google-api-go-client
|
https://api.github.com/repos/google/google-api-go-client
|
closed
|
Folder 'examples' improperly named
|
new priority-medium type-defect
|
**rhill@raymondhill.net** on 3 Jan 2013 at 2:44:
```
Apparently, the name of the folder 'examples' is supposed to be 'go-api-demo'.
Currently, an executable named 'examples' in the bin folder describes its usage
as:
Usage: go-api-demo <api-demo-name> [api name args]
```
|
1.0
|
Folder 'examples' improperly named -
**rhill@raymondhill.net** on 3 Jan 2013 at 2:44:
```
Apparently, the name of the folder 'examples' is supposed to be 'go-api-demo'.
Currently, an executable named 'examples' in the bin folder describes its usage
as:
Usage: go-api-demo <api-demo-name> [api name args]
```
|
defect
|
folder examples improperly named rhill raymondhill net on jan at apparently the name of the folder examples is supposed to be go api demo currently an executable named examples in the bin folder describes its usage as usage go api demo
| 1
|
100,993
| 12,610,695,625
|
IssuesEvent
|
2020-06-12 05:47:53
|
ajency/Finaegis-Backend
|
https://api.github.com/repos/ajency/Finaegis-Backend
|
opened
|
PMS - UI and text issues
|
Android Application Design Issue Must do
|
**Describe the Issue**
1. [ ] Loaders appear stretched
2. [ ] Up arrow is blurry
3. [ ] Handle text for values like 99,00,000
4. [ ] Hide the sections -> Strategies and Analysis tabs, Portfolio analysis, Blue message boxes
5. [ ] Use commas for the amount eg 10,00,00
6. [ ] Text should be PMS Managed Portfolios instead of PMS Managed (use correct font and color)
7. [ ] Back arrow alignment is not the same as the previous page (refer Mutual funds page)
**Screenshots**

Hide


|
1.0
|
PMS - UI and text issues - **Describe the Issue**
1. [ ] Loaders appear stretched
2. [ ] Up arrow is blurry
3. [ ] Handle text for values like 99,00,000
4. [ ] Hide the sections -> Strategies and Analysis tabs, Portfolio analysis, Blue message boxes
5. [ ] Use commas for the amount eg 10,00,00
6. [ ] Text should be PMS Managed Portfolios instead of PMS Managed (use correct font and color)
7. [ ] Back arrow alignment is not the same as the previous page (refer Mutual funds page)
**Screenshots**

Hide


|
non_defect
|
pms ui and text issues describe the issue loaders appear stretched up arrow is blurry handle text for values like hide the sections strategies and analysis tabs portfolio analysis blue message boxes use commas for the amount eg text should be pms managed portfolios instead of pms managed use correct font and color back arrow alignment is not the same as the previous page refer mutual funds page screenshots hide
| 0
|
979
| 2,522,510,365
|
IssuesEvent
|
2015-01-19 22:45:04
|
tntim96/JSCover
|
https://api.github.com/repos/tntim96/JSCover
|
closed
|
JSCover-all.jar merge limits
|
bug Fix applied - please re-test
|
Where do you see the limit of the --merge functionality for reports?
I am trying to merge about 1900 test cases and I always get stuck with a Java OOM exception:
* Caused by: java.lang.OutOfMemoryError: Java heap space
I tried to merge only 100 test cases at a time and afterwards merge those subresults which also did not work.
Furthermore, I tried to increase the heap space for Java to up to 1024MB.
|
1.0
|
JSCover-all.jar merge limits - Where do you see the limit of the --merge functionality for reports?
I am trying to merge about 1900 test cases and I always get stuck with a Java OOM exception:
* Caused by: java.lang.OutOfMemoryError: Java heap space
I tried to merge only 100 test cases at a time and afterwards merge those subresults which also did not work.
Furthermore, I tried to increase the heap space for Java to up to 1024MB.
|
non_defect
|
jscover all jar merge limits where do you see the limit of the merge functionality for reports i am trying to merge about test cases and i always get stuck with a java oom exception caused by java lang outofmemoryerror java heap space i tried to merge only test cases at a time and afterwards merge those subresults which also did not work furthermore i tried to increase the heap space for java to up to
| 0
|
79,604
| 15,229,191,035
|
IssuesEvent
|
2021-02-18 12:34:14
|
GIScience/ohsome-api
|
https://api.github.com/repos/GIScience/ohsome-api
|
closed
|
Re-grouping of package structure for controller and output classes
|
brainstorming code quality comments welcome
|
When starting with the implementation of https://github.com/GIScience/ohsome-api/issues/115 I ran into the following issue: We have a `ContributionsController` under the `contributions` package and then other controller classes like `CountController`, `AreaController`, etc. under the `dataaggregation` package. Now I want to add a data aggregation endpoint to the `contributions`.
Since we are still working with contributions and need the contribution-view of the OSHDB here too, I'd suggest to add another package level.
**Current package structure:**
- controller
- contributions
- ContributionsController.java
- dataaggregation
- AreaController.java
- CountController.java
- ...
- UsersController.java
- metadata
- MetadataController.java
- rawdata
- ElementsController.java
- ...
- other helper classes
**Proposed new package structure:**
- controller
- contributions
- ContributionsController.java
- elements
- dataaggregation
- AreaController.java
- CountController.java
- ...
- features
- ElementsController.java
- ...
- users
- UsersController.java
- metadata
- MetadataController.java
- other helper classes
The new package structure would be in-line with our endpoint structure and in my opinion a better grouping. As the `aggregation functions` that can be applied on the `contributions` are rather limited, I'd suggest adding them to the `ContributionsController` (like we've done it with the `UsersController` as well) and not make another `CountController` inside the `contributions` package. Feedback appreciated.
|
1.0
|
Re-grouping of package structure for controller and output classes - When starting with the implementation of https://github.com/GIScience/ohsome-api/issues/115 I ran into the following issue: We have a `ContributionsController` under the `contributions` package and then other controller classes like `CountController`, `AreaController`, etc. under the `dataaggregation` package. Now I want to add a data aggregation endpoint to the `contributions`.
Since we are still working with contributions and need the contribution-view of the OSHDB here too, I'd suggest to add another package level.
**Current package structure:**
- controller
- contributions
- ContributionsController.java
- dataaggregation
- AreaController.java
- CountController.java
- ...
- UsersController.java
- metadata
- MetadataController.java
- rawdata
- ElementsController.java
- ...
- other helper classes
**Proposed new package structure:**
- controller
- contributions
- ContributionsController.java
- elements
- dataaggregation
- AreaController.java
- CountController.java
- ...
- features
- ElementsController.java
- ...
- users
- UsersController.java
- metadata
- MetadataController.java
- other helper classes
The new package structure would be in-line with our endpoint structure and in my opinion a better grouping. As the `aggregation functions` that can be applied on the `contributions` are rather limited, I'd suggest adding them to the `ContributionsController` (like we've done it with the `UsersController` as well) and not make another `CountController` inside the `contributions` package. Feedback appreciated.
|
non_defect
|
re grouping of package structure for controller and output classes when starting with the implementation of i ran into the following issue we have a contributionscontroller under the contributions package and then other controller classes like countcontroller areacontroller etc under the dataaggregation package now i want to add a data aggregation endpoint to the contributions since we are still working with contributions and need the contribution view of the oshdb here too i d suggest to add another package level current package structure controller contributions contributionscontroller java dataaggregation areacontroller java countcontroller java userscontroller java metadata metadatacontroller java rawdata elementscontroller java other helper classes proposed new package structure controller contributions contributionscontroller java elements dataaggregation areacontroller java countcontroller java features elementscontroller java users userscontroller java metadata metadatacontroller java other helper classes the new package structure would be in line with our endpoint structure and in my opinion a better grouping as the aggregation functions that can be applied on the contributions are rather limited i d suggest adding them to the contributionscontroller like we ve done it with the userscontroller as well and not make another countcontroller inside the contributions package feedback appreciated
| 0
|
141,109
| 12,957,671,514
|
IssuesEvent
|
2020-07-20 10:05:07
|
vuejs/vue-cli
|
https://api.github.com/repos/vuejs/vue-cli
|
closed
|
Merge plugin-related navigation items
|
contribution welcome documentation
|
### What problem does this feature solve?
The plugin development documentation sections are linked from a separate menu item at the top navigation, see _Plugin Dev Guide_. This makes it a) harder to use the top navigation on medium viewports and b) slower to locate the most relevant information regarding plugins or plugin development.
### What does the proposed change look like?
It could be worth considering merging the _Plugin Dev Guide_ with the _Plugins_ top navigation menu to provide more clear navigation and make the top navigation more accessible on small viewports. WDYT?
| Before (1) | Before (2) | After |
| ------------- |-------------|-----|
|  |  |  |
<!-- generated by vue-issues. DO NOT REMOVE -->
|
1.0
|
Merge plugin-related navigation items - ### What problem does this feature solve?
The plugin development documentation sections are linked from a separate menu item at the top navigation, see _Plugin Dev Guide_. This makes it a) harder to use the top navigation on medium viewports and b) slower to locate the most relevant information regarding plugins or plugin development.
### What does the proposed change look like?
It could be worth considering merging the _Plugin Dev Guide_ with the _Plugins_ top navigation menu to provide more clear navigation and make the top navigation more accessible on small viewports. WDYT?
| Before (1) | Before (2) | After |
| ------------- |-------------|-----|
|  |  |  |
<!-- generated by vue-issues. DO NOT REMOVE -->
|
non_defect
|
merge plugin related navigation items what problem does this feature solve the plugin development documentation sections are linked from a separate menu item at the top navigation see plugin dev guide this makes it a harder to use the top navigation on medium viewports and b slower to locate the most relevant information regarding plugins or plugin development what does the proposed change look like it could be worth considering merging the plugin dev guide with the plugins top navigation menu to provide more clear navigation and make the top navigation more accessible on small viewports wdyt before before after
| 0
|
69,560
| 22,498,194,030
|
IssuesEvent
|
2022-06-23 09:25:04
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
closed
|
[🐛 Bug]: Error using user-data-dir= in Chrome 103.0.5060.53
|
R-awaiting answer I-defect
|
### What happened?
When using **chrome_options.add_argument('user-data-dir='** in Chrome 103 the error occurs:
Selenium.common.exceptions.WebDriverException: Message: unknown error: unexpected command response
(Session info: chrome=103.0.5060.53)
**if I remove this parameter, it works.**
### How can we reproduce the issue?
```shell
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('user-data-dir=C:\\Perfil')
driver_path = 'chromedriver.exe'
service = Service(driver_path)
driver = webdriver.Chrome(service=service,options=chrome_options)
driver.get("https://google.com")
```
### Relevant log output
```shell
elenium.common.exceptions.WebDriverException: Message: unknown error: unexpected command response
(Session info: chrome=103.0.5060.53)
Stacktrace:
Backtrace:
Ordinal0 [0x007E6463+2188387]
Ordinal0 [0x0077E461+1762401]
Ordinal0 [0x00693D78+802168]
Ordinal0 [0x00687210+750096]
Ordinal0 [0x0068675A+747354]
Ordinal0 [0x00685D3F+744767]
Ordinal0 [0x0068557C+742780]
Ordinal0 [0x00699BF3+826355]
Ordinal0 [0x006ECF6D+1167213]
Ordinal0 [0x006DC5F6+1099254]
Ordinal0 [0x006B6BE0+945120]
Ordinal0 [0x006B7AD6+948950]
GetHandleVerifier [0x00A871F2+2712546]
GetHandleVerifier [0x00A7886D+2652765]
GetHandleVerifier [0x0087002A+520730]
GetHandleVerifier [0x0086EE06+516086]
Ordinal0 [0x0078468B+1787531]
Ordinal0 [0x00788E88+1805960]
Ordinal0 [0x00788F75+1806197]
Ordinal0 [0x00791DF1+1842673]
BaseThreadInitThunk [0x75F7FA29+25]
RtlGetAppContainerNamedObjectPath [0x77987A9E+286]
RtlGetAppContainerNamedObjectPath [0x77987A6E+238]
```
### Operating System
Windows 10 and Manjaro
### Selenium version
Python (Selenium 4)
### What are the browser(s) and version(s) where you see this issue?
Chrome 103
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver 103.0.5060.53
### Are you using Selenium Grid?
_No response_
|
1.0
|
[🐛 Bug]: Error using user-data-dir= in Chrome 103.0.5060.53 - ### What happened?
When using **chrome_options.add_argument('user-data-dir='** in Chrome 103 the error occurs:
Selenium.common.exceptions.WebDriverException: Message: unknown error: unexpected command response
(Session info: chrome=103.0.5060.53)
**if I remove this parameter, it works.**
### How can we reproduce the issue?
```shell
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('user-data-dir=C:\\Perfil')
driver_path = 'chromedriver.exe'
service = Service(driver_path)
driver = webdriver.Chrome(service=service,options=chrome_options)
driver.get("https://google.com")
```
### Relevant log output
```shell
elenium.common.exceptions.WebDriverException: Message: unknown error: unexpected command response
(Session info: chrome=103.0.5060.53)
Stacktrace:
Backtrace:
Ordinal0 [0x007E6463+2188387]
Ordinal0 [0x0077E461+1762401]
Ordinal0 [0x00693D78+802168]
Ordinal0 [0x00687210+750096]
Ordinal0 [0x0068675A+747354]
Ordinal0 [0x00685D3F+744767]
Ordinal0 [0x0068557C+742780]
Ordinal0 [0x00699BF3+826355]
Ordinal0 [0x006ECF6D+1167213]
Ordinal0 [0x006DC5F6+1099254]
Ordinal0 [0x006B6BE0+945120]
Ordinal0 [0x006B7AD6+948950]
GetHandleVerifier [0x00A871F2+2712546]
GetHandleVerifier [0x00A7886D+2652765]
GetHandleVerifier [0x0087002A+520730]
GetHandleVerifier [0x0086EE06+516086]
Ordinal0 [0x0078468B+1787531]
Ordinal0 [0x00788E88+1805960]
Ordinal0 [0x00788F75+1806197]
Ordinal0 [0x00791DF1+1842673]
BaseThreadInitThunk [0x75F7FA29+25]
RtlGetAppContainerNamedObjectPath [0x77987A9E+286]
RtlGetAppContainerNamedObjectPath [0x77987A6E+238]
```
### Operating System
Windows 10 and Manjaro
### Selenium version
Python (Selenium 4)
### What are the browser(s) and version(s) where you see this issue?
Chrome 103
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver 103.0.5060.53
### Are you using Selenium Grid?
_No response_
|
defect
|
error using user data dir in chrome what happened when using chrome options add argument user data dir in chrome the error occurs selenium common exceptions webdriverexception message unknown error unexpected command response session info chrome if i remove this parameter it works how can we reproduce the issue shell from selenium import webdriver from selenium webdriver chrome service import service chrome options webdriver chromeoptions chrome options add argument user data dir c perfil driver path chromedriver exe service service driver path driver webdriver chrome service service options chrome options driver get relevant log output shell elenium common exceptions webdriverexception message unknown error unexpected command response session info chrome stacktrace backtrace gethandleverifier gethandleverifier gethandleverifier gethandleverifier basethreadinitthunk rtlgetappcontainernamedobjectpath rtlgetappcontainernamedobjectpath operating system windows and manjaro selenium version python selenium what are the browser s and version s where you see this issue chrome what are the browser driver s and version s where you see this issue chromedriver are you using selenium grid no response
| 1
|
288,033
| 31,856,926,185
|
IssuesEvent
|
2023-09-15 08:09:42
|
nidhi7598/linux-4.19.72_CVE-2022-3564
|
https://api.github.com/repos/nidhi7598/linux-4.19.72_CVE-2022-3564
|
closed
|
CVE-2023-3772 (Medium) detected in linuxlinux-4.19.294 - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2023-3772 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72_CVE-2022-3564/commit/454c7dacf6fa9a6de86d4067f5a08f25cffa519b">454c7dacf6fa9a6de86d4067f5a08f25cffa519b</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/xfrm/xfrm_user.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Linux kernel’s IP framework for transforming packets (XFRM subsystem). This issue may allow a malicious user with CAP_NET_ADMIN privileges to directly dereference a NULL pointer in xfrm_update_ae_params(), leading to a possible kernel crash and denial of service.
<p>Publish Date: 2023-07-25
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-3772>CVE-2023-3772</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-3772">https://www.linuxkernelcves.com/cves/CVE-2023-3772</a></p>
<p>Release Date: 2023-07-25</p>
<p>Fix Resolution: v6.1.47,v6.4.12,v6.5-rc7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2023-3772 (Medium) detected in linuxlinux-4.19.294 - autoclosed - ## CVE-2023-3772 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72_CVE-2022-3564/commit/454c7dacf6fa9a6de86d4067f5a08f25cffa519b">454c7dacf6fa9a6de86d4067f5a08f25cffa519b</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/xfrm/xfrm_user.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Linux kernel’s IP framework for transforming packets (XFRM subsystem). This issue may allow a malicious user with CAP_NET_ADMIN privileges to directly dereference a NULL pointer in xfrm_update_ae_params(), leading to a possible kernel crash and denial of service.
<p>Publish Date: 2023-07-25
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-3772>CVE-2023-3772</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-3772">https://www.linuxkernelcves.com/cves/CVE-2023-3772</a></p>
<p>Release Date: 2023-07-25</p>
<p>Fix Resolution: v6.1.47,v6.4.12,v6.5-rc7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files net xfrm xfrm user c vulnerability details a flaw was found in the linux kernel’s ip framework for transforming packets xfrm subsystem this issue may allow a malicious user with cap net admin privileges to directly dereference a null pointer in xfrm update ae params leading to a possible kernel crash and denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
183,420
| 14,941,183,397
|
IssuesEvent
|
2021-01-25 19:22:42
|
olivertwistor/currency-manager
|
https://api.github.com/repos/olivertwistor/currency-manager
|
closed
|
Fix labels
|
core documentation
|
Fix labels according to olivertwistor/olivertwistor-project-model#29, olivertwistor/olivertwistor-project-model#31 and olivertwistor/olivertwistor-project-model#35.
|
1.0
|
Fix labels - Fix labels according to olivertwistor/olivertwistor-project-model#29, olivertwistor/olivertwistor-project-model#31 and olivertwistor/olivertwistor-project-model#35.
|
non_defect
|
fix labels fix labels according to olivertwistor olivertwistor project model olivertwistor olivertwistor project model and olivertwistor olivertwistor project model
| 0
|
319,696
| 23,785,729,934
|
IssuesEvent
|
2022-09-02 09:53:27
|
snakypy/dotctrl
|
https://api.github.com/repos/snakypy/dotctrl
|
closed
|
Improve REAME.md
|
documentation
|
* Improve the description of Dotctrl.
* Add a gif showing how Dotctrl works.
|
1.0
|
Improve REAME.md - * Improve the description of Dotctrl.
* Add a gif showing how Dotctrl works.
|
non_defect
|
improve reame md improve the description of dotctrl add a gif showing how dotctrl works
| 0
|
305,232
| 9,366,414,236
|
IssuesEvent
|
2019-04-03 00:41:17
|
magda-io/magda
|
https://api.github.com/repos/magda-io/magda
|
closed
|
Date filter breaks order by quality
|
bug priority: high
|
### Problem description
When filtering by date, there's no dataset quality order being applied to the datasets. It still seems to work for other filters.
### Problem reproduction steps
1. Go to search
2. Filter by whatever the existing min/max are (e.g. https://data.gov.au/search?dateFrom=1492-01-31T13%3A55%3A08.000Z&dateTo=2998-03-31T13%3A00%3A00.000Z)
3. Note that the first page is probably all 0-star datasets
4. Select a 3 star format (e.g. WMS)
5. Note that there are in fact 3 star datasets that come up
|
1.0
|
Date filter breaks order by quality - ### Problem description
When filtering by date, there's no dataset quality order being applied to the datasets. It still seems to work for other filters.
### Problem reproduction steps
1. Go to search
2. Filter by whatever the existing min/max are (e.g. https://data.gov.au/search?dateFrom=1492-01-31T13%3A55%3A08.000Z&dateTo=2998-03-31T13%3A00%3A00.000Z)
3. Note that the first page is probably all 0-star datasets
4. Select a 3 star format (e.g. WMS)
5. Note that there are in fact 3 star datasets that come up
|
non_defect
|
date filter breaks order by quality problem description when filtering by date there s no dataset quality order being applied to the datasets it still seems to work for other filters problem reproduction steps go to search filter by whatever the existing min max are e g note that the first page is probably all star datasets select a star format e g wms note that there are in fact star datasets that come up
| 0
|
304,433
| 23,065,780,758
|
IssuesEvent
|
2022-07-25 13:49:22
|
RoyalHaskoningDHV/sam
|
https://api.github.com/repos/RoyalHaskoningDHV/sam
|
opened
|
Update documentation for release
|
Priority: High Type: Documentation
|
We should update the documentation for release and also fix the autodoc functionality, since currently it seems broken on: https://sam-rhdhv.readthedocs.io/en/latest/
- [ ] Move important information from General documents to a new "Introduction" section
- [ ] Remove General documents section
- [ ] Replace example notebooks with the new examples
- [ ] Fix autodoc errors so the docs work on readthedocs
|
1.0
|
Update documentation for release - We should update the documentation for release and also fix the autodoc functionality, since currently it seems broken on: https://sam-rhdhv.readthedocs.io/en/latest/
- [ ] Move important information from General documents to a new "Introduction" section
- [ ] Remove General documents section
- [ ] Replace example notebooks with the new examples
- [ ] Fix autodoc errors so the docs work on readthedocs
|
non_defect
|
update documentation for release we should update the documentation for release and also fix the autodoc functionality since currently it seems broken on move important information from general documents to a new introduction section remove general documents section replace example notebooks with the new examples fix autodoc errors so the docs work on readthedocs
| 0
|
77,924
| 10,025,633,568
|
IssuesEvent
|
2019-07-17 03:06:56
|
RagtagOpen/freefrom-compensation-api
|
https://api.github.com/repos/RagtagOpen/freefrom-compensation-api
|
closed
|
Replace README
|
documentation good first issue
|
- [x] Add a description of the project
- [x] Link to CONTRIBUTING.md
- [x] Document how to build the project locally, including any environment variables that need to be set
|
1.0
|
Replace README - - [x] Add a description of the project
- [x] Link to CONTRIBUTING.md
- [x] Document how to build the project locally, including any environment variables that need to be set
|
non_defect
|
replace readme add a description of the project link to contributing md document how to build the project locally including any environment variables that need to be set
| 0
|
51,491
| 21,692,031,027
|
IssuesEvent
|
2022-05-09 16:12:42
|
adrianocortes/TrinusTech
|
https://api.github.com/repos/adrianocortes/TrinusTech
|
closed
|
🛑 prodapitedservice is down
|
status prodapitedservice
|
In [`a743c2b`](https://github.com/adrianocortes/TrinusTech/commit/a743c2b8afa1996c6f4f2923327b0789fee29365), prodapitedservice (https://prodapitedservice.azurewebsites.net/swagger/index.html) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
1.0
|
🛑 prodapitedservice is down - In [`a743c2b`](https://github.com/adrianocortes/TrinusTech/commit/a743c2b8afa1996c6f4f2923327b0789fee29365), prodapitedservice (https://prodapitedservice.azurewebsites.net/swagger/index.html) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
non_defect
|
🛑 prodapitedservice is down in prodapitedservice was down http code response time ms
| 0
|
226,506
| 7,520,185,590
|
IssuesEvent
|
2018-04-12 13:51:38
|
ArmaAchilles/Achilles
|
https://api.github.com/repos/ArmaAchilles/Achilles
|
opened
|
ACE3 & Achilles: Ensure a Consistent Modification of Vanilla Modules
|
3rd party mod priority/medium review
|
**Arma 3 Version:** `1.82` (stable)
**CBA Version:** `3.x.x` (stable)
**ACE3 Version:** `3.12.1` (stable)
**Achilles:** `1.0.2` (stable)
**Mods:**
```
- CBA_A3
- ace
- Achilles
```
**Description:**
- ACE3 as well as Achilles are modifying the same vanilla modules in a few cases. This is a review thread to discuss changes in both mods in order to ensure a consistent behaviour of these modules with and without the other mod.
**Active Thread:**
- acemod/ACE3/#5957
|
1.0
|
ACE3 & Achilles: Ensure a Consistent Modification of Vanilla Modules - **Arma 3 Version:** `1.82` (stable)
**CBA Version:** `3.x.x` (stable)
**ACE3 Version:** `3.12.1` (stable)
**Achilles:** `1.0.2` (stable)
**Mods:**
```
- CBA_A3
- ace
- Achilles
```
**Description:**
- ACE3 as well as Achilles are modifying the same vanilla modules in a few cases. This is a review thread to discuss changes in both mods in order to ensure a consistent behaviour of these modules with and without the other mod.
**Active Thread:**
- acemod/ACE3/#5957
|
non_defect
|
achilles ensure a consistent modification of vanilla modules arma version stable cba version x x stable version stable achilles stable mods cba ace achilles description as well as achilles are modifying the same vanilla modules in a few cases this is a review thread to discuss changes in both mods in order to ensure a consistent behaviour of these modules with and without the other mod active thread acemod
| 0
|
176,152
| 6,556,895,459
|
IssuesEvent
|
2017-09-06 15:31:29
|
strongloop/loopback
|
https://api.github.com/repos/strongloop/loopback
|
closed
|
Allow event streams to send ping data
|
feature needs-priority patch-welcome stale
|
# Problem
Some providers (like Heroku) need long-polling HTTP streams to send data every x seconds because they have hardcoded timeouts on the requests. Heroku needs a response within 30 seconds and after that within 50 seconds. Because of this, the event-stream is completely useless without a ping event.
# Workaround
I copied the event-stream function from the PersistedModel and added a simple timer function to send a null byte every 20 seconds.
Note: I have no idea if sending a null byte is a good idea. It feels dirty but it works?
Code:
``` javascript
var PassThrough = require('stream').PassThrough;
module.exports = function(ModelName) {
ModelName.stream = function(options, cb) {
if (typeof options === 'function') {
cb = options;
options = undefined;
}
var idName = this.getIdName();
var Model = this;
var changes = new PassThrough({
objectMode: true
});
var writeable = true;
var pingTimer = setInterval(function() {
if (writeable) {
changes.write("\0");
}
}, 20 * 1e3);
changes.destroy = function() {
changes.removeAllListeners('error');
changes.removeAllListeners('end');
writeable = false;
changes = null;
clearInterval(pingTimer);
};
changes.on('error', function() {
writeable = false;
});
changes.on('end', function() {
writeable = false;
});
process.nextTick(function() {
cb(null, changes);
});
Model.observe('after save', createChangeHandler('save'));
Model.observe('after delete', createChangeHandler('delete'));
function createChangeHandler(type) {
return function(ctx, next) {
// since it might have set to null via destroy
if (!changes) {
return next();
}
var where = ctx.where;
var data = ctx.instance || ctx.data;
var whereId = where && where[idName];
// the data includes the id
// or the where includes the id
var target;
if (data && (data[idName] || data[idName] === 0)) {
target = data[idName];
} else if (where && (where[idName] || where[idName] === 0)) {
target = where[idName];
}
var hasTarget = target === 0 || !!target;
var change = {
target: target,
where: where,
data: data
};
switch (type) {
case 'save':
if (ctx.isNewInstance === undefined) {
change.type = hasTarget ? 'update' : 'create';
} else {
change.type = ctx.isNewInstance ? 'create' : 'update';
}
break;
case 'delete':
change.type = 'remove';
break;
}
// TODO(ritch) this is ugly... maybe a ReadableStream would be better
if (writeable) {
changes.write(change);
}
next();
};
}
};
ModelName.remoteMethod('stream', {
description: 'Create a get stream with a 20 second ping',
accessType: 'READ',
http: [{
verb: 'get',
path: '/stream'
}],
accepts: {
arg: 'options',
type: 'object'
},
returns: {
arg: 'changes',
type: 'ReadableStream',
json: true
}
});
};
```
Even though this all works fine, it would be better (in my opinion) if the change streams allowed you to set an option to enable a ping. The `options` flag is currently not used in the function.
I'd like to hear your thoughts and a simple review of the proposed workaround.
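The unused `options` argument could carry exactly this switch. A minimal sketch of how an opt-in ping could be resolved — the `ping` and `pingInterval` option names are hypothetical here, not existing LoopBack options:

```javascript
// Resolve the keep-alive interval from the stream options.
// Returns the interval in milliseconds, or null when pings are disabled.
// Pings are opt-in; the 20 s default matches the workaround above and
// stays well under Heroku's 30 s first-byte timeout.
function resolvePingInterval(options) {
  if (!options || !options.ping) return null;
  var ms = options.pingInterval;
  return (typeof ms === 'number' && ms > 0) ? ms : 20 * 1e3;
}

// The timer setup in the workaround would then become conditional:
//   var ms = resolvePingInterval(options);
//   var pingTimer = ms && setInterval(function() {
//     if (writeable) changes.write('\0');
//   }, ms);
```

With that in place, `changes.destroy` would only need `if (pingTimer) clearInterval(pingTimer);` and existing callers that pass no options keep the current behaviour.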
|
1.0
|
Allow event streams to send ping data - # Problem
Some providers (like Heroku) need long-polling HTTP streams to send data every x seconds because they have hardcoded timeouts on the requests. Heroku needs a response within 30 seconds and after that within 50 seconds. Because of this, the event-stream is completely useless without a ping event.
# Workaround
I copied the event-stream function from the PersistedModel and added a simple timer function to send a null byte every 20 seconds.
Note: I have no idea if sending a null byte is a good idea. It feels dirty but it works?
Code:
``` javascript
var PassThrough = require('stream').PassThrough;
module.exports = function(ModelName) {
ModelName.stream = function(options, cb) {
if (typeof options === 'function') {
cb = options;
options = undefined;
}
var idName = this.getIdName();
var Model = this;
var changes = new PassThrough({
objectMode: true
});
var writeable = true;
var pingTimer = setInterval(function() {
if (writeable) {
changes.write("\0");
}
}, 20 * 1e3);
changes.destroy = function() {
changes.removeAllListeners('error');
changes.removeAllListeners('end');
writeable = false;
changes = null;
clearInterval(pingTimer);
};
changes.on('error', function() {
writeable = false;
});
changes.on('end', function() {
writeable = false;
});
process.nextTick(function() {
cb(null, changes);
});
Model.observe('after save', createChangeHandler('save'));
Model.observe('after delete', createChangeHandler('delete'));
function createChangeHandler(type) {
return function(ctx, next) {
// since it might have set to null via destroy
if (!changes) {
return next();
}
var where = ctx.where;
var data = ctx.instance || ctx.data;
var whereId = where && where[idName];
// the data includes the id
// or the where includes the id
var target;
if (data && (data[idName] || data[idName] === 0)) {
target = data[idName];
} else if (where && (where[idName] || where[idName] === 0)) {
target = where[idName];
}
var hasTarget = target === 0 || !!target;
var change = {
target: target,
where: where,
data: data
};
switch (type) {
case 'save':
if (ctx.isNewInstance === undefined) {
change.type = hasTarget ? 'update' : 'create';
} else {
change.type = ctx.isNewInstance ? 'create' : 'update';
}
break;
case 'delete':
change.type = 'remove';
break;
}
// TODO(ritch) this is ugly... maybe a ReadableStream would be better
if (writeable) {
changes.write(change);
}
next();
};
}
};
ModelName.remoteMethod('stream', {
description: 'Create a get stream with a 20 second ping',
accessType: 'READ',
http: [{
verb: 'get',
path: '/stream'
}],
accepts: {
arg: 'options',
type: 'object'
},
returns: {
arg: 'changes',
type: 'ReadableStream',
json: true
}
});
};
```
Even though this all works fine, it would be better (in my opinion) if the change streams allowed you to set an option to enable a ping. The `options` flag is currently not used in the function.
I'd like to hear your thoughts and a simple review of the proposed workaround.
|
non_defect
|
allow event streams to send ping data problem some providers like heroku need long polling http streams to send data every x seconds because they have hardcoded timeouts on the requests heroku needs a response within seconds and after that within seconds because of this the event stream is completely useless without a ping event workaround i copied the event stream function from the persistedmodel and added a simple timer function to send a null byte every seconds note i have no idea if sending a null byte is a good idea it feels dirty but it works code javascript var passthrough require stream passthrough module exports function modelname modelname stream function options cb if typeof options function cb options options undefined var idname this getidname var model this var changes new passthrough objectmode true var writeable true var pingtimer setinterval function if writeable changes write changes destroy function changes removealllisteners error changes removealllisteners end writeable false changes null clearinterval pingtimer changes on error function writeable false changes on end function writeable false process nexttick function cb null changes model observe after save createchangehandler save model observe after delete createchangehandler delete function createchangehandler type return function ctx next since it might have set to null via destroy if changes return next var where ctx where var data ctx instance ctx data var whereid where where the data includes the id or the where includes the id var target if data data data target data else if where where where target where var hastarget target target var change target target where where data data switch type case save if ctx isnewinstance undefined change type hastarget update create else change type ctx isnewinstance create update break case delete change type remove break todo ritch this is ugly maybe a readablestream would be better if writeable changes write change next modelname remotemethod stream 
description create a get stream with a second ping accesstype read http verb get path stream accepts arg options type object returns arg changes type readablestream json true even though this all works fine it would better in my opinion if the change streams would allow you to set an option to enable a ping the options flag is currently not used in the function would like to hear your thoughts and a simple review of the proposed workaround
| 0
|
592,380
| 17,877,082,138
|
IssuesEvent
|
2021-09-07 06:14:42
|
code-ready/crc
|
https://api.github.com/repos/code-ready/crc
|
closed
|
Error in crc start
|
kind/bug priority/minor os/windows status/need more information status/stale
|
Please, would anyone help me? I've got this after running ./crc start on Windows 10 Pro:
```
PS C:\temp> ./crc start --log-level debug
DEBU CodeReady Containers version: 1.27.0+3d6bc39d
DEBU OpenShift version: 4.7.11 (embedded in executable)
DEBU Running 'crc start'
DEBU Total memory of system is 17178800128 bytes
DEBU No new version available. The latest version is 1.27.0
Is 'crc daemon' running? Cannot reach daemon API: Get "http://unix/api/version": open \\.\pipe\crc-http: O sistema não pode encontrar o arquivo especificado.
PS C:\temp>
```
|
1.0
|
Error in crc start - Please, would anyone help me? I've got this after running ./crc start on Windows 10 Pro:
```
PS C:\temp> ./crc start --log-level debug
DEBU CodeReady Containers version: 1.27.0+3d6bc39d
DEBU OpenShift version: 4.7.11 (embedded in executable)
DEBU Running 'crc start'
DEBU Total memory of system is 17178800128 bytes
DEBU No new version available. The latest version is 1.27.0
Is 'crc daemon' running? Cannot reach daemon API: Get "http://unix/api/version": open \\.\pipe\crc-http: O sistema não pode encontrar o arquivo especificado.
PS C:\temp>
```
|
non_defect
|
error in crc start please would anyone help me ive got this after crc start in windows pro ps c temp crc start log level debug debu codeready containers version debu openshift version embedded in executable debu running crc start debu total memory of system is bytes debu no new version available the latest version is is crc daemon running cannot reach daemon api get open pipe crc http o sistema não pode encontrar o arquivo especificado ps c temp
| 0
|
616,372
| 19,300,835,615
|
IssuesEvent
|
2021-12-13 05:12:18
|
dzuybt/tbi
|
https://api.github.com/repos/dzuybt/tbi
|
opened
|
Search doesn't work on expenses and revenues page
|
bug high-priority
|
When we try to search for something and hit enter, the system returns an error
|
1.0
|
Search doesn't work on expenses and revenues page - When we try to search for something and hit enter, the system returns an error
|
non_defect
|
search doesn t work on expenses and revenues page when we try to search for something and hit enter the system returns an error
| 0
|
375,334
| 11,102,829,207
|
IssuesEvent
|
2019-12-17 01:30:53
|
kartevonmorgen/kartevonmorgen
|
https://api.github.com/repos/kartevonmorgen/kartevonmorgen
|
closed
|
Landing page: show map when clicking on gray map
|
12 low-priority
|
When clicking on the gray map on the landing page, one could be redirected to the normal map view.
|
1.0
|
Landing page: show map when clicking on gray map - When clicking on the gray map on the landing page, one could be redirected to the normal map view.
|
non_defect
|
landing page show map when clicking on gray map when clicking on the gray map on the landing page one could be redirected to the normal map view
| 0
|
30,096
| 6,022,359,927
|
IssuesEvent
|
2017-06-07 20:53:05
|
CenturyLinkCloud/mdw
|
https://api.github.com/repos/CenturyLinkCloud/mdw
|
closed
|
Deleting a jar asset causes runtime problems
|
defect
|
Recently, when we renamed a jar to mariaDB4j-db-osx-2.1.9.jar, we got the following error on server startup after a Git import. Apparently we don't handle the case well where a jar asset exists in the Archive but not in the current assets.
```
java.io.FileNotFoundException: /opt/apache-tomcat-8/sd-workflow/assets/com/centurylink/mdw/db/mariaDB4j-db-mac-2.1.9.jar (No such file or directory)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:219)
at java.util.zip.ZipFile.<init>(ZipFile.java:149)
at java.util.jar.JarFile.<init>(JarFile.java:166)
at java.util.jar.JarFile.<init>(JarFile.java:130)
at com.centurylink.mdw.cloud.CloudClassLoader.findInJarFile(CloudClassLoader.java:205)
at com.centurylink.mdw.cloud.CloudClassLoader.findInJarAssets(CloudClassLoader.java:240)
at com.centurylink.mdw.cloud.CloudClassLoader.hasClass(CloudClassLoader.java:265)
at com.centurylink.mdw.model.workflow.Package.getActivityImplementor(Package.java:507)
at com.centurylink.mdw.services.process.ProcessExecutorImpl.prepareActivityInstance(ProcessExecutorImpl.java:720)
at com.centurylink.mdw.services.process.ProcessExecutor.prepareActivityInstance(ProcessExecutor.java:230)
at com.centurylink.mdw.services.process.ProcessEngineDriver.executeActivity(ProcessEngineDriver.java:332)
at com.centurylink.mdw.services.process.ProcessEngineDriver.processEvent(ProcessEngineDriver.java:612)
at com.centurylink.mdw.services.process.ProcessEngineDriver.executeServiceProcess(ProcessEngineDriver.java:830)
at com.centurylink.mdw.services.process.ProcessEngineDriver.invokeService(ProcessEngineDriver.java:741)
at com.centurylink.mdw.services.process.ProcessEngineDriver.invokeService(ProcessEngineDriver.java:679)
at com.centurylink.mdw.services.workflow.WorkflowServicesImpl.invokeServiceProcess(WorkflowServicesImpl.java:913)
```
|
1.0
|
Deleting a jar asset causes runtime problems - Recently, when we renamed a jar to mariaDB4j-db-osx-2.1.9.jar, we got the following error on server startup after a Git import. Apparently we don't handle the case well where a jar asset exists in the Archive but not in the current assets.
```
java.io.FileNotFoundException: /opt/apache-tomcat-8/sd-workflow/assets/com/centurylink/mdw/db/mariaDB4j-db-mac-2.1.9.jar (No such file or directory)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:219)
at java.util.zip.ZipFile.<init>(ZipFile.java:149)
at java.util.jar.JarFile.<init>(JarFile.java:166)
at java.util.jar.JarFile.<init>(JarFile.java:130)
at com.centurylink.mdw.cloud.CloudClassLoader.findInJarFile(CloudClassLoader.java:205)
at com.centurylink.mdw.cloud.CloudClassLoader.findInJarAssets(CloudClassLoader.java:240)
at com.centurylink.mdw.cloud.CloudClassLoader.hasClass(CloudClassLoader.java:265)
at com.centurylink.mdw.model.workflow.Package.getActivityImplementor(Package.java:507)
at com.centurylink.mdw.services.process.ProcessExecutorImpl.prepareActivityInstance(ProcessExecutorImpl.java:720)
at com.centurylink.mdw.services.process.ProcessExecutor.prepareActivityInstance(ProcessExecutor.java:230)
at com.centurylink.mdw.services.process.ProcessEngineDriver.executeActivity(ProcessEngineDriver.java:332)
at com.centurylink.mdw.services.process.ProcessEngineDriver.processEvent(ProcessEngineDriver.java:612)
at com.centurylink.mdw.services.process.ProcessEngineDriver.executeServiceProcess(ProcessEngineDriver.java:830)
at com.centurylink.mdw.services.process.ProcessEngineDriver.invokeService(ProcessEngineDriver.java:741)
at com.centurylink.mdw.services.process.ProcessEngineDriver.invokeService(ProcessEngineDriver.java:679)
at com.centurylink.mdw.services.workflow.WorkflowServicesImpl.invokeServiceProcess(WorkflowServicesImpl.java:913)
```
|
defect
|
deleting a jar asset causes runtime problems recently when we renamed a jar to db osx jar after git import we get the following error on server startup apparently we don t handle well the situation where a jar asset exists in the archive but not in the current assets java io filenotfoundexception opt apache tomcat sd workflow assets com centurylink mdw db db mac jar no such file or directory at java util zip zipfile open native method at java util zip zipfile zipfile java at java util zip zipfile zipfile java at java util jar jarfile jarfile java at java util jar jarfile jarfile java at com centurylink mdw cloud cloudclassloader findinjarfile cloudclassloader java at com centurylink mdw cloud cloudclassloader findinjarassets cloudclassloader java at com centurylink mdw cloud cloudclassloader hasclass cloudclassloader java at com centurylink mdw model workflow package getactivityimplementor package java at com centurylink mdw services process processexecutorimpl prepareactivityinstance processexecutorimpl java at com centurylink mdw services process processexecutor prepareactivityinstance processexecutor java at com centurylink mdw services process processenginedriver executeactivity processenginedriver java at com centurylink mdw services process processenginedriver processevent processenginedriver java at com centurylink mdw services process processenginedriver executeserviceprocess processenginedriver java at com centurylink mdw services process processenginedriver invokeservice processenginedriver java at com centurylink mdw services process processenginedriver invokeservice processenginedriver java at com centurylink mdw services workflow workflowservicesimpl invokeserviceprocess workflowservicesimpl java
| 1
|
443,609
| 12,796,807,350
|
IssuesEvent
|
2020-07-02 11:08:27
|
Blosc/c-blosc2
|
https://api.github.com/repos/Blosc/c-blosc2
|
closed
|
Encode runs in blocks/splits
|
enhancement high priority
|
In many situations, but especially when using the shuffle filter, one can find whole blocks/splits that are filled with a single repeated value. Allowing a way to encode this in the block/split offset itself, or in a few bytes (1 or 2) at the location the offset points to, could represent an important win.
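The detection step such an encoding needs is cheap; a sketch in plain JavaScript for illustration (c-blosc2 itself is C, and the function name is made up):

```javascript
// Check whether a block/split consists of one repeated byte value, so it
// could be stored as a tiny "run" token instead of compressed data.
// Accepts any indexable byte container (Array, Buffer, Uint8Array).
// Returns the repeated value, or null when the block is not constant.
function constantRun(block) {
  if (block.length === 0) return null;
  var v = block[0];
  for (var i = 1; i < block.length; i++) {
    if (block[i] !== v) return null;   // first mismatch: not a run
  }
  return v;
}
```

An encoder would call this once per block after filtering; a non-null result means the block can be emitted as a 1–2 byte token plus the value, which is exactly the win described above.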
|
1.0
|
Encode runs in blocks/splits - In many situations, but especially when using the shuffle filter, one can find whole blocks/splits that are filled with a single repeated value. Allowing a way to encode this in the block/split offset itself, or in a few bytes (1 or 2) at the location the offset points to, could represent an important win.
|
non_defect
|
encode runs in blocks splits in many situations but specially when using the shuffle filter one can find whole blocks splits that are filled with a repeated single value allowing a way to encode this in the block split offset or in a few bytes or where the offset points to could represent an important win
| 0
|
146,584
| 19,409,123,861
|
IssuesEvent
|
2021-12-20 07:19:06
|
REInVent650/weekendview
|
https://api.github.com/repos/REInVent650/weekendview
|
opened
|
CVE-2021-33502 (High) detected in normalize-url-3.3.0.tgz, normalize-url-1.9.1.tgz
|
security vulnerability
|
## CVE-2021-33502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>normalize-url-3.3.0.tgz</b>, <b>normalize-url-1.9.1.tgz</b></p></summary>
<p>
<details><summary><b>normalize-url-3.3.0.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz</a></p>
<p>Path to dependency file: weekendview/package.json</p>
<p>Path to vulnerable library: weekendview/node_modules/postcss-normalize-url/node_modules/normalize-url/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-4.5.12.tgz (Root Library)
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-normalize-url-4.0.1.tgz
- :x: **normalize-url-3.3.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>normalize-url-1.9.1.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz</a></p>
<p>Path to dependency file: weekendview/package.json</p>
<p>Path to vulnerable library: weekendview/node_modules/normalize-url/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-4.5.12.tgz (Root Library)
- mini-css-extract-plugin-0.9.0.tgz
- :x: **normalize-url-1.9.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/REInVent650/weekendview/commit/d4c1fcaaca8fbd3712e7c96ff41a6106f2d0b817">d4c1fcaaca8fbd3712e7c96ff41a6106f2d0b817</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs.
<p>Publish Date: 2021-05-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p>
<p>Release Date: 2021-05-24</p>
<p>Fix Resolution: normalize-url - 4.5.1, 5.3.1, 6.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-33502 (High) detected in normalize-url-3.3.0.tgz, normalize-url-1.9.1.tgz - ## CVE-2021-33502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>normalize-url-3.3.0.tgz</b>, <b>normalize-url-1.9.1.tgz</b></p></summary>
<p>
<details><summary><b>normalize-url-3.3.0.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz</a></p>
<p>Path to dependency file: weekendview/package.json</p>
<p>Path to vulnerable library: weekendview/node_modules/postcss-normalize-url/node_modules/normalize-url/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-4.5.12.tgz (Root Library)
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-normalize-url-4.0.1.tgz
- :x: **normalize-url-3.3.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>normalize-url-1.9.1.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz</a></p>
<p>Path to dependency file: weekendview/package.json</p>
<p>Path to vulnerable library: weekendview/node_modules/normalize-url/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-4.5.12.tgz (Root Library)
- mini-css-extract-plugin-0.9.0.tgz
- :x: **normalize-url-1.9.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/REInVent650/weekendview/commit/d4c1fcaaca8fbd3712e7c96ff41a6106f2d0b817">d4c1fcaaca8fbd3712e7c96ff41a6106f2d0b817</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs.
<p>Publish Date: 2021-05-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p>
<p>Release Date: 2021-05-24</p>
<p>Fix Resolution: normalize-url - 4.5.1, 5.3.1, 6.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in normalize url tgz normalize url tgz cve high severity vulnerability vulnerable libraries normalize url tgz normalize url tgz normalize url tgz normalize a url library home page a href path to dependency file weekendview package json path to vulnerable library weekendview node modules postcss normalize url node modules normalize url package json dependency hierarchy cli service tgz root library cssnano tgz cssnano preset default tgz postcss normalize url tgz x normalize url tgz vulnerable library normalize url tgz normalize a url library home page a href path to dependency file weekendview package json path to vulnerable library weekendview node modules normalize url package json dependency hierarchy cli service tgz root library mini css extract plugin tgz x normalize url tgz vulnerable library found in head commit a href found in base branch main vulnerability details the normalize url package before x before and x before for node js has a redos regular expression denial of service issue because it has exponential performance for data urls publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution normalize url step up your open source security game with whitesource
| 0
|
216
| 2,519,740,352
|
IssuesEvent
|
2015-01-18 09:01:37
|
mbunkus/mtx-trac-import-test
|
https://api.github.com/repos/mbunkus/mtx-trac-import-test
|
closed
|
dummy ticket for migration
|
C: zztest P: low R: fixed T: defect
|
**Reported by moritz on 6 Aug 2003 10:29 UTC**
This is a dummy ticket created during the migration from Trac to GitHub so that the new issue numbers will stay in sync with the old ticket numbers.
|
1.0
|
dummy ticket for migration - **Reported by moritz on 6 Aug 2003 10:29 UTC**
This is a dummy ticket created during the migration from Trac to GitHub so that the new issue numbers will stay in sync with the old ticket numbers.
|
defect
|
dummy ticket for migration reported by moritz on aug utc this is a dummy ticket created during the migration from trac to github so that the new issue numbers will stay in sync with the old ticket numbers
| 1
|
182,079
| 6,666,642,882
|
IssuesEvent
|
2017-10-03 09:09:56
|
PlasmaPy/PlasmaPy
|
https://api.github.com/repos/PlasmaPy/PlasmaPy
|
opened
|
Add bindings to functions from `physics` or `atomic` in `Plasma` and `Species` classes
|
Effort: high Priority: low Programming
|
Eventually we'd probably like to be able to define a plasma, run some simulations on it or whatever, then do `plasma.Alfven_speed()` instead of having to go through `physics.Alfven_speed(plasma.B, plasma.rho)`.
This is pretty far off as `Plasma` and `Species` are still crystallizing, but I wanted to note it down for later.
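The delegation pattern itself is simple; a minimal sketch in JavaScript (PlasmaPy is Python, so the names and wiring here are illustrative only — the method just forwards the instance's own fields to the free function):

```javascript
// Free function, analogous to physics.Alfven_speed(B, rho) in SI units:
// v_A = B / sqrt(mu0 * rho).
function alfvenSpeed(B, rho) {
  var mu0 = 4 * Math.PI * 1e-7;   // vacuum permeability
  return B / Math.sqrt(mu0 * rho);
}

// Binding: the class method forwards its own fields, so callers can write
// plasma.alfvenSpeed() instead of alfvenSpeed(plasma.B, plasma.rho).
function Plasma(B, rho) {
  this.B = B;      // magnetic field strength
  this.rho = rho;  // mass density
}
Plasma.prototype.alfvenSpeed = function () {
  return alfvenSpeed(this.B, this.rho);
};
```

The bound method stays a thin wrapper, so the physics lives in one place and the class only supplies the arguments.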
|
1.0
|
Add bindings to functions from `physics` or `atomic` in `Plasma` and `Species` classes - Eventually we'd probably like to be able to define a plasma, run some simulations on it or whatever, then do `plasma.Alfven_speed()` instead of having to go through `physics.Alfven_speed(plasma.B, plasma.rho)`.
This is pretty far off as `Plasma` and `Species` are still crystallizing, but I wanted to note it down for later.
|
non_defect
|
add bindings to functions from physics or atomic in plasma and species classes eventually we d probably like to be able to define a plasma run some simulations on it or whatever then do plasma alfven speed instead of having to go through physics alfven speed plasma b plasma rho this is pretty far off as plasma and species are still crystallizing but i wanted to note it down for later
| 0
|
198,075
| 22,617,874,932
|
IssuesEvent
|
2022-06-30 01:18:11
|
vipinsun/cactus
|
https://api.github.com/repos/vipinsun/cactus
|
opened
|
CVE-2022-2218 (High) detected in parse-url-6.0.0.tgz
|
security vulnerability
|
## CVE-2022-2218 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-url-6.0.0.tgz</b></p></summary>
<p>An advanced url parser supporting git urls too.</p>
<p>Library home page: <a href="https://registry.npmjs.org/parse-url/-/parse-url-6.0.0.tgz">https://registry.npmjs.org/parse-url/-/parse-url-6.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/parse-url/package.json</p>
<p>
Dependency Hierarchy:
- lerna-4.0.0.tgz (Root Library)
- version-4.0.0.tgz
- github-client-4.0.0.tgz
- git-url-parse-11.6.0.tgz
- git-up-4.0.5.tgz
- :x: **parse-url-6.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Cross-site Scripting (XSS) - Stored in GitHub repository ionicabizau/parse-url prior to 7.0.0.
<p>Publish Date: 2022-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2218>CVE-2022-2218</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/024912d3-f103-4daf-a1d0-567f4d9f2bf5/">https://huntr.dev/bounties/024912d3-f103-4daf-a1d0-567f4d9f2bf5/</a></p>
<p>Release Date: 2022-06-27</p>
<p>Fix Resolution: parse-url - 6.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-2218 (High) detected in parse-url-6.0.0.tgz - ## CVE-2022-2218 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-url-6.0.0.tgz</b></p></summary>
<p>An advanced url parser supporting git urls too.</p>
<p>Library home page: <a href="https://registry.npmjs.org/parse-url/-/parse-url-6.0.0.tgz">https://registry.npmjs.org/parse-url/-/parse-url-6.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/parse-url/package.json</p>
<p>
Dependency Hierarchy:
- lerna-4.0.0.tgz (Root Library)
- version-4.0.0.tgz
- github-client-4.0.0.tgz
- git-url-parse-11.6.0.tgz
- git-up-4.0.5.tgz
- :x: **parse-url-6.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Cross-site Scripting (XSS) - Stored in GitHub repository ionicabizau/parse-url prior to 7.0.0.
<p>Publish Date: 2022-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2218>CVE-2022-2218</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/024912d3-f103-4daf-a1d0-567f4d9f2bf5/">https://huntr.dev/bounties/024912d3-f103-4daf-a1d0-567f4d9f2bf5/</a></p>
<p>Release Date: 2022-06-27</p>
<p>Fix Resolution: parse-url - 6.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in parse url tgz cve high severity vulnerability vulnerable library parse url tgz an advanced url parser supporting git urls too library home page a href path to dependency file package json path to vulnerable library node modules parse url package json dependency hierarchy lerna tgz root library version tgz github client tgz git url parse tgz git up tgz x parse url tgz vulnerable library found in base branch master vulnerability details cross site scripting xss stored in github repository ionicabizau parse url prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution parse url step up your open source security game with mend
| 0
|
435,390
| 12,534,982,535
|
IssuesEvent
|
2020-06-04 20:28:46
|
markfarrell/3tier
|
https://api.github.com/repos/markfarrell/3tier
|
closed
|
Naming Convention
|
PRIORITY:NONE
|
*June 4th, 2020: closed for revision at a later date.*
Modules prefixed with:
* `Data` should only export type definitions and common type class instances for those definitions.
* `Effect` should only export functions that produce an `Effect a`, e.g. for a randomly-generated `a`.
* `FFI` should only export functions with a `foreign import` from corresponding `*.js`file.
* `Text.Parsing` should only export functions that produce a `Parser a m b`.
* ...
|
1.0
|
Naming Convention - *June 4th, 2020: closed for revision at a later date.*
Modules prefixed with:
* `Data` should only export type definitions and common type class instances for those definitions.
* `Effect` should only export functions that produce an `Effect a`, e.g. for a randomly-generated `a`.
* `FFI` should only export functions with a `foreign import` from corresponding `*.js`file.
* `Text.Parsing` should only export functions that produce a `Parser a m b`.
* ...
|
non_defect
|
naming convention june closed for revision at a later date modules prefixed with data should only export type definitions and common type class instances for those definitions effect should only export functions that produce an effect a e g for a randomly generated a ffi should only export functions with a foreign import from corresponding js file text parsing should only export functions that produce a parser a m b
| 0
|
281,911
| 21,315,447,917
|
IssuesEvent
|
2022-04-16 07:29:47
|
kevinkuo0320/pe
|
https://api.github.com/repos/kevinkuo0320/pe
|
opened
|
Limited manual testing in appendix (DG)
|
severity.Low type.DocumentationBug
|
There is only limited instructions for manual testing on "add", "saving data". No instruction for "delete", "find" or "edit" in DG
<!--session: 1650088197498-5b3e47d4-0e05-4db3-ac78-21ea3c3920b4-->
<!--Version: Web v3.4.2-->
|
1.0
|
Limited manual testing in appendix (DG) - There is only limited instructions for manual testing on "add", "saving data". No instruction for "delete", "find" or "edit" in DG
<!--session: 1650088197498-5b3e47d4-0e05-4db3-ac78-21ea3c3920b4-->
<!--Version: Web v3.4.2-->
|
non_defect
|
limited manual testing in appendix dg there is only limited instructions for manual testing on add saving data no instruction for delete find or edit in dg
| 0
|
46,362
| 13,055,900,796
|
IssuesEvent
|
2020-07-30 03:03:59
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
opened
|
clsim - hobo clsim build chokes doc build (Trac #1077)
|
Incomplete Migration Migrated from Trac combo simulation defect
|
Migrated from https://code.icecube.wisc.edu/ticket/1077
```json
{
"status": "closed",
"changetime": "2019-02-13T14:15:08",
"description": "clsim's dependency list should probably AND'd, not OR'd. \n\n{{{\n-- + clsim\n-- +-- python [symlinks] \n-- +-- Geant4 or OpenCL is not installed on your system. clsim will fail if it is not used with parameterizations. \n-- +-- numpy support (for tabulator) \n-- +-- safeprimes_base32.gz already downloaded \n-- +-- gmp support (make_safeprimes utility) \n-- +-- clsim-pybindings \n}}}\n\ngives:\n{{{\n[ 62%] Generating html from icetray-inspect of clsim\nIgnoring 'clsim': dlopen() dynamic loading error: /home/nega/i3/combo/build/lib/libclsim.so: undefined symbol: clCreateSubDevices/home/nega/i3/combo/build/CMakeFiles/clsim-inspection.xml:4: parser error : Opening and ending tag mismatch: project line 3 and icetray-inspect\n</icetray-inspect>\n ^\n/home/nega/i3/combo/build/CMakeFiles/clsim-inspection.xml:5: parser error : Premature end of data in tag icetray-inspect line 2\n\n^\nunable to parse /home/nega/i3/combo/build/CMakeFiles/clsim-inspection.xml\nmake[3]: *** [clsim/CMakeFiles/clsim-clsim-inspect] Error 6\nmake[2]: *** [clsim/CMakeFiles/clsim-clsim-inspect.dir/all] Error 2\nmake[1]: *** [CMakeFiles/docs.dir/rule] Error 2\nmake: *** [docs] Error 2\n \n~/i3/combo/build 1m 53s\n\u276f \n}}}",
"reporter": "nega",
"cc": "",
"resolution": "worksforme",
"_ts": "1550067308113782",
"component": "combo simulation",
"summary": "clsim - hobo clsim build chokes doc build",
"priority": "normal",
"keywords": "clsim docs",
"time": "2015-07-30T04:18:17",
"milestone": "",
"owner": "claudio.kopper",
"type": "defect"
}
```
|
1.0
|
clsim - hobo clsim build chokes doc build (Trac #1077) - Migrated from https://code.icecube.wisc.edu/ticket/1077
```json
{
"status": "closed",
"changetime": "2019-02-13T14:15:08",
"description": "clsim's dependency list should probably AND'd, not OR'd. \n\n{{{\n-- + clsim\n-- +-- python [symlinks] \n-- +-- Geant4 or OpenCL is not installed on your system. clsim will fail if it is not used with parameterizations. \n-- +-- numpy support (for tabulator) \n-- +-- safeprimes_base32.gz already downloaded \n-- +-- gmp support (make_safeprimes utility) \n-- +-- clsim-pybindings \n}}}\n\ngives:\n{{{\n[ 62%] Generating html from icetray-inspect of clsim\nIgnoring 'clsim': dlopen() dynamic loading error: /home/nega/i3/combo/build/lib/libclsim.so: undefined symbol: clCreateSubDevices/home/nega/i3/combo/build/CMakeFiles/clsim-inspection.xml:4: parser error : Opening and ending tag mismatch: project line 3 and icetray-inspect\n</icetray-inspect>\n ^\n/home/nega/i3/combo/build/CMakeFiles/clsim-inspection.xml:5: parser error : Premature end of data in tag icetray-inspect line 2\n\n^\nunable to parse /home/nega/i3/combo/build/CMakeFiles/clsim-inspection.xml\nmake[3]: *** [clsim/CMakeFiles/clsim-clsim-inspect] Error 6\nmake[2]: *** [clsim/CMakeFiles/clsim-clsim-inspect.dir/all] Error 2\nmake[1]: *** [CMakeFiles/docs.dir/rule] Error 2\nmake: *** [docs] Error 2\n \n~/i3/combo/build 1m 53s\n\u276f \n}}}",
"reporter": "nega",
"cc": "",
"resolution": "worksforme",
"_ts": "1550067308113782",
"component": "combo simulation",
"summary": "clsim - hobo clsim build chokes doc build",
"priority": "normal",
"keywords": "clsim docs",
"time": "2015-07-30T04:18:17",
"milestone": "",
"owner": "claudio.kopper",
"type": "defect"
}
```
|
defect
|
clsim hobo clsim build chokes doc build trac migrated from json status closed changetime description clsim s dependency list should probably and d not or d n n n clsim n python n or opencl is not installed on your system clsim will fail if it is not used with parameterizations n numpy support for tabulator n safeprimes gz already downloaded n gmp support make safeprimes utility n clsim pybindings n n ngives n n generating html from icetray inspect of clsim nignoring clsim dlopen dynamic loading error home nega combo build lib libclsim so undefined symbol clcreatesubdevices home nega combo build cmakefiles clsim inspection xml parser error opening and ending tag mismatch project line and icetray inspect n n n home nega combo build cmakefiles clsim inspection xml parser error premature end of data in tag icetray inspect line n n nunable to parse home nega combo build cmakefiles clsim inspection xml nmake error nmake error nmake error nmake error n n combo build n n reporter nega cc resolution worksforme ts component combo simulation summary clsim hobo clsim build chokes doc build priority normal keywords clsim docs time milestone owner claudio kopper type defect
| 1
|
405,655
| 27,528,161,055
|
IssuesEvent
|
2023-03-06 19:45:13
|
theta360developers/tasker-guide
|
https://api.github.com/repos/theta360developers/tasker-guide
|
closed
|
Delete theta360.guide icon (green circle) from list of icons at top of article
|
documentation enhancement
|
I think it's unclear what the theta360.guide icon (green circle) does in the list of icons at the top of the article. Please delete.
|
1.0
|
Delete theta360.guide icon (green circle) from list of icons at top of article - I think it's unclear what the theta360.guide icon (green circle) does in the list of icons at the top of the article. Please delete.
|
non_defect
|
delete guide icon green circle from list of icons at top of article i think it s unclear what the guide icon green circle does in the list of icons at the top of the article please delete
| 0
|
15,889
| 6,049,425,435
|
IssuesEvent
|
2017-06-12 18:43:54
|
MarlinFirmware/Marlin
|
https://api.github.com/repos/MarlinFirmware/Marlin
|
closed
|
1.1.2 Regression: 7kb additional size due to PROBING_HEATERS_OFF requires ADVANCED_PAUSE_FEATURE
|
Support: Building / Toolchain
|
Hello, since 1.1.2 I need ADVANCED_PAUSE_FEATURE for the PROBING_HEATERS_OFF, which consumes about 7kb extra. Imo this is a pretty big regression from 1.1.1 and there is no way to fit this onto my 128kb Melzi board.
edit1: There is another regression I am investigating, the size grew about 22kb even without adv_pause. I can't edit the topic, from 30kb to 7kb.
edit2: bisecting since "UBL Menu System 1.1" / 66db6c3 I went from being able to fit my config into 128kb (127504/98%) to a linker error region text overflowed by 6144 bytes. Will investigate where the size grew significantly.
edit3: 48f76521 added another ~3kb (2731) after some rambling (=> +500)
edit 4: ac959b1 adds another 1.5kb (1654), again after some rambling (+ 430), the rest is misconfiguration on my side (10.5kb) and some rambling after (1.5kb)
|
1.0
|
1.1.2 Regression: 7kb additional size due to PROBING_HEATERS_OFF requires ADVANCED_PAUSE_FEATURE - Hello, since 1.1.2 I need ADVANCED_PAUSE_FEATURE for the PROBING_HEATERS_OFF, which consumes about 7kb extra. Imo this is a pretty big regression from 1.1.1 and there is no way to fit this onto my 128kb Melzi board.
edit1: There is another regression I am investigating, the size grew about 22kb even without adv_pause. I can't edit the topic, from 30kb to 7kb.
edit2: bisecting since "UBL Menu System 1.1" / 66db6c3 I went from being able to fit my config into 128kb (127504/98%) to a linker error region text overflowed by 6144 bytes. Will investigate where the size grew significantly.
edit3: 48f76521 added another ~3kb (2731) after some rambling (=> +500)
edit 4: ac959b1 adds another 1.5kb (1654), again after some rambling (+ 430), the rest is misconfiguration on my side (10.5kb) and some rambling after (1.5kb)
|
non_defect
|
regression additional size due to probing heaters off requires advanced pause feature hello since i need advanced pause feature for the probing heaters off which consumes about extra imo this is a pretty big regression from and there is no way to fit this onto my melzi board there is another regression i am investigating the size grew about even without adv pause i can t edit the topic from to bisecting since ubl menu system i went from being able to fit my config into to a linker error region text overflowed by bytes will investigate where the size grew significantly added another after some rambling edit adds another again after some rambling the rest is misconfiguration on my side and some rambling after
| 0
|
171,650
| 20,984,873,271
|
IssuesEvent
|
2022-03-29 01:16:29
|
nlamirault/alan
|
https://api.github.com/repos/nlamirault/alan
|
closed
|
CVE-2021-33587 (High) detected in css-what-1.0.0.tgz - autoclosed
|
security vulnerability
|
## CVE-2021-33587 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>css-what-1.0.0.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-1.0.0.tgz">https://registry.npmjs.org/css-what/-/css-what-1.0.0.tgz</a></p>
<p>Path to dependency file: /vendor/github.com/hashicorp/vault/ui/package.json</p>
<p>Path to vulnerable library: /vendor/github.com/hashicorp/vault/ui/node_modules/css-what/package.json</p>
<p>
Dependency Hierarchy:
- ember-cli-favicon-1.0.0-beta.4.tgz (Root Library)
- broccoli-favicon-1.0.0.tgz
- favicons-4.8.6.tgz
- cheerio-0.19.0.tgz
- css-select-1.0.0.tgz
- :x: **css-what-1.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nlamirault/alan/commit/9060713df80212ee5546b36d1083fb607520eb0b">9060713df80212ee5546b36d1083fb607520eb0b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The css-what package 4.0.0 through 5.0.0 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587>CVE-2021-33587</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution (css-what): 5.0.1</p>
<p>Direct dependency fix Resolution (ember-cli-favicon): 2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-33587 (High) detected in css-what-1.0.0.tgz - autoclosed - ## CVE-2021-33587 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>css-what-1.0.0.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-1.0.0.tgz">https://registry.npmjs.org/css-what/-/css-what-1.0.0.tgz</a></p>
<p>Path to dependency file: /vendor/github.com/hashicorp/vault/ui/package.json</p>
<p>Path to vulnerable library: /vendor/github.com/hashicorp/vault/ui/node_modules/css-what/package.json</p>
<p>
Dependency Hierarchy:
- ember-cli-favicon-1.0.0-beta.4.tgz (Root Library)
- broccoli-favicon-1.0.0.tgz
- favicons-4.8.6.tgz
- cheerio-0.19.0.tgz
- css-select-1.0.0.tgz
- :x: **css-what-1.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nlamirault/alan/commit/9060713df80212ee5546b36d1083fb607520eb0b">9060713df80212ee5546b36d1083fb607520eb0b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The css-what package 4.0.0 through 5.0.0 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587>CVE-2021-33587</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution (css-what): 5.0.1</p>
<p>Direct dependency fix Resolution (ember-cli-favicon): 2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in css what tgz autoclosed cve high severity vulnerability vulnerable library css what tgz a css selector parser library home page a href path to dependency file vendor github com hashicorp vault ui package json path to vulnerable library vendor github com hashicorp vault ui node modules css what package json dependency hierarchy ember cli favicon beta tgz root library broccoli favicon tgz favicons tgz cheerio tgz css select tgz x css what tgz vulnerable library found in head commit a href found in base branch master vulnerability details the css what package through for node js does not ensure that attribute parsing has linear time complexity relative to the size of the input publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution css what direct dependency fix resolution ember cli favicon step up your open source security game with whitesource
| 0