| Column | Dtype | Range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | lengths 5 to 112 |
| repo_url | string | lengths 34 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 757 |
| labels | string | lengths 4 to 664 |
| body | string | lengths 3 to 261k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 261k |
| label | string | 2 classes |
| text | string | lengths 96 to 232k |
| binary_label | int64 | 0 to 1 |
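As a hypothetical sketch (not part of the dataset itself), the 15 columns described in the summary above can be expressed as a small schema check; the column names and dtypes are taken from the preview, and the sample row mirrors the first record shown below (long text fields elided as `"..."`):

```python
# Hypothetical schema check for one row of this dataset preview.
# Column names/dtypes come from the summary table; nothing here is
# an official loader for the dataset.
from datetime import datetime

COLUMNS = {
    "Unnamed: 0": int, "id": float, "type": str, "created_at": str,
    "repo": str, "repo_url": str, "action": str, "title": str,
    "labels": str, "body": str, "index": str, "text_combine": str,
    "label": str, "text": str, "binary_label": int,
}

def validate(row: dict) -> bool:
    """Check column set, dtypes, the 19-char timestamp format of
    created_at, and the two label columns."""
    if set(row) != set(COLUMNS):
        return False
    if not all(isinstance(row[c], t) for c, t in COLUMNS.items()):
        return False
    # created_at is always 19 characters: "YYYY-MM-DD HH:MM:SS"
    datetime.strptime(row["created_at"], "%Y-%m-%d %H:%M:%S")
    return row["binary_label"] in (0, 1) and row["label"] in ("defect", "non_defect")

sample = {
    "Unnamed: 0": 18479, "id": 24550723765.0, "type": "IssuesEvent",
    "created_at": "2022-10-12 12:24:40",
    "repo": "GoogleCloudPlatform/fda-mystudies",
    "repo_url": "https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies",
    "action": "closed", "title": "...", "labels": "...", "body": "...",
    "index": "3.0", "text_combine": "...", "label": "non_defect",
    "text": "...", "binary_label": 0,
}
```

Note that `binary_label` simply mirrors `label` (`defect` → 1, `non_defect` → 0) in every record below.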
---
Unnamed: 0: 18,479
id: 24,550,723,765
type: IssuesEvent
created_at: 2022-10-12 12:24:40
repo: GoogleCloudPlatform/fda-mystudies
repo_url: https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
action: closed
title: [iOS] Eligibility test > The following changes needs to be done on the Eligibility questions screen
labels: Bug P1 iOS Process: Fixed Process: Tested QA Process: Tested dev
body:
1. Remove the arrow marks present which are present next to the Data sharing options
2. AR: A participant is automatically navigating to the next screen after selecting any one of the options
ER: After selecting the option, the participant should stay on the same screen and participant should click on
next button to navigate to the next page

index: 3.0
text_combine:
[iOS] Eligibility test > The following changes needs to be done on the Eligibility questions screen - 1. Remove the arrow marks present which are present next to the Data sharing options
2. AR: A participant is automatically navigating to the next screen after selecting any one of the options
ER: After selecting the option, the participant should stay on the same screen and participant should click on
next button to navigate to the next page

label: non_defect
text:
eligibility test the following changes needs to be done on the eligibility questions screen remove the arrow marks present which are present next to the data sharing options ar a participant is automatically navigating to the next screen after selecting any one of the options er after selecting the option the participant should stay on the same screen and participant should click on next button to navigate to the next page
binary_label: 0
---
Unnamed: 0: 53,413
id: 13,261,548,649
type: IssuesEvent
created_at: 2020-08-20 20:06:00
repo: icecube-trac/tix4
repo_url: https://api.github.com/repos/icecube-trac/tix4
action: closed
title: [steamshovel] Save frame is broken if filters are on (Trac #1347)
labels: Migrated from Trac combo core defect
body:
Save frame does not work as intended when the frame stream is filtered.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1347">https://code.icecube.wisc.edu/projects/icecube/ticket/1347</a>, reported by hdembinskiand owned by hdembinski</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-09-17T19:04:23",
"_ts": "1442516663246308",
"description": "Save frame does not work as intended when the frame stream is filtered.",
"reporter": "hdembinski",
"cc": "dschultz",
"resolution": "fixed",
"time": "2015-09-15T19:36:54",
"component": "combo core",
"summary": "[steamshovel] Save frame is broken if filters are on",
"priority": "blocker",
"keywords": "",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
</p>
</details>
index: 1.0
text_combine:
[steamshovel] Save frame is broken if filters are on (Trac #1347) - Save frame does not work as intended when the frame stream is filtered.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1347">https://code.icecube.wisc.edu/projects/icecube/ticket/1347</a>, reported by hdembinskiand owned by hdembinski</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-09-17T19:04:23",
"_ts": "1442516663246308",
"description": "Save frame does not work as intended when the frame stream is filtered.",
"reporter": "hdembinski",
"cc": "dschultz",
"resolution": "fixed",
"time": "2015-09-15T19:36:54",
"component": "combo core",
"summary": "[steamshovel] Save frame is broken if filters are on",
"priority": "blocker",
"keywords": "",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
</p>
</details>
label: defect
text:
save frame is broken if filters are on trac save frame does not work as intended when the frame stream is filtered migrated from json status closed changetime ts description save frame does not work as intended when the frame stream is filtered reporter hdembinski cc dschultz resolution fixed time component combo core summary save frame is broken if filters are on priority blocker keywords milestone owner hdembinski type defect
binary_label: 1
---
Unnamed: 0: 429,844
id: 30,106,159,582
type: IssuesEvent
created_at: 2023-06-30 01:36:11
repo: romkey/give-me-a-sign
repo_url: https://api.github.com/repos/romkey/give-me-a-sign
action: opened
title: Store DEBUG setting in Data
labels: documentation
body:
Currently DEBUG is hardcoded in the software; it should be stored in Data so that it can be remotely updated
index: 1.0
text_combine:
Store DEBUG setting in Data - Currently DEBUG is hardcoded in the software; it should be stored in Data so that it can be remotely updated
label: non_defect
text:
store debug setting in data currently debug is hardcoded in the software it should be stored in data so that it can be remotely updated
binary_label: 0
---
Unnamed: 0: 67,894
id: 21,301,422,886
type: IssuesEvent
created_at: 2022-04-15 04:03:24
repo: klubcoin/lcn-mobile
repo_url: https://api.github.com/repos/klubcoin/lcn-mobile
action: closed
title: [Klubcoin Partners] Fix fit Partner Images on image container at Klubcoin Partner Details Screen.
labels: Onboarding and Authentication Services Defect Could Have Trivial Navigation / Drawer Services
body:
### **Description:**
Fit Partner Images on image container at Klubcoin Partner Details Screen.
**Build Environment:** Prod Candidate Environment
**Affects Version:** 1.0.0.prod.1
**Device Platform:** Android
**Device OS:** 11
**Test Device:** OnePlus 7T Pro
### **Pre-condition:**
1. User successfully installed Klubcoin App
2. User already launched Klubcoin App
3. User has an existing Wallet Account that is not yet verified
### **Steps to Reproduce:**
1. Tap Get Started
2. Tap Check Klubcoin Partners
3. Select Any Partner
4. View Partner Details
### **Expected Result:**
Display Images fit on its container
### **Actual Result:**
Displaying empty spaces on left and right side of images
### **Attachment/s:**

index: 1.0
text_combine:
[Klubcoin Partners] Fix fit Partner Images on image container at Klubcoin Partner Details Screen. - ### **Description:**
Fit Partner Images on image container at Klubcoin Partner Details Screen.
**Build Environment:** Prod Candidate Environment
**Affects Version:** 1.0.0.prod.1
**Device Platform:** Android
**Device OS:** 11
**Test Device:** OnePlus 7T Pro
### **Pre-condition:**
1. User successfully installed Klubcoin App
2. User already launched Klubcoin App
3. User has an existing Wallet Account that is not yet verified
### **Steps to Reproduce:**
1. Tap Get Started
2. Tap Check Klubcoin Partners
3. Select Any Partner
4. View Partner Details
### **Expected Result:**
Display Images fit on its container
### **Actual Result:**
Displaying empty spaces on left and right side of images
### **Attachment/s:**

label: defect
text:
fix fit partner images on image container at klubcoin partner details screen description fit partner images on image container at klubcoin partner details screen build environment prod candidate environment affects version prod device platform android device os test device oneplus pro pre condition user successfully installed klubcoin app user already launched klubcoin app user has an existing wallet account that is not yet verified steps to reproduce tap get started tap check klubcoin partners select any partner view partner details expected result display images fit on its container actual result displaying empty spaces on left and right side of images attachment s
binary_label: 1
---
Unnamed: 0: 27,574
id: 13,306,170,307
type: IssuesEvent
created_at: 2020-08-25 19:47:08
repo: yalelibrary/YUL-DC
repo_url: https://api.github.com/repos/yalelibrary/YUL-DC
action: closed
title: SPIKE: convert persistence and load balancer scripts to CloudFormation
labels: performance team
body:
**ACCEPTANCE**
- [x] Figure out level of effort to convert the current build scripts to CloudFormation templates
- [x] Do a 1-2 hour review of other potential technologies (e.g. TerraForm) for managing infrastructure configurations
- [x] Recommend a direction for going forward (CloudFormation, TerraForm, Bash, other?).
index: True
text_combine:
SPIKE: convert persistence and load balancer scripts to CloudFormation - **ACCEPTANCE**
- [x] Figure out level of effort to convert the current build scripts to CloudFormation templates
- [x] Do a 1-2 hour review of other potential technologies (e.g. TerraForm) for managing infrastructure configurations
- [x] Recommend a direction for going forward (CloudFormation, TerraForm, Bash, other?).
label: non_defect
text:
spike convert persistence and load balancer scripts to cloudformation acceptance figure out level of effort to convert the current build scripts to cloudformation templates do a hour review of other potential technologies e g terraform for managing infrastructure configurations recommend a direction for going forward cloudformation terraform bash other
binary_label: 0
---
Unnamed: 0: 29,920
id: 11,786,082,989
type: IssuesEvent
created_at: 2020-03-17 11:33:24
repo: scriptex/atanas.info
repo_url: https://api.github.com/repos/scriptex/atanas.info
action: closed
title: CVE-2020-7598 (High) detected in multiple libraries
labels: security vulnerability
body:
## CVE-2020-7598 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-0.0.8.tgz</b>, <b>minimist-0.0.10.tgz</b>, <b>minimist-1.2.0.tgz</b></p></summary>
<p>
<details><summary><b>minimist-0.0.8.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/atanas.info/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/atanas.info/node_modules/mkdirp/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- babel-loader-8.0.6.tgz (Root Library)
- mkdirp-0.5.1.tgz
- :x: **minimist-0.0.8.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-0.0.10.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/atanas.info/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/atanas.info/node_modules/optimist/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- webpack-spritesmith-1.1.0.tgz (Root Library)
- spritesheet-templates-10.5.0.tgz
- handlebars-4.7.3.tgz
- optimist-0.6.1.tgz
- :x: **minimist-0.0.10.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-1.2.0.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.7.tgz (Root Library)
- chokidar-2.1.8.tgz
- fsevents-1.2.11.tgz
- node-pre-gyp-0.14.0.tgz
- rc-1.2.8.tgz
- :x: **minimist-1.2.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/scriptex/atanas.info/commit/8ce79d0b9f2a35f1c6df375d78b08df90e76b51a">8ce79d0b9f2a35f1c6df375d78b08df90e76b51a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload.
<p>Publish Date: 2020-03-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p>
<p>Release Date: 2020-03-11</p>
<p>Fix Resolution: minimist - 0.2.1,1.2.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2020-7598 (High) detected in multiple libraries - ## CVE-2020-7598 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-0.0.8.tgz</b>, <b>minimist-0.0.10.tgz</b>, <b>minimist-1.2.0.tgz</b></p></summary>
<p>
<details><summary><b>minimist-0.0.8.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/atanas.info/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/atanas.info/node_modules/mkdirp/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- babel-loader-8.0.6.tgz (Root Library)
- mkdirp-0.5.1.tgz
- :x: **minimist-0.0.8.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-0.0.10.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/atanas.info/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/atanas.info/node_modules/optimist/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- webpack-spritesmith-1.1.0.tgz (Root Library)
- spritesheet-templates-10.5.0.tgz
- handlebars-4.7.3.tgz
- optimist-0.6.1.tgz
- :x: **minimist-0.0.10.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-1.2.0.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.7.tgz (Root Library)
- chokidar-2.1.8.tgz
- fsevents-1.2.11.tgz
- node-pre-gyp-0.14.0.tgz
- rc-1.2.8.tgz
- :x: **minimist-1.2.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/scriptex/atanas.info/commit/8ce79d0b9f2a35f1c6df375d78b08df90e76b51a">8ce79d0b9f2a35f1c6df375d78b08df90e76b51a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload.
<p>Publish Date: 2020-03-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p>
<p>Release Date: 2020-03-11</p>
<p>Fix Resolution: minimist - 0.2.1,1.2.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_defect
text:
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries minimist tgz minimist tgz minimist tgz minimist tgz parse argument options library home page a href path to dependency file tmp ws scm atanas info package json path to vulnerable library tmp ws scm atanas info node modules mkdirp node modules minimist package json dependency hierarchy babel loader tgz root library mkdirp tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href path to dependency file tmp ws scm atanas info package json path to vulnerable library tmp ws scm atanas info node modules optimist node modules minimist package json dependency hierarchy webpack spritesmith tgz root library spritesheet templates tgz handlebars tgz optimist tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href dependency hierarchy browser sync tgz root library chokidar tgz fsevents tgz node pre gyp tgz rc tgz x minimist tgz vulnerable library found in head commit a href vulnerability details minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution minimist step up your open source security game with whitesource
binary_label: 0
---
Unnamed: 0: 399,073
id: 11,742,672,277
type: IssuesEvent
created_at: 2020-03-12 01:39:51
repo: thaliawww/concrexit
repo_url: https://api.github.com/repos/thaliawww/concrexit
action: closed
title: Use aliases instead of aliasses
labels: mailinglists priority: low technical change
body:
In GitLab by @joren485 on May 27, 2019, 20:47
<!--
This template is for changes that do not affect the behaviour of the website.
** If you are not in the Technicie, there is a very high chance that you
should not use this template
Examples:
* Changes in CI
* Refactoring of code
* Technicie-facing documentation
-->
### One-sentence description
<!-- Please provide a brief description of the issue. Don't go into specifics. -->
Replace "aliasses" with "aliases" in the `mailinglists` app.
### Why?
<!-- Please motivate why we should invest into this change -->
"aliases" is the correct plural of "alias".
index: 1.0
text_combine:
Use aliases instead of aliasses - In GitLab by @joren485 on May 27, 2019, 20:47
<!--
This template is for changes that do not affect the behaviour of the website.
** If you are not in the Technicie, there is a very high chance that you
should not use this template
Examples:
* Changes in CI
* Refactoring of code
* Technicie-facing documentation
-->
### One-sentence description
<!-- Please provide a brief description of the issue. Don't go into specifics. -->
Replace "aliasses" with "aliases" in the `mailinglists` app.
### Why?
<!-- Please motivate why we should invest into this change -->
"aliases" is the correct plural of "alias".
label: non_defect
text:
use aliases instead of aliasses in gitlab by on may this template is for changes that do not affect the behaviour of the website if you are not in the technicie there is a very high chance that you should not use this template examples changes in ci refactoring of code technicie facing documentation one sentence description replace aliasses with aliases in the mailinglists app why aliases is the correct plural of alias
binary_label: 0
---
Unnamed: 0: 429,148
id: 12,421,474,619
type: IssuesEvent
created_at: 2020-05-23 17:02:25
repo: roed314/seminars
repo_url: https://api.github.com/repos/roed314/seminars
action: closed
title: Spacing above buttons on Account page
labels: low priority
body:
This is a very minor suggestion, but I think it might look slightly better if the vertical spacing above each of the buttons "Update details" and "Change password" was half as much.
index: 1.0
text_combine:
Spacing above buttons on Account page - This is a very minor suggestion, but I think it might look slightly better if the vertical spacing above each of the buttons "Update details" and "Change password" was half as much.
label: non_defect
text:
spacing above buttons on account page this is a very minor suggestion but i think it might look slightly better if the vertical spacing above each of the buttons update details and change password was half as much
binary_label: 0
---
Unnamed: 0: 239,873
id: 19,975,123,279
type: IssuesEvent
created_at: 2022-01-29 01:25:46
repo: kevin-ghannoum/soen490
repo_url: https://api.github.com/repos/kevin-ghannoum/soen490
action: closed
title: [Acceptance test] As a business owner, I want to view the logged hours/pay
labels: acceptance test
body:
Acceptance referring to this user story: #80
index: 1.0
text_combine:
[Acceptance test] As a business owner, I want to view the logged hours/pay - Acceptance referring to this user story: #80
label: non_defect
text:
as a business owner i want to view the logged hours pay acceptance referring to this user story
binary_label: 0
---
Unnamed: 0: 87,253
id: 25,080,334,049
type: IssuesEvent
created_at: 2022-11-07 18:43:58
repo: xamarin/xamarin-android
repo_url: https://api.github.com/repos/xamarin/xamarin-android
action: closed
title: Signing a release app with the same keystore differs in Visual Studio vs Android Studio
labels: Area: App+Library Build needs-triage
body:
### Android application type
Classic Xamarin.Android (MonoAndroid12.0, etc.)
### Affected platform version
VS 8.10.20 (build 0)
### Description
Trying to migrate to Android Studio native development of my app, and using the same keystore details as used in Visual Studio, how can the app signature be different ? This means users cannot update the app when development switches from VS to Android Studio.
### Steps to Reproduce
1.) create app, create keystore and sign the app
2.) create the same app in Android Studio, sign the app with gradle using same keystore/pass
3) app signatures are different.
### Did you find any workaround?
No
### Relevant log output
_No response_
index: 1.0
text_combine:
Signing a release app with the same keystore differs in Visual Studio vs Android Studio - ### Android application type
Classic Xamarin.Android (MonoAndroid12.0, etc.)
### Affected platform version
VS 8.10.20 (build 0)
### Description
Trying to migrate to Android Studio native development of my app, and using the same keystore details as used in Visual Studio, how can the app signature be different ? This means users cannot update the app when development switches from VS to Android Studio.
### Steps to Reproduce
1.) create app, create keystore and sign the app
2.) create the same app in Android Studio, sign the app with gradle using same keystore/pass
3) app signatures are different.
### Did you find any workaround?
No
### Relevant log output
_No response_
label: non_defect
text:
signing a release app with the same keystore differs in visual studio vs android studio android application type classic xamarin android etc affected platform version vs build description trying to migrate to android studio native development of my app and using the same keystore details as used in visual studio how can the app signature be different this means users cannot update the app when development switches from vs to android studio steps to reproduce create app create keystore and sign the app create the same app in android studio sign the app with gradle using same keystore pass app signatures are different did you find any workaround no relevant log output no response
binary_label: 0
---
Unnamed: 0: 23,188
id: 3,775,191,372
type: IssuesEvent
created_at: 2016-03-17 12:35:00
repo: igagis/aumiks
repo_url: https://api.github.com/repos/igagis/aumiks
action: closed
title: android
labels: auto-migrated Priority-Medium Type-Defect
body:
```
the project doesnt compile at all on android.
Include to ting are incomplete i believe.
regards
david
```
Original issue reported on code.google.com by `Davidgui...@gmail.com` on 12 Jun 2013 at 5:31
index: 1.0
text_combine:
android - ```
the project doesnt compile at all on android.
Include to ting are incomplete i believe.
regards
david
```
Original issue reported on code.google.com by `Davidgui...@gmail.com` on 12 Jun 2013 at 5:31
label: defect
text:
android the project doesnt compile at all on android include to ting are incomplete i believe regards david original issue reported on code google com by davidgui gmail com on jun at
binary_label: 1
---
Unnamed: 0: 101,867
id: 8,806,555,194
type: IssuesEvent
created_at: 2018-12-27 04:51:36
repo: drussell1974/schemeofwork_web2py_app
repo_url: https://api.github.com/repos/drussell1974/schemeofwork_web2py_app
action: closed
title: Selenium - edit existing Learning Episode
labels: test
body:
_check page elements_
- [x] title and headings
- [x] navigation
_edit_
- [x] create new
- [x] edit existing
- [x] submit invalid
- [x] submit valid
index: 1.0
text_combine:
Selenium - edit existing Learning Episode - _check page elements_
- [x] title and headings
- [x] navigation
_edit_
- [x] create new
- [x] edit existing
- [x] submit invalid
- [x] submit valid
label: non_defect
text:
selenium edit existing learning episode check page elements title and headings navigation edit create new edit existing submit invalid submit valid
binary_label: 0
---
Unnamed: 0: 23,953
id: 3,874,868,800
type: IssuesEvent
created_at: 2016-04-11 22:03:34
repo: ariya/phantomjs
repo_url: https://api.github.com/repos/ariya/phantomjs
action: closed
title: page.cookies not working in 1.6.0
labels: old.Priority-Medium old.Status-New old.Type-Defect
body:
_**[hel...@gmail.com](http://code.google.com/u/107647427892722145766/) commented:**_
> Running the following code (modified from tweet.js):
>
> page.open(encodeURI("http://mobile.twitter.com/baudehlo"), function (status) {
> // Check for page load success
> if (status !== "success") {
> console.log("Unable to access network");
> } else {
> console.log("Cookies: " + JSON.stringify(page.cookies));
> console.log("Document.cookie: " + page.evaluate(function () { return document.cookie}));
> }
> phantom.exit();
> });
>
> The page.cookies output should equal the document.cookie output. Instead we get:
>
> Cookies: []
> Document.cookie: k=208.124.218.243.1340643756484334; guest_id=v1%3A134064375648874627; _mobile_sess=BAh7BzoQX2NzcmZfdG9rZW4iGTM2NmE1NjcyMTNlODBkZWFiNjYyOg9zZXNzaW9uX2lkIiU5NGQ2OTdjNDdhNWE4MjA3Yzg4MDI1MTk3N2JlMWU2NQ%3D%3D--879e0d525ee6b43ebfcad5355f191a05f7d7b96e
>
> Tested on Mac OS X, binary distribution.
**Disclaimer:**
This issue was migrated on 2013-03-15 from the project's former issue tracker on Google Code, [Issue #616](http://code.google.com/p/phantomjs/issues/detail?id=616).
:star2: **7** people had starred this issue at the time of migration.
index: 1.0
text_combine:
page.cookies not working in 1.6.0 - _**[hel...@gmail.com](http://code.google.com/u/107647427892722145766/) commented:**_
> Running the following code (modified from tweet.js):
>
> page.open(encodeURI("http://mobile.twitter.com/baudehlo"), function (status) {
> // Check for page load success
> if (status !== "success") {
> console.log("Unable to access network");
> } else {
> console.log("Cookies: " + JSON.stringify(page.cookies));
> console.log("Document.cookie: " + page.evaluate(function () { return document.cookie}));
> }
> phantom.exit();
> });
>
> The page.cookies output should equal the document.cookie output. Instead we get:
>
> Cookies: []
> Document.cookie: k=208.124.218.243.1340643756484334; guest_id=v1%3A134064375648874627; _mobile_sess=BAh7BzoQX2NzcmZfdG9rZW4iGTM2NmE1NjcyMTNlODBkZWFiNjYyOg9zZXNzaW9uX2lkIiU5NGQ2OTdjNDdhNWE4MjA3Yzg4MDI1MTk3N2JlMWU2NQ%3D%3D--879e0d525ee6b43ebfcad5355f191a05f7d7b96e
>
> Tested on Mac OS X, binary distribution.
**Disclaimer:**
This issue was migrated on 2013-03-15 from the project's former issue tracker on Google Code, [Issue #616](http://code.google.com/p/phantomjs/issues/detail?id=616).
:star2: **7** people had starred this issue at the time of migration.
label: defect
text:
page cookies not working in commented running the following code modified from tweet js page open encodeuri quot function status check for page load success if status quot success quot console log quot unable to access network quot else console log quot cookies quot json stringify page cookies console log quot document cookie quot page evaluate function return document cookie phantom exit the page cookies output should equal the document cookie output instead we get cookies document cookie k guest id mobile sess tested on mac os x binary distribution disclaimer this issue was migrated on from the project s former issue tracker on google code nbsp people had starred this issue at the time of migration
binary_label: 1
---
Unnamed: 0: 24,268
id: 3,946,755,510
type: IssuesEvent
created_at: 2016-04-28 06:47:14
repo: scipy/scipy
repo_url: https://api.github.com/repos/scipy/scipy
action: closed
title: UserWarning: indices array has non-integer dtype (uint64)
labels: defect scipy.sparse
body:
I'd guess, that unsigned types are integers to ;-)
...\scipy-0.13.2.win-amd64-py2.7.egg\scipy\sparse\compressed.py:119: UserWarning: indptr array has non-integer dtype (uint64)
...\scipy-0.13.2.win-amd64-py2.7.egg\scipy\sparse\compressed.py:122: UserWarning: indices array has non-integer dtype (uint64)
Obviously I am generating a sparse matrix using 'indptr' and 'indices' of dtype 'uint64'. It is just a warning, but nevertheless it bothers me.
...\scipy-0.13.2.win-amd64-py2.7.egg\scipy\sparse\compressed.py
Line 116-122:
# index arrays should have integer data types
if self.indptr.dtype.kind != 'i':
warn("indptr array has non-integer dtype (%s)"
% self.indptr.dtype.name)
if self.indices.dtype.kind != 'i':
warn("indices array has non-integer dtype (%s)"
% self.indices.dtype.name)
It would be more logical to issue a warning when those are signed types, because indices can't be negative. At least 'u' should be added to that validation.
index: 1.0
text_combine:
UserWarning: indices array has non-integer dtype (uint64) - I'd guess, that unsigned types are integers to ;-)
...\scipy-0.13.2.win-amd64-py2.7.egg\scipy\sparse\compressed.py:119: UserWarning: indptr array has non-integer dtype (uint64)
...\scipy-0.13.2.win-amd64-py2.7.egg\scipy\sparse\compressed.py:122: UserWarning: indices array has non-integer dtype (uint64)
Obviously I am generating a sparse matrix using 'indptr' and 'indices' of dtype 'uint64'. It is just a warning, but nevertheless it bothers me.
...\scipy-0.13.2.win-amd64-py2.7.egg\scipy\sparse\compressed.py
Line 116-122:
# index arrays should have integer data types
if self.indptr.dtype.kind != 'i':
warn("indptr array has non-integer dtype (%s)"
% self.indptr.dtype.name)
if self.indices.dtype.kind != 'i':
warn("indices array has non-integer dtype (%s)"
% self.indices.dtype.name)
It would be more logical to issue a warning when those are signed types, because indices can't be negative. At least 'u' should be added to that validation.
label: defect
text:
userwarning indices array has non integer dtype i d guess that unsigned types are integers to scipy win egg scipy sparse compressed py userwarning indptr array has non integer dtype scipy win egg scipy sparse compressed py userwarning indices array has non integer dtype obviously i am generating a sparse matrix using indptr and indices of dtype it is just a warning but nevertheless it bothers me scipy win egg scipy sparse compressed py line index arrays should have integer data types if self indptr dtype kind i warn indptr array has non integer dtype s self indptr dtype name if self indices dtype kind i warn indices array has non integer dtype s self indices dtype name it would be more logical to issue a warning when those are signed types because indices can t be negative at least u should be added to that validation
| 1
|
186,570
| 14,399,231,720
|
IssuesEvent
|
2020-12-03 10:38:34
|
ethereum/solidity
|
https://api.github.com/repos/ethereum/solidity
|
closed
|
yulInterpreter crashes on infinite recursion
|
bug :bug: should compile without error testing :hammer:
|
Found by ossfuzz (13647 and 13811)
```
{
function f() {
f()
}
f()
}
```
The problem manifests in `Interpreter::openScope` in `test/tools/yulInterpreter/Interpreter.h`.
|
1.0
|
yulInterpreter crashes on infinite recursion - Found by ossfuzz (13647 and 13811)
```
{
function f() {
f()
}
f()
}
```
The problem manifests in `Interpreter::openScope` in `test/tools/yulInterpreter/Interpreter.h`.
|
non_defect
|
yulinterpreter crashes on infinite recursion found by ossfuzz and function f f f the problem manifests in interpreter openscope in test tools yulinterpreter interpreter h
| 0
|
1,591
| 2,649,223,792
|
IssuesEvent
|
2015-03-14 18:08:38
|
jquery/jquery-mobile
|
https://api.github.com/repos/jquery/jquery-mobile
|
closed
|
hoverDelay setting doesn't do anything
|
Remove deprecated code
|
```buttonMarkup.hoverDelay``` is documented at http://api.jquerymobile.com/global-config/ and it used to perform as documented in jQuery Mobile 1.3. However, in 1.4 it has no effect. The only place ```hoverDelay``` appears in the 1.4 code is:
buttonMarkup: {
hoverDelay: 200
},
So I guess either the code that uses ```hoverDelay``` needs to be added back in, or the setting needs to be removed entirely (and the API docs updated accordingly).
On a related note: The API docs mention that ```buttonMarkup.hoverDelay``` is deprecated and recommend using ```$.mobile.hoverDelay``` instead. However ```$.mobile.hoverDelay``` isn't in the code at all, and therefore obviously doesn't do anything either.
|
1.0
|
hoverDelay setting doesn't do anything - ```buttonMarkup.hoverDelay``` is documented at http://api.jquerymobile.com/global-config/ and it used to perform as documented in jQuery Mobile 1.3. However, in 1.4 it has no effect. The only place ```hoverDelay``` appears in the 1.4 code is:
buttonMarkup: {
hoverDelay: 200
},
So I guess either the code that uses ```hoverDelay``` needs to be added back in, or the setting needs to be removed entirely (and the API docs updated accordingly).
On a related note: The API docs mention that ```buttonMarkup.hoverDelay``` is deprecated and recommend using ```$.mobile.hoverDelay``` instead. However ```$.mobile.hoverDelay``` isn't in the code at all, and therefore obviously doesn't do anything either.
|
non_defect
|
hoverdelay setting doesn t do anything buttonmarkup hoverdelay is documented at and it used to perform as documented in jquery mobile however in it has no effect the only place hoverdelay appears in the code is buttonmarkup hoverdelay so i guess either the code that uses hoverdelay needs to be added back in or the setting needs to be removed entirely and the api docs updated accordingly on a related note the api docs mention that buttonmarkup hoverdelay is deprecated and recommend using mobile hoverdelay instead however mobile hoverdelay isn t in the code at all and therefore obviously doesn t do anything either
| 0
|
7,451
| 3,975,917,891
|
IssuesEvent
|
2016-05-05 08:49:26
|
FakeItEasy/FakeItEasy
|
https://api.github.com/repos/FakeItEasy/FakeItEasy
|
opened
|
Add an API approval test
|
build in-progress P2
|
Using https://github.com/approvals/ApprovalTests.Net and https://www.nuget.org/packages/PublicApiGenerator/
This will allow us to see the effect of any change to the public API.
Initially, at least, I don't think we should include this in the default rake tasks, since the management of this test will be a little different to usual and I think it will add unwanted friction for contributors. We can still eyeball PR as we do know to spot breaking changes, and if unsure, we can pull the changes down locally and run the approval test (`bundle exec rake approve`). We also have the option of setting up a second build config which runs this test and make it a "not required" check in GitHub for merging a PR. I.e. if a contributor changes the public API in a non-breaking way, we can eyeball the results of the failed build and verify that, merge the PR, and take care of updating the test later.
|
1.0
|
Add an API approval test - Using https://github.com/approvals/ApprovalTests.Net and https://www.nuget.org/packages/PublicApiGenerator/
This will allow us to see the effect of any change to the public API.
Initially, at least, I don't think we should include this in the default rake tasks, since the management of this test will be a little different to usual and I think it will add unwanted friction for contributors. We can still eyeball PR as we do know to spot breaking changes, and if unsure, we can pull the changes down locally and run the approval test (`bundle exec rake approve`). We also have the option of setting up a second build config which runs this test and make it a "not required" check in GitHub for merging a PR. I.e. if a contributor changes the public API in a non-breaking way, we can eyeball the results of the failed build and verify that, merge the PR, and take care of updating the test later.
|
non_defect
|
add an api approval test using and this will allow us to see the effect of any change to the public api initially at least i don t think we should include this in the default rake tasks since the management of this test will be a little different to usual and i think it will add unwanted friction for contributors we can still eyeball pr as we do know to spot breaking changes and if unsure we can pull the changes down locally and run the approval test bundle exec rake approve we also have the option of setting up a second build config which runs this test and make it a not required check in github for merging a pr i e if a contributor changes the public api in a non breaking way we can eyeball the results of the failed build and verify that merge the pr and take care of updating the test later
| 0
|
54,780
| 13,925,989,654
|
IssuesEvent
|
2020-10-21 17:38:07
|
AlfrescoLabs/alfresco-environment-validation
|
https://api.github.com/repos/AlfrescoLabs/alfresco-environment-validation
|
reopened
|
EVT should validate that the database can accept at least 300 connections
|
Priority-High Type-Defect auto-migrated
|
```
A single Alfresco cluster node is capable of concurrently requiring at least
275 database connections, and therefore the EVT should validate that the
database supports at least that many concurrent connections.
Note: relates to https://issues.alfresco.com/jira/browse/MNT-9899
```
Original issue reported on code.google.com by `peter.mo...@alfresco.com` on 20 May 2014 at 4:58
|
1.0
|
EVT should validate that the database can accept at least 300 connections - ```
A single Alfresco cluster node is capable of concurrently requiring at least
275 database connections, and therefore the EVT should validate that the
database supports at least that many concurrent connections.
Note: relates to https://issues.alfresco.com/jira/browse/MNT-9899
```
Original issue reported on code.google.com by `peter.mo...@alfresco.com` on 20 May 2014 at 4:58
|
defect
|
evt should validate that the database can accept at least connections a single alfresco cluster node is capable of concurrently requiring at least database connections and therefore the evt should validate that the database supports at least that many concurrent connections note relates to original issue reported on code google com by peter mo alfresco com on may at
| 1
|
19,379
| 6,718,370,829
|
IssuesEvent
|
2017-10-15 12:01:34
|
azerothcore/azerothcore-wotlk
|
https://api.github.com/repos/azerothcore/azerothcore-wotlk
|
closed
|
Cmake Issuse
|
type: build type: enhancement type: question
|
Hello,
I did everything according to the instructions and gives me a problem when cmake.
>
> "CMake Error at C:/Program Files/CMake/share/cmake-3.7/Modules/FindPackageHandleStandardArgs.cmake:138 (message):
> Could NOT find OpenSSL (missing: OPENSSL_LIBRARIES OPENSSL_INCLUDE_DIR)
> Call Stack (most recent call first):"
Even a little confused and maybe someone hovers me what I have to do to solve the problem?
|
1.0
|
Cmake Issuse - Hello,
I did everything according to the instructions and gives me a problem when cmake.
>
> "CMake Error at C:/Program Files/CMake/share/cmake-3.7/Modules/FindPackageHandleStandardArgs.cmake:138 (message):
> Could NOT find OpenSSL (missing: OPENSSL_LIBRARIES OPENSSL_INCLUDE_DIR)
> Call Stack (most recent call first):"
Even a little confused and maybe someone hovers me what I have to do to solve the problem?
|
non_defect
|
cmake issuse hello i did everything according to the instructions and gives me a problem when cmake cmake error at c program files cmake share cmake modules findpackagehandlestandardargs cmake message could not find openssl missing openssl libraries openssl include dir call stack most recent call first even a little confused and maybe someone hovers me what i have to do to solve the problem
| 0
|
11,898
| 9,488,990,476
|
IssuesEvent
|
2019-04-22 21:05:55
|
internetarchive/fatcat
|
https://api.github.com/repos/internetarchive/fatcat
|
closed
|
Example of a (trivial) editgroup review bot
|
infrastructure
|
Something in python that looks at submitted editgroups and annotates them (pass/fail) if appropriate.
API support should already exist; part of this task is to flush out any missing features.
|
1.0
|
Example of a (trivial) editgroup review bot - Something in python that looks at submitted editgroups and annotates them (pass/fail) if appropriate.
API support should already exist; part of this task is to flush out any missing features.
|
non_defect
|
example of a trivial editgroup review bot something in python that looks at submitted editgroups and annotates them pass fail if appropriate api support should already exist part of this task is to flush out any missing features
| 0
|
185,527
| 15,024,601,277
|
IssuesEvent
|
2021-02-01 19:52:27
|
fergiemcdowall/search-index
|
https://api.github.com/repos/fergiemcdowall/search-index
|
closed
|
Fix `docs/examples`
|
documentation
|
Fix all the code examples to work with the newest version of search-index (v2.1.0). Valuable to get people going I think.
|
1.0
|
Fix `docs/examples` - Fix all the code examples to work with the newest version of search-index (v2.1.0). Valuable to get people going I think.
|
non_defect
|
fix docs examples fix all the code examples to work with the newest version of search index valuable to get people going i think
| 0
|
532,169
| 15,530,942,056
|
IssuesEvent
|
2021-03-13 21:12:58
|
eclipse-ee4j/cargotracker
|
https://api.github.com/repos/eclipse-ee4j/cargotracker
|
closed
|
Provide more details when 'registration failed'
|
Priority: Minor enhancement good first issue help wanted
|
When logging an event failed, the only message sent to the user is "registration failed". Is this failure due a domain issue or to a technical issue?
User need to get more useful feedback.
|
1.0
|
Provide more details when 'registration failed' - When logging an event failed, the only message sent to the user is "registration failed". Is this failure due a domain issue or to a technical issue?
User need to get more useful feedback.
|
non_defect
|
provide more details when registration failed when logging an event failed the only message sent to the user is registration failed is this failure due a domain issue or to a technical issue user need to get more useful feedback
| 0
|
42,388
| 11,011,221,684
|
IssuesEvent
|
2019-12-04 15:55:44
|
contao/contao
|
https://api.github.com/repos/contao/contao
|
closed
|
Unterscheidung Besucher und Mitglied im Hilfe-Assistent Seitentyp
|
defect
|
Im Hilfe-Assistenten zum Feld Seitentyp werden derzeit die Begriffe Besucher (visitor) und Benutzer (user) verwendet.
Der Begriff Benutzer (user) wird jedoch zum einen auch in Fällen verwendet, wo eigentlich Besucher im Allgemeinen gemeint sind, und in den anderen Fällen wäre es sinnvoll, diesen durch den spezifischeren Begriff Mitglied (member) zu ersetzen.
Page type | Description
-- | --
Regular page | A regular page contains articles and content elements. It is the default page type.
Internal redirect | This type of page automatically forwards visitors to another page within the site structure.
External redirect | This type of page automatically redirects visitors to an external website. It works like a hyperlink.
Website root | This type of page marks the starting point of a new website within the site structure.
Logout | This type of page automatically logs out the ~~user~~ _member_.
401 Not authenticated | If a ~~user~~ _visitor_ requests a protected page without being authenticated, a 401 error page will be loaded instead.
403 Access denied | If a ~~user~~ _member_ requests a protected page without permission, a 403 error page will be loaded instead.
404 Page not found | If a ~~user~~ _visitor_ requests a non-existent page, a 404 error page will be loaded instead.
|
1.0
|
Unterscheidung Besucher und Mitglied im Hilfe-Assistent Seitentyp - Im Hilfe-Assistenten zum Feld Seitentyp werden derzeit die Begriffe Besucher (visitor) und Benutzer (user) verwendet.
Der Begriff Benutzer (user) wird jedoch zum einen auch in Fällen verwendet, wo eigentlich Besucher im Allgemeinen gemeint sind, und in den anderen Fällen wäre es sinnvoll, diesen durch den spezifischeren Begriff Mitglied (member) zu ersetzen.
Page type | Description
-- | --
Regular page | A regular page contains articles and content elements. It is the default page type.
Internal redirect | This type of page automatically forwards visitors to another page within the site structure.
External redirect | This type of page automatically redirects visitors to an external website. It works like a hyperlink.
Website root | This type of page marks the starting point of a new website within the site structure.
Logout | This type of page automatically logs out the ~~user~~ _member_.
401 Not authenticated | If a ~~user~~ _visitor_ requests a protected page without being authenticated, a 401 error page will be loaded instead.
403 Access denied | If a ~~user~~ _member_ requests a protected page without permission, a 403 error page will be loaded instead.
404 Page not found | If a ~~user~~ _visitor_ requests a non-existent page, a 404 error page will be loaded instead.
|
defect
|
unterscheidung besucher und mitglied im hilfe assistent seitentyp im hilfe assistenten zum feld seitentyp werden derzeit die begriffe besucher visitor und benutzer user verwendet der begriff benutzer user wird jedoch zum einen auch in fällen verwendet wo eigentlich besucher im allgemeinen gemeint sind und in den anderen fällen wäre es sinnvoll diesen durch den spezifischeren begriff mitglied member zu ersetzen page type description regular page a regular page contains articles and content elements it is the default page type internal redirect this type of page automatically forwards visitors to another page within the site structure external redirect this type of page automatically redirects visitors to an external website it works like a hyperlink website root this type of page marks the starting point of a new website within the site structure logout this type of page automatically logs out the user member not authenticated if a user visitor requests a protected page without being authenticated a error page will be loaded instead access denied if a user member requests a protected page without permission a error page will be loaded instead page not found if a user visitor requests a non existent page a error page will be loaded instead
| 1
|
711,613
| 24,469,647,292
|
IssuesEvent
|
2022-10-07 18:26:51
|
containrrr/watchtower
|
https://api.github.com/repos/containrrr/watchtower
|
opened
|
Watchtower updates even though a monitor is set
|
Type: Bug Priority: Medium Status: Available
|
### Describe the bug
I set in the container under labels:
- com.centurylinklabs.watchtower.monitor-only="true"
but Watchtower continues to update.
### Steps to reproduce
1.Definition under container: - com.centurylinklabs.watchtower.monitor-only="true"
### Expected behavior
Just a notification and not an update to the container
### Screenshots
_No response_
### Environment
- Docker Version
### Your logs
```text
There is no log
```
### Additional context
_No response_
|
1.0
|
Watchtower updates even though a monitor is set - ### Describe the bug
I set in the container under labels:
- com.centurylinklabs.watchtower.monitor-only="true"
but Watchtower continues to update.
### Steps to reproduce
1.Definition under container: - com.centurylinklabs.watchtower.monitor-only="true"
### Expected behavior
Just a notification and not an update to the container
### Screenshots
_No response_
### Environment
- Docker Version
### Your logs
```text
There is no log
```
### Additional context
_No response_
|
non_defect
|
watchtower updates even though a monitor is set describe the bug i set in the container under labels com centurylinklabs watchtower monitor only true but watchtower continues to update steps to reproduce definition under container com centurylinklabs watchtower monitor only true expected behavior just a notification and not an update to the container screenshots no response environment docker version your logs text there is no log additional context no response
| 0
|
23,690
| 3,851,865,585
|
IssuesEvent
|
2016-04-06 05:27:56
|
GPF/imame4all
|
https://api.github.com/repos/GPF/imame4all
|
closed
|
ipad 4 with mame4all
|
auto-migrated Priority-Medium Type-Defect
|
```
mk games runs full speed on ipad 4 but not full speed on android way android
hes beter hardware
```
Original issue reported on code.google.com by `markocur...@gmail.com` on 18 Nov 2012 at 1:30
|
1.0
|
ipad 4 with mame4all - ```
mk games runs full speed on ipad 4 but not full speed on android way android
hes beter hardware
```
Original issue reported on code.google.com by `markocur...@gmail.com` on 18 Nov 2012 at 1:30
|
defect
|
ipad with mk games runs full speed on ipad but not full speed on android way android hes beter hardware original issue reported on code google com by markocur gmail com on nov at
| 1
|
38,853
| 8,972,831,098
|
IssuesEvent
|
2019-01-29 19:20:47
|
Automattic/wp-calypso
|
https://api.github.com/repos/Automattic/wp-calypso
|
opened
|
Ribbon: shadow using unexpected color
|
Color Schemes Components [Type] Defect
|
<!-- Thanks for contributing to Calypso! Pick a clear title ("Editor: add spell check") and proceed. -->
#### Steps to reproduce
1. Starting at URL:https://wpcalypso.wordpress.com/devdocs/design/ribbon
2. Notice that the shadow of the ribbon is blue
It should be using the accent color for the shadow, probably --color-accent-dark.
#### Screenshot / Video

|
1.0
|
Ribbon: shadow using unexpected color - <!-- Thanks for contributing to Calypso! Pick a clear title ("Editor: add spell check") and proceed. -->
#### Steps to reproduce
1. Starting at URL:https://wpcalypso.wordpress.com/devdocs/design/ribbon
2. Notice that the shadow of the ribbon is blue
It should be using the accent color for the shadow, probably --color-accent-dark.
#### Screenshot / Video

|
defect
|
ribbon shadow using unexpected color steps to reproduce starting at url notice that the shadow of the ribbon is blue it should be using the accent color for the shadow probably color accent dark screenshot video
| 1
|
319,362
| 9,742,787,018
|
IssuesEvent
|
2019-06-02 20:05:06
|
semperfiwebdesign/all-in-one-seo-pack
|
https://api.github.com/repos/semperfiwebdesign/all-in-one-seo-pack
|
opened
|
PHP Warning: count(): Parameter must be an array or an object that implements Countable in wp-includes\post-template.php on line 293
|
Needs Reproducing Priority | High
|
https://pastebin.com/fEg8pu5R
|
1.0
|
PHP Warning: count(): Parameter must be an array or an object that implements Countable in wp-includes\post-template.php on line 293 - https://pastebin.com/fEg8pu5R
|
non_defect
|
php warning count parameter must be an array or an object that implements countable in wp includes post template php on line
| 0
|
308,283
| 9,437,336,594
|
IssuesEvent
|
2019-04-13 14:32:33
|
cs2103-ay1819s2-w13-2/main
|
https://api.github.com/repos/cs2103-ay1819s2-w13-2/main
|
closed
|
As a CCA main committee member, I want to list the activities
|
priority.High type.Story
|
so that I can see what are the activities
|
1.0
|
As a CCA main committee member, I want to list the activities - so that I can see what are the activities
|
non_defect
|
as a cca main committee member i want to list the activities so that i can see what are the activities
| 0
|
60,442
| 17,023,426,146
|
IssuesEvent
|
2021-07-03 01:58:18
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Language of blog comment notification
|
Component: website Priority: minor Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 1.38pm, Wednesday, 17th June 2009]**
The subject of diary comment notifications should be in the language of the blog post, not the language of the commentor's default language. Steps to reproduce:
1) Post a diary entry in English
2) A second user (with default language=Spanish) posts a comment on the diary entry
3) The notification has a subject of (eg) "[OpenStreetMap] PerroVerd ha comentado en tu entrada de diario"
The rest of the email is in English, which presumably means it isn't being translated.
|
1.0
|
Language of blog comment notification - **[Submitted to the original trac issue database at 1.38pm, Wednesday, 17th June 2009]**
The subject of diary comment notifications should be in the language of the blog post, not the language of the commentor's default language. Steps to reproduce:
1) Post a diary entry in English
2) A second user (with default language=Spanish) posts a comment on the diary entry
3) The notification has a subject of (eg) "[OpenStreetMap] PerroVerd ha comentado en tu entrada de diario"
The rest of the email is in English, which presumably means it isn't being translated.
|
defect
|
language of blog comment notification the subject of diary comment notifications should be in the language of the blog post not the language of the commentor s default language steps to reproduce post a diary entry in english a second user with default language spanish posts a comment on the diary entry the notification has a subject of eg perroverd ha comentado en tu entrada de diario the rest of the email is in english which presumably means it isn t being translated
| 1
|
216,886
| 16,673,060,452
|
IssuesEvent
|
2021-06-07 13:18:02
|
brotkrueml/schema
|
https://api.github.com/repos/brotkrueml/schema
|
closed
|
Add node identifier view helpers
|
documentation feature
|
With the implementation of node identifiers (#65) nodes consisting only of the id keyword can be constructed programmatically. This should also be possible within a template using the view helpers.
- [x] A NodeIdentifierViewHelper is available.
- [x] A BlankNodeIdentifierViewHelper is available.
- [x] The view helpers can be assigned to every property in a type view helper.
- [x] In the rendered JSON-LD the node of the property consists only of the id keyword.
- [x] The usage is described in the documentation
|
1.0
|
Add node identifier view helpers - With the implementation of node identifiers (#65) nodes consisting only of the id keyword can be constructed programmatically. This should also be possible within a template using the view helpers.
- [x] A NodeIdentifierViewHelper is available.
- [x] A BlankNodeIdentifierViewHelper is available.
- [x] The view helpers can be assigned to every property in a type view helper.
- [x] In the rendered JSON-LD the node of the property consists only of the id keyword.
- [x] The usage is described in the documentation
|
non_defect
|
add node identifier view helpers with the implementation of node identifiers nodes consisting only of the id keyword can be constructed programmatically this should also be possible within a template using the view helpers a nodeidentifierviewhelper is available a blanknodeidentifierviewhelper is available the view helpers can be assigned to every property in a type view helper in the rendered json ld the node of the property consists only of the id keyword the usage is described in the documentation
| 0
|
278,364
| 30,702,288,617
|
IssuesEvent
|
2023-07-27 01:17:46
|
Trinadh465/linux-4.1.15_CVE-2023-28772
|
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-28772
|
closed
|
CVE-2020-12770 (Medium) detected in linuxlinux-4.6 - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2020-12770 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-28772/commit/943a37114977025aa089143316b489c8146cc673">943a37114977025aa089143316b489c8146cc673</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/sg.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/sg.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel through 5.6.11. sg_write lacks an sg_remove_request call in a certain failure case, aka CID-83c6f2390040.
<p>Publish Date: 2020-05-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-12770>CVE-2020-12770</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12770">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12770</a></p>
<p>Release Date: 2020-07-29</p>
<p>Fix Resolution: v5.7-rc3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-12770 (Medium) detected in linuxlinux-4.6 - autoclosed - ## CVE-2020-12770 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-28772/commit/943a37114977025aa089143316b489c8146cc673">943a37114977025aa089143316b489c8146cc673</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/sg.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/sg.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel through 5.6.11. sg_write lacks an sg_remove_request call in a certain failure case, aka CID-83c6f2390040.
<p>Publish Date: 2020-05-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-12770>CVE-2020-12770</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12770">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12770</a></p>
<p>Release Date: 2020-07-29</p>
<p>Fix Resolution: v5.7-rc3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files drivers scsi sg c drivers scsi sg c vulnerability details an issue was discovered in the linux kernel through sg write lacks an sg remove request call in a certain failure case aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
25,239
| 12,229,009,778
|
IssuesEvent
|
2020-05-03 21:59:19
|
Azure/azure-sdk-for-net
|
https://api.github.com/repos/Azure/azure-sdk-for-net
|
closed
|
Improve subscription creation methods
|
Client Service Bus
|
Currently, we can create a sender from one method that has a queueOrTopicName parameter.
CreateSender(string queueOrTopicName).
We then have CreateReceiver/Processor methods that expect either a queue, or a combination of topic and subscription names. In UX studies, most users ended up supplying a topicName in the queueName overload, and not including the subscription. This is probably at least in part due to the asymmetry with the way the CreateSender method works (it takes either queue or topic in one param).
One idea would be to have Create(Subscription|Topic)Receiver/Processor methods instead of relying on overloads in CreateReceiver/Processor.
|
1.0
|
Improve subscription creation methods - Currently, we can create a sender from one method that has a queueOrTopicName parameter.
CreateSender(string queueOrTopicName).
We then have CreateReceiver/Processor methods that expect either a queue, or a combination of topic and subscription names. In UX studies, most users ended up supplying a topicName in the queueName overload, and not including the subscription. This is probably at least in part due to the asymmetry with the way the CreateSender method works (it takes either queue or topic in one param).
One idea would be to have Create(Subscription|Topic)Receiver/Processor methods instead of relying on overloads in CreateReceiver/Processor.
|
non_defect
|
improve subscription creation methods currently we can create a sender from one method that has a queueortopicname parameter createsender string queueortopicname we then have createreceiver processor methods that expect either a queue or a combination of topic and subscription names in ux studies most users ended up supplying a topicname in the queuename overload and not including the subscription this is probably at least in part due to the asymmetry with the way the createsender method works it takes either queue or topic in one param one idea would be to have create subscription topic receiver processor methods instead of relying on overloads in createreceiver processor
| 0
|
29,746
| 5,868,042,535
|
IssuesEvent
|
2017-05-14 08:37:30
|
bridgedotnet/Bridge
|
https://api.github.com/repos/bridgedotnet/Bridge
|
closed
|
Bridge 16-beta fails to load in Internet Explorer 11
|
defect in progress
|
In IE11, Bridge 16 beta crashes at startup (I've not tried any other IE versions, IE11 is the only version we're interested in).
### Steps To Reproduce
- Load https://deck.net/ in Internet Explorer
The error is:
```text
Cannot modify non-writable property 'name'
```
It occurs on this code:
```js
if (typeof member === "function" && name !== "$main") {
Object.defineProperty(member, isFF ? "displayName" : "name", { value: className + "." + name, writable: true });
}
```
### See Also
* https://forums.bridge.net/forum/bridge-net-pro/bugs/4171-blackberry-10-browser-bridge-js
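The crash comes from calling `Object.defineProperty` on a function's `name`, which IE11 reports as non-writable. A defensive pattern is to attempt the assignment and tolerate the failure; the sketch below illustrates that guard in Python (the actual fix belongs in Bridge's emitted JavaScript, e.g. a try/catch or a writability check before defining the property).

```python
def safe_define(obj, attr, value):
    """Assign attr on obj, tolerating environments where the attribute
    is read-only. Mirrors wrapping Object.defineProperty in a try/catch
    so one locked-down property cannot crash startup."""
    try:
        setattr(obj, attr, value)
        return True
    except (AttributeError, TypeError):
        # Attribute is read-only here; skip instead of raising.
        return False


class Frozen:
    # No writable instance attributes at all, standing in for
    # IE11's non-writable Function.name.
    __slots__ = ()
```

With this guard, environments that allow the write still get the friendly display name, and environments that forbid it simply skip the cosmetic rename.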
|
1.0
|
Bridge 16-beta fails to load in Internet Explorer 11 - In IE11, Bridge 16 beta crashes at startup (I've not tried any other IE versions, IE11 is the only version we're interested in).
### Steps To Reproduce
- Load https://deck.net/ in Internet Explorer
The error is:
```text
Cannot modify non-writable property 'name'
```
It occurs on this code:
```js
if (typeof member === "function" && name !== "$main") {
Object.defineProperty(member, isFF ? "displayName" : "name", { value: className + "." + name, writable: true });
}
```
### See Also
* https://forums.bridge.net/forum/bridge-net-pro/bugs/4171-blackberry-10-browser-bridge-js
|
defect
|
bridge beta fails to load in internet explorer in bridge beta crashes at startup i ve not tried any other ie versions is the only version we re interested in steps to reproduce load in internet explorer the error is text cannot modify non writable property name it occurs on this code js if typeof member function name main object defineproperty member isff displayname name value classname name writable true see also
| 1
|
79,704
| 28,497,719,345
|
IssuesEvent
|
2023-04-18 15:13:00
|
vector-im/element-desktop
|
https://api.github.com/repos/vector-im/element-desktop
|
closed
|
Unable to launch Element in a macOS virtual machine
|
T-Defect
|
### Steps to reproduce
1. Set up a macOS Monterey or Ventura guest virtual machine on a macOS host with an M1 processor
2. Download Element.dmg from element.io and install
3. Run Element.app
### Outcome
#### What did you expect?
Expected the Element.app to start
#### What happened instead?
The application crashes on launch and produced a crash log:
```
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x000000011475d54c
Termination Reason: Namespace SIGNAL, Code 5 Trace/BPT trap: 5
Terminating Process: exc handler [956]
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 Electron Framework 0x11475d54c v8::internal::wasm::WasmCodeManager::WasmCodeManager() + 92
1 Electron Framework 0x11475d538 v8::internal::wasm::WasmCodeManager::WasmCodeManager() + 72
2 Electron Framework 0x11476ce94 v8::internal::wasm::WasmEngine::InitializeOncePerProcess() + 44
3 Electron Framework 0x114360068 v8::internal::IsolateAllocator::GetPtrComprCage() const + 2188
4 Electron Framework 0x114097684 v8::V8::Initialize(int) + 32
5 Electron Framework 0x116f776cc v8::internal::SetupIsolateDelegate::SetupHeap(v8::internal::Heap*) + 37404432
6 Electron Framework 0x116f75924 v8::internal::SetupIsolateDelegate::SetupHeap(v8::internal::Heap*) + 37396840
7 Electron Framework 0x11338aa04 v8::Signature::New(v8::Isolate*, v8::Local<v8::FunctionTemplate>) + 10416
8 Electron Framework 0x11338a7cc v8::Signature::New(v8::Isolate*, v8::Local<v8::FunctionTemplate>) + 9848
9 Electron Framework 0x1133765a4 v8::internal::compiler::RawMachineAssembler::TargetParameter() + 8816
10 Electron Framework 0x114ef8df4 v8::internal::SetupIsolateDelegate::SetupHeap(v8::internal::Heap*) + 3331640
11 Electron Framework 0x114efc19c v8::internal::SetupIsolateDelegate::SetupHeap(v8::internal::Heap*) + 3344864
12 Electron Framework 0x114ef882c v8::internal::SetupIsolateDelegate::SetupHeap(v8::internal::Heap*) + 3330160
13 Electron Framework 0x113547930 v8::internal::compiler::BasicBlock::set_loop_header(v8::internal::compiler::BasicBlock*) + 13512
14 Electron Framework 0x113548a64 v8::internal::compiler::BasicBlock::set_loop_header(v8::internal::compiler::BasicBlock*) + 17916
15 Electron Framework 0x1135485e0 v8::internal::compiler::BasicBlock::set_loop_header(v8::internal::compiler::BasicBlock*) + 16760
16 Electron Framework 0x113546ff0 v8::internal::compiler::BasicBlock::set_loop_header(v8::internal::compiler::BasicBlock*) + 11144
17 Electron Framework 0x1135474c0 v8::internal::compiler::BasicBlock::set_loop_header(v8::internal::compiler::BasicBlock*) + 12376
18 Electron Framework 0x1132b3b68 ElectronMain + 128
19 dyld 0x1812cfe50 start + 2544
```
[Complete crash log](https://github.com/vector-im/element-web/files/9909141/element.txt)
Note: it does launch if Rosetta is enabled for the executable.
Personal note: I am running Element in a VM because the homeserver I need to connect to requires a VPN and I don't want to enable the VPN system-wide on my main machine because it's extremely slow and because of privacy concerns.
### Operating system
macOS Ventura 13.0 in a virtual machine
### Application version
1.11.12
### How did you install the app?
https://element.io/get-started
### Homeserver
matrix.org
### Will you send logs?
No
|
1.0
|
Unable to launch Element in a macOS virtual machine - ### Steps to reproduce
1. Set up a macOS Monterey or Ventura guest virtual machine on a macOS host with an M1 processor
2. Download Element.dmg from element.io and install
3. Run Element.app
### Outcome
#### What did you expect?
Expected the Element.app to start
#### What happened instead?
The application crashes on launch and produced a crash log:
```
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x000000011475d54c
Termination Reason: Namespace SIGNAL, Code 5 Trace/BPT trap: 5
Terminating Process: exc handler [956]
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 Electron Framework 0x11475d54c v8::internal::wasm::WasmCodeManager::WasmCodeManager() + 92
1 Electron Framework 0x11475d538 v8::internal::wasm::WasmCodeManager::WasmCodeManager() + 72
2 Electron Framework 0x11476ce94 v8::internal::wasm::WasmEngine::InitializeOncePerProcess() + 44
3 Electron Framework 0x114360068 v8::internal::IsolateAllocator::GetPtrComprCage() const + 2188
4 Electron Framework 0x114097684 v8::V8::Initialize(int) + 32
5 Electron Framework 0x116f776cc v8::internal::SetupIsolateDelegate::SetupHeap(v8::internal::Heap*) + 37404432
6 Electron Framework 0x116f75924 v8::internal::SetupIsolateDelegate::SetupHeap(v8::internal::Heap*) + 37396840
7 Electron Framework 0x11338aa04 v8::Signature::New(v8::Isolate*, v8::Local<v8::FunctionTemplate>) + 10416
8 Electron Framework 0x11338a7cc v8::Signature::New(v8::Isolate*, v8::Local<v8::FunctionTemplate>) + 9848
9 Electron Framework 0x1133765a4 v8::internal::compiler::RawMachineAssembler::TargetParameter() + 8816
10 Electron Framework 0x114ef8df4 v8::internal::SetupIsolateDelegate::SetupHeap(v8::internal::Heap*) + 3331640
11 Electron Framework 0x114efc19c v8::internal::SetupIsolateDelegate::SetupHeap(v8::internal::Heap*) + 3344864
12 Electron Framework 0x114ef882c v8::internal::SetupIsolateDelegate::SetupHeap(v8::internal::Heap*) + 3330160
13 Electron Framework 0x113547930 v8::internal::compiler::BasicBlock::set_loop_header(v8::internal::compiler::BasicBlock*) + 13512
14 Electron Framework 0x113548a64 v8::internal::compiler::BasicBlock::set_loop_header(v8::internal::compiler::BasicBlock*) + 17916
15 Electron Framework 0x1135485e0 v8::internal::compiler::BasicBlock::set_loop_header(v8::internal::compiler::BasicBlock*) + 16760
16 Electron Framework 0x113546ff0 v8::internal::compiler::BasicBlock::set_loop_header(v8::internal::compiler::BasicBlock*) + 11144
17 Electron Framework 0x1135474c0 v8::internal::compiler::BasicBlock::set_loop_header(v8::internal::compiler::BasicBlock*) + 12376
18 Electron Framework 0x1132b3b68 ElectronMain + 128
19 dyld 0x1812cfe50 start + 2544
```
[Complete crash log](https://github.com/vector-im/element-web/files/9909141/element.txt)
Note: it does launch if Rosetta is enabled for the executable.
Personal note: I am running Element in a VM because the homeserver I need to connect to requires a VPN and I don't want to enable the VPN system-wide on my main machine because it's extremely slow and because of privacy concerns.
### Operating system
macOS Ventura 13.0 in a virtual machine
### Application version
1.11.12
### How did you install the app?
https://element.io/get-started
### Homeserver
matrix.org
### Will you send logs?
No
|
defect
|
unable to launch element in a macos virtual machine steps to reproduce set up a macos monterey or ventura guest virtual machine on a macos host with an processor download element dmg from element io and install run element app outcome what did you expect expected the element app to start what happened instead the application crashes on launch and produced a crash log crashed thread dispatch queue com apple main thread exception type exc breakpoint sigtrap exception codes termination reason namespace signal code trace bpt trap terminating process exc handler thread crashed dispatch queue com apple main thread electron framework internal wasm wasmcodemanager wasmcodemanager electron framework internal wasm wasmcodemanager wasmcodemanager electron framework internal wasm wasmengine initializeonceperprocess electron framework internal isolateallocator getptrcomprcage const electron framework initialize int electron framework internal setupisolatedelegate setupheap internal heap electron framework internal setupisolatedelegate setupheap internal heap electron framework signature new isolate local electron framework signature new isolate local electron framework internal compiler rawmachineassembler targetparameter electron framework internal setupisolatedelegate setupheap internal heap electron framework internal setupisolatedelegate setupheap internal heap electron framework internal setupisolatedelegate setupheap internal heap electron framework internal compiler basicblock set loop header internal compiler basicblock electron framework internal compiler basicblock set loop header internal compiler basicblock electron framework internal compiler basicblock set loop header internal compiler basicblock electron framework internal compiler basicblock set loop header internal compiler basicblock electron framework internal compiler basicblock set loop header internal compiler basicblock electron framework electronmain dyld start note it does launch if rosetta is enabled for the executable personal note i am running element in a vm because the homeserver i need to connect to requires a vpn and i don t want to enable the vpn system wide on my main machine because it s extremely slow and because of privacy concerns operating system macos ventura in a virtual machine application version how did you install the app homeserver matrix org will you send logs no
| 1
|
113,796
| 9,663,557,979
|
IssuesEvent
|
2019-05-21 01:15:29
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
monitoring - the app monitoring-operator fails to upgrade manually
|
[zube]: To Test area/monitoring kind/bug-qa status/ready-for-review status/reopened status/resolved status/to-test team/cn
|
<!--
Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue
For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.
-->
**What kind of request is this (question/bug/enhancement/feature request):**
Bug
**Steps to reproduce (least amount of steps as possible):**
- run Rancher: v2.2.3 container
- add an RKE EC2 cluster
- enable cluster monitoring
- upgrade Rancher to v2.2.4-rc5
- go to project `system` - `Apps`
- upgrade the app `cluster-monitoring` up to date (v0.0.3)
- wait for `cluster-monitoring` to be active
- upgrade the app `monitoring-operator` up to date (v0.0.3)
**Result:**
- upgrading fails with the following error:
Failed to install app monitoring-operator. Error: UPGRADE FAILED: Deployment.apps "prometheus-operator-monitoring-operator" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"prometheus-operator", "chart":"prometheus-operator-0.0.1", "release":"monitoring-operator"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
<img width="1376" alt="screenshot" src="https://user-images.githubusercontent.com/6218999/57895372-2229ab00-7800-11e9-8b27-36ae6cda3646.png">
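The error text is Kubernetes refusing to mutate `spec.selector` on an existing apps/v1 Deployment; that field is immutable, and the upgrade appears to trip over it because the selector labels embed the chart version (`chart: prometheus-operator-0.0.1`). A pre-flight check can predict the rejection by diffing selectors before upgrading; a minimal sketch, assuming deployments as plain dicts rather than a real Kubernetes client:

```python
def selector_change_blocked(old_deploy, new_deploy):
    """Return True when an upgrade would be rejected because
    spec.selector differs; apps/v1 treats the selector as immutable."""
    old_sel = old_deploy["spec"]["selector"]["matchLabels"]
    new_sel = new_deploy["spec"]["selector"]["matchLabels"]
    return old_sel != new_sel


# Selector labels taken from the error message above; the 0.0.3 value
# is an assumption about what the upgraded chart would render.
old = {"spec": {"selector": {"matchLabels": {
    "app": "prometheus-operator",
    "chart": "prometheus-operator-0.0.1",
    "release": "monitoring-operator"}}}}
new = {"spec": {"selector": {"matchLabels": {
    "app": "prometheus-operator",
    "chart": "prometheus-operator-0.0.3",
    "release": "monitoring-operator"}}}}
```

When the check fires, the usual remediation is to delete and recreate the Deployment (or drop the version label from the selector in the chart), since in-place selector edits are never accepted.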
|
2.0
|
monitoring - the app monitoring-operator fails to upgrade manually - <!--
Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue
For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.
-->
**What kind of request is this (question/bug/enhancement/feature request):**
Bug
**Steps to reproduce (least amount of steps as possible):**
- run Rancher: v2.2.3 container
- add an RKE EC2 cluster
- enable cluster monitoring
- upgrade Rancher to v2.2.4-rc5
- go to project `system` - `Apps`
- upgrade the app `cluster-monitoring` up to date (v0.0.3)
- wait for `cluster-monitoring` to be active
- upgrade the app `monitoring-operator` up to date (v0.0.3)
**Result:**
- upgrading fails with the following error:
Failed to install app monitoring-operator. Error: UPGRADE FAILED: Deployment.apps "prometheus-operator-monitoring-operator" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"prometheus-operator", "chart":"prometheus-operator-0.0.1", "release":"monitoring-operator"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
<img width="1376" alt="screenshot" src="https://user-images.githubusercontent.com/6218999/57895372-2229ab00-7800-11e9-8b27-36ae6cda3646.png">
|
non_defect
|
monitoring the app monitoring operator fails to upgrade manually please search for existing issues first then read to see what we expect in an issue for security issues please email security rancher com instead of posting a public issue in github you may but are not required to use the gpg key located on keybase what kind of request is this question bug enhancement feature request bug steps to reproduce least amount of steps as possible run rancher container add an rke cluster enable cluster monitoring upgrade rancher to go to project system apps upgrade the app cluster monitoring up to date wait for cluster monitoring to be active upgrade the app monitoring operator up to date result upgrading fails with the following error failed to install app monitoring operator error upgrade failed deployment apps prometheus operator monitoring operator is invalid spec selector invalid value labelselector matchlabels map string app prometheus operator chart prometheus operator release monitoring operator matchexpressions labelselectorrequirement nil field is immutable img width alt screenshot src
| 0
|
73,516
| 24,667,113,901
|
IssuesEvent
|
2022-10-18 11:09:31
|
scoutplan/scoutplan
|
https://api.github.com/repos/scoutplan/scoutplan
|
closed
|
[Scoutplan Production/production] NoMethodError: undefined method `event_organizer?' for nil:NilClass
|
defect
|
## Backtrace
line 32 of [PROJECT_ROOT]/app/policies/event_policy.rb: rsvps?
line 36 of [PROJECT_ROOT]/app/policies/event_policy.rb: organize?
line 1 of [PROJECT_ROOT]/app/views/events/partials/event_row/_organize.slim: _app_views_events_partials_event_row__organize_slim___4066292251475156700_202500
[View full backtrace and more info at honeybadger.io](https://app.honeybadger.io/projects/97676/faults/89279791)
|
1.0
|
[Scoutplan Production/production] NoMethodError: undefined method `event_organizer?' for nil:NilClass - ## Backtrace
line 32 of [PROJECT_ROOT]/app/policies/event_policy.rb: rsvps?
line 36 of [PROJECT_ROOT]/app/policies/event_policy.rb: organize?
line 1 of [PROJECT_ROOT]/app/views/events/partials/event_row/_organize.slim: _app_views_events_partials_event_row__organize_slim___4066292251475156700_202500
[View full backtrace and more info at honeybadger.io](https://app.honeybadger.io/projects/97676/faults/89279791)
|
defect
|
nomethoderror undefined method event organizer for nil nilclass backtrace line of app policies event policy rb rsvps line of app policies event policy rb organize line of app views events partials event row organize slim app views events partials event row organize slim
| 1
|
161,243
| 6,111,431,454
|
IssuesEvent
|
2017-06-21 17:01:20
|
TerraFusion/basicFusion
|
https://api.github.com/repos/TerraFusion/basicFusion
|
opened
|
Change COMPUTE_TERRA variables
|
enhancement Medium Priority
|
The variables need to be changed so that they are initially empty, then users will set the variables to what they need.
|
1.0
|
Change COMPUTE_TERRA variables - The variables need to be changed so that they are initially empty, then users will set the variables to what they need.
|
non_defect
|
change compute terra variables the variables need to be changed so that they are initially empty then users will set the variables to what they need
| 0
|
15,599
| 10,325,230,410
|
IssuesEvent
|
2019-09-01 15:43:38
|
terraform-providers/terraform-provider-azurerm
|
https://api.github.com/repos/terraform-providers/terraform-provider-azurerm
|
closed
|
Duplicate subnet names results in only one subnet created but terraform reporting success
|
bug service/subnets
|
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
```
Terraform v0.11.7
+ provider.azurerm v1.9.0
```
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* azurerm_subnet
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
locals {
name = "demo"
}
resource "azurerm_resource_group" "rg" {
name = "${local.name}"
location = "northeurope"
}
resource "azurerm_virtual_network" "vnet" {
name = "${local.name}vnet"
location = "northeurope"
address_space = ["172.10.0.0/19"]
resource_group_name = "${azurerm_resource_group.rg.name}"
}
resource "azurerm_subnet" "app_subnet" {
name = "${format("%s-app", local.name)}"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
resource_group_name = "${azurerm_resource_group.rg.name}"
address_prefix = "172.10.0.0/23"
}
resource "azurerm_subnet" "database_subnet" {
name = "${format("%s-app", local.name)}"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
resource_group_name = "${azurerm_resource_group.rg.name}"
address_prefix = "172.10.24.0/24"
}
```
### Output
```
(cloud-tools) ahmed@yv-666:~/Projects/null/vnetissue# terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ azurerm_resource_group.rg
id: <computed>
location: "northeurope"
name: "demo"
tags.%: <computed>
+ azurerm_subnet.app_subnet
id: <computed>
address_prefix: "172.10.0.0/23"
ip_configurations.#: <computed>
name: "demo-app"
resource_group_name: "demo"
virtual_network_name: "demovnet"
+ azurerm_subnet.database_subnet
id: <computed>
address_prefix: "172.10.24.0/24"
ip_configurations.#: <computed>
name: "demo-app"
resource_group_name: "demo"
virtual_network_name: "demovnet"
+ azurerm_virtual_network.vnet
id: <computed>
address_space.#: "1"
address_space.0: "172.10.0.0/19"
location: "northeurope"
name: "demovnet"
resource_group_name: "demo"
subnet.#: <computed>
tags.%: <computed>
Plan: 4 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
azurerm_resource_group.rg: Creating...
location: "" => "northeurope"
name: "" => "demo"
tags.%: "" => "<computed>"
azurerm_resource_group.rg: Creation complete after 0s (ID: /subscriptions/null/resourceGroups/demo)
azurerm_virtual_network.vnet: Creating...
address_space.#: "" => "1"
address_space.0: "" => "172.10.0.0/19"
location: "" => "northeurope"
name: "" => "demovnet"
resource_group_name: "" => "demo"
subnet.#: "" => "<computed>"
tags.%: "" => "<computed>"
azurerm_virtual_network.vnet: Still creating... (10s elapsed)
azurerm_virtual_network.vnet: Creation complete after 11s (ID: /subscriptions/null...osoft.Network/virtualNetworks/demovnet)
azurerm_subnet.database_subnet: Creating...
address_prefix: "" => "172.10.24.0/24"
ip_configurations.#: "" => "<computed>"
name: "" => "demo-app"
resource_group_name: "" => "demo"
virtual_network_name: "" => "demovnet"
azurerm_subnet.app_subnet: Creating...
address_prefix: "" => "172.10.0.0/23"
ip_configurations.#: "" => "<computed>"
name: "" => "demo-app"
resource_group_name: "" => "demo"
virtual_network_name: "" => "demovnet"
azurerm_subnet.app_subnet: Creation complete after 1s (ID: /subscriptions/null...tualNetworks/demovnet/subnets/demo-app)
azurerm_subnet.database_subnet: Still creating... (10s elapsed)
azurerm_subnet.database_subnet: Still creating... (20s elapsed)
azurerm_subnet.database_subnet: Still creating... (30s elapsed)
azurerm_subnet.database_subnet: Still creating... (40s elapsed)
azurerm_subnet.database_subnet: Still creating... (50s elapsed)
azurerm_subnet.database_subnet: Still creating... (1m0s elapsed)
azurerm_subnet.database_subnet: Creation complete after 1m1s (ID: /subscriptions/null...tualNetworks/demovnet/subnets/demo-app)
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
(cloud-tools) ahmed@yv-666:~/Projects/null/vnetissue# terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
azurerm_resource_group.rg: Refreshing state... (ID: /subscriptions/null/resourceGroups/demo)
azurerm_virtual_network.vnet: Refreshing state... (ID: /subscriptions/null...osoft.Network/virtualNetworks/demovnet)
azurerm_subnet.database_subnet: Refreshing state... (ID: /subscriptions/null...tualNetworks/demovnet/subnets/demo-app)
azurerm_subnet.app_subnet: Refreshing state... (ID: /subscriptions/null...tualNetworks/demovnet/subnets/demo-app)
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
~ azurerm_subnet.app_subnet
address_prefix: "172.10.24.0/24" => "172.10.0.0/23"
Plan: 0 to add, 1 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
(cloud-tools) ahmed@yv-666:~/Projects/null/vnetissue#
```
### Expected Behavior
Terraform throws an error during plan or apply phase stating that there are duplicate names.
### Actual Behavior
Plan shows that it will create all resources. In reality only the first subnet is created since the second one has the same name. Running plan (or apply) a second time shows that terraform wants to create the missing subnet but it sees it as a change rather than create since it retrieves details for the first subnet, this appears as an address change.
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform plan`
2. `terraform apply`
3. `terraform plan`
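The root cause described above is that Azure keys subnets by name, so the second PUT with the same `demo-app` name overwrites the first while Terraform reports both as created. A config-level lint can catch this before apply; here is a minimal sketch that operates on already-resolved (address, name) pairs rather than parsing real HCL:

```python
from collections import defaultdict


def find_duplicate_names(resources):
    """resources: iterable of (terraform_address, resolved_name) pairs.
    Returns a dict of names claimed by more than one resource address."""
    by_name = defaultdict(list)
    for address, name in resources:
        by_name[name].append(address)
    return {n: addrs for n, addrs in by_name.items() if len(addrs) > 1}


# Mirrors the two subnet resources from the configuration above,
# both of which interpolate to the name "demo-app".
dups = find_duplicate_names([
    ("azurerm_subnet.app_subnet", "demo-app"),
    ("azurerm_subnet.database_subnet", "demo-app"),
])
```

Running such a check in CI would turn the silent overwrite into a hard failure at plan time, which is the behavior the report expects from the provider itself.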
|
1.0
|
Duplicate subnet names results in only one subnet created but terraform reporting success - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
```
Terraform v0.11.7
+ provider.azurerm v1.9.0
```
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* azurerm_subnet
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
locals {
name = "demo"
}
resource "azurerm_resource_group" "rg" {
name = "${local.name}"
location = "northeurope"
}
resource "azurerm_virtual_network" "vnet" {
name = "${local.name}vnet"
location = "northeurope"
address_space = ["172.10.0.0/19"]
resource_group_name = "${azurerm_resource_group.rg.name}"
}
resource "azurerm_subnet" "app_subnet" {
name = "${format("%s-app", local.name)}"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
resource_group_name = "${azurerm_resource_group.rg.name}"
address_prefix = "172.10.0.0/23"
}
resource "azurerm_subnet" "database_subnet" {
name = "${format("%s-app", local.name)}"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
resource_group_name = "${azurerm_resource_group.rg.name}"
address_prefix = "172.10.24.0/24"
}
```
### Output
```
(cloud-tools) ahmed@yv-666:~/Projects/null/vnetissue# terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ azurerm_resource_group.rg
id: <computed>
location: "northeurope"
name: "demo"
tags.%: <computed>
+ azurerm_subnet.app_subnet
id: <computed>
address_prefix: "172.10.0.0/23"
ip_configurations.#: <computed>
name: "demo-app"
resource_group_name: "demo"
virtual_network_name: "demovnet"
+ azurerm_subnet.database_subnet
id: <computed>
address_prefix: "172.10.24.0/24"
ip_configurations.#: <computed>
name: "demo-app"
resource_group_name: "demo"
virtual_network_name: "demovnet"
+ azurerm_virtual_network.vnet
id: <computed>
address_space.#: "1"
address_space.0: "172.10.0.0/19"
location: "northeurope"
name: "demovnet"
resource_group_name: "demo"
subnet.#: <computed>
tags.%: <computed>
Plan: 4 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
azurerm_resource_group.rg: Creating...
location: "" => "northeurope"
name: "" => "demo"
tags.%: "" => "<computed>"
azurerm_resource_group.rg: Creation complete after 0s (ID: /subscriptions/null/resourceGroups/demo)
azurerm_virtual_network.vnet: Creating...
address_space.#: "" => "1"
address_space.0: "" => "172.10.0.0/19"
location: "" => "northeurope"
name: "" => "demovnet"
resource_group_name: "" => "demo"
subnet.#: "" => "<computed>"
tags.%: "" => "<computed>"
azurerm_virtual_network.vnet: Still creating... (10s elapsed)
azurerm_virtual_network.vnet: Creation complete after 11s (ID: /subscriptions/null...osoft.Network/virtualNetworks/demovnet)
azurerm_subnet.database_subnet: Creating...
address_prefix: "" => "172.10.24.0/24"
ip_configurations.#: "" => "<computed>"
name: "" => "demo-app"
resource_group_name: "" => "demo"
virtual_network_name: "" => "demovnet"
azurerm_subnet.app_subnet: Creating...
address_prefix: "" => "172.10.0.0/23"
ip_configurations.#: "" => "<computed>"
name: "" => "demo-app"
resource_group_name: "" => "demo"
virtual_network_name: "" => "demovnet"
azurerm_subnet.app_subnet: Creation complete after 1s (ID: /subscriptions/null...tualNetworks/demovnet/subnets/demo-app)
azurerm_subnet.database_subnet: Still creating... (10s elapsed)
azurerm_subnet.database_subnet: Still creating... (20s elapsed)
azurerm_subnet.database_subnet: Still creating... (30s elapsed)
azurerm_subnet.database_subnet: Still creating... (40s elapsed)
azurerm_subnet.database_subnet: Still creating... (50s elapsed)
azurerm_subnet.database_subnet: Still creating... (1m0s elapsed)
azurerm_subnet.database_subnet: Creation complete after 1m1s (ID: /subscriptions/null...tualNetworks/demovnet/subnets/demo-app)
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
(cloud-tools) ahmed@yv-666:~/Projects/null/vnetissue# terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
azurerm_resource_group.rg: Refreshing state... (ID: /subscriptions/null/resourceGroups/demo)
azurerm_virtual_network.vnet: Refreshing state... (ID: /subscriptions/null...osoft.Network/virtualNetworks/demovnet)
azurerm_subnet.database_subnet: Refreshing state... (ID: /subscriptions/null...tualNetworks/demovnet/subnets/demo-app)
azurerm_subnet.app_subnet: Refreshing state... (ID: /subscriptions/null...tualNetworks/demovnet/subnets/demo-app)
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
~ azurerm_subnet.app_subnet
address_prefix: "172.10.24.0/24" => "172.10.0.0/23"
Plan: 0 to add, 1 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
(cloud-tools) ahmed@yv-666:~/Projects/null/vnetissue#
```
### Expected Behavior
Terraform throws an error during plan or apply phase stating that there are duplicate names.
### Actual Behavior
Plan shows that it will create all resources. In reality only the first subnet is created, since the second one has the same name. Running plan (or apply) a second time shows that Terraform wants to create the missing subnet, but it treats this as a change rather than a create: because it retrieves details for the first subnet, the difference appears as an address change.
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform plan`
2. `terraform apply`
3. `terraform plan`
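As an aside, the collision described above can be caught before apply with a simple pre-flight check over the planned resource names. The sketch below is illustrative only and not part of Terraform or the provider; the name `demo-database` is a hypothetical example of what the second subnet was presumably meant to be called:

```python
from collections import Counter

def find_duplicate_names(names):
    """Return, sorted, the names that occur more than once in a planned resource set."""
    counts = Counter(names)
    return sorted(name for name, count in counts.items() if count > 1)

# Both subnets in the configuration above render to "demo-app" because the
# database subnet reuses the app subnet's format("%s-app", ...) template,
# so a pre-flight check would flag the collision before Azure silently
# merges the two resources into one.
print(find_duplicate_names(["demo-app", "demo-app"]))       # collision detected
print(find_duplicate_names(["demo-app", "demo-database"]))  # no collision
```

A check like this could run against the JSON plan output before `terraform apply`, failing fast instead of reporting a spurious success.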
|
non_defect
|
duplicate subnet names results in only one subnet created but terraform reporting success community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform version terraform provider azurerm affected resource s azurerm subnet terraform configuration files hcl locals name demo resource azurerm resource group rg name local name location northeurope resource azurerm virtual network vnet name local name vnet location northeurope address space resource group name azurerm resource group rg name resource azurerm subnet app subnet name format s app local name virtual network name azurerm virtual network vnet name resource group name azurerm resource group rg name address prefix resource azurerm subnet database subnet name format s app local name virtual network name azurerm virtual network vnet name resource group name azurerm resource group rg name address prefix output cloud tools ahmed yv projects null vnetissue terraform apply an execution plan has been generated and is shown below resource actions are indicated with the following symbols create terraform will perform the following actions azurerm resource group rg id location northeurope name demo tags azurerm subnet app subnet id address prefix ip configurations name demo app resource group name demo virtual network name demovnet azurerm subnet database subnet id address prefix ip configurations name demo app resource group name demo virtual network name demovnet azurerm virtual network vnet id address space address space location northeurope name demovnet resource group name demo subnet tags plan to add to change to destroy do you want to perform these actions terraform will perform the actions described above 
only yes will be accepted to approve enter a value yes azurerm resource group rg creating location northeurope name demo tags azurerm resource group rg creation complete after id subscriptions null resourcegroups demo azurerm virtual network vnet creating address space address space location northeurope name demovnet resource group name demo subnet tags azurerm virtual network vnet still creating elapsed azurerm virtual network vnet creation complete after id subscriptions null osoft network virtualnetworks demovnet azurerm subnet database subnet creating address prefix ip configurations name demo app resource group name demo virtual network name demovnet azurerm subnet app subnet creating address prefix ip configurations name demo app resource group name demo virtual network name demovnet azurerm subnet app subnet creation complete after id subscriptions null tualnetworks demovnet subnets demo app azurerm subnet database subnet still creating elapsed azurerm subnet database subnet still creating elapsed azurerm subnet database subnet still creating elapsed azurerm subnet database subnet still creating elapsed azurerm subnet database subnet still creating elapsed azurerm subnet database subnet still creating elapsed azurerm subnet database subnet creation complete after id subscriptions null tualnetworks demovnet subnets demo app apply complete resources added changed destroyed cloud tools ahmed yv projects null vnetissue terraform plan refreshing terraform state in memory prior to plan the refreshed state will be used to calculate this plan but will not be persisted to local or remote state storage azurerm resource group rg refreshing state id subscriptions null resourcegroups demo azurerm virtual network vnet refreshing state id subscriptions null osoft network virtualnetworks demovnet azurerm subnet database subnet refreshing state id subscriptions null tualnetworks demovnet subnets demo app azurerm subnet app subnet refreshing state id subscriptions null 
tualnetworks demovnet subnets demo app an execution plan has been generated and is shown below resource actions are indicated with the following symbols update in place terraform will perform the following actions azurerm subnet app subnet address prefix plan to add to change to destroy note you didn t specify an out parameter to save this plan so terraform can t guarantee that exactly these actions will be performed if terraform apply is subsequently run cloud tools ahmed yv projects null vnetissue expected behavior terraform throws an error during plan or apply phase stating that there are duplicate names actual behavior plan shows that it will create all resources in reality only the first subnet is created since the second one has the same name running plan or apply a second time shows that terraform wants to create the missing subnet but it sees it as a change rather than create since it retrieves details for the first subnet this appears as an address change steps to reproduce terraform plan terraform apply terraform plan
| 0
|
123,275
| 10,261,643,859
|
IssuesEvent
|
2019-08-22 10:29:45
|
chainer/chainer
|
https://api.github.com/repos/chainer/chainer
|
closed
|
flaky test: `tests/chainer_tests/functions_tests/loss_tests/test_negative_sampling.py::TestNegativeSamplingFunction`
|
cat:test pr-ongoing prio:high
|
Occurred in #7955
https://jenkins.preferred.jp/job/chainer/job/chainer_pr/1846/TEST=CHAINERX_chainer-py3,label=mn1-p100/console
>`FAIL ../../repo/tests/chainer_tests/functions_tests/loss_tests/test_negative_sampling.py::TestNegativeSamplingFunction_use_chainerx_true__chainerx_device_native:0__use_cuda_false__cuda_device_None__use_cudnn_never__cudnn_deterministic_false__autotune_false__cudnn_fast_batch_normalization_false__use_ideep_never_param_3_{dtype=float16, reduce='no', t=[-1, 1, 2]}::test_backward`
```
02:40:39 _ TestNegativeSamplingFunction_use_chainerx_true__chainerx_device_native:0__use_cuda_false__cuda_device_None__use_cudnn_never__cudnn_deterministic_false__autotune_false__cudnn_fast_batch_normalization_false__use_ideep_never_param_3_{dtype=float16, reduce='no', t=[-1, 1, 2]}.test_backward _
02:40:39
02:40:39 self = <chainer.testing._bundle.TestNegativeSamplingFunction_use_chainerx_true__chainerx_device_native:0__use_cuda_false__cud...batch_normalization_false__use_ideep_never_param_3_{dtype=float16, reduce='no', t=[-1, 1, 2]} testMethod=test_backward>
02:40:39 backend_config = <BackendConfig use_chainerx=True chainerx_device='native:0' use_cuda=False cuda_device=None use_cudnn='never' cudnn_deterministic=False autotune=False cudnn_fast_batch_normalization=False use_ideep='never'>
02:40:39
02:40:39 def test_backward(self, backend_config):
02:40:39 sampler = make_sampler(backend_config, self.label_size)
02:40:39 x_data = backend_config.get_array(self.x)
02:40:39 t_data = backend_config.get_array(self.t)
02:40:39 w_data = backend_config.get_array(self.w)
02:40:39 y_grad = backend_config.get_array(self.gy)
02:40:39
02:40:39 def f(x, w):
02:40:39 return functions.negative_sampling(
02:40:39 x, t_data, w, sampler, self.sample_size, reduce=self.reduce)
02:40:39
02:40:39 with backend_config:
02:40:39 gradient_check.check_backward(
02:40:39 > f, (x_data, w_data), y_grad, **self.check_backward_options)
02:40:39
02:40:39 backend_config = <BackendConfig use_chainerx=True chainerx_device='native:0' use_cuda=False cuda_device=None use_cudnn='never' cudnn_deterministic=False autotune=False cudnn_fast_batch_normalization=False use_ideep='never'>
02:40:39 f = <function TestNegativeSamplingFunction.test_backward.<locals>.f at 0x7f4c49ee1378>
02:40:39 sampler = <function make_sampler.<locals>.sampler at 0x7f4c49edb7b8>
02:40:39 self = <chainer.testing._bundle.TestNegativeSamplingFunction_use_chainerx_true__chainerx_device_native:0__use_cuda_false__cud...batch_normalization_false__use_ideep_never_param_3_{dtype=float16, reduce='no', t=[-1, 1, 2]} testMethod=test_backward>
02:40:39 t_data = array([-1, 1, 2], shape=(3,), dtype=int32, device='native:0')
02:40:39 w_data = array([[0.52490234, -0.421875 , -0.69189453],
02:40:39 [0.17065430, -0.70361328, 0.69091797],
02:40:39 [0.67382812, 0.270...7672119, -0.67626953],
02:40:39 [-0.38159180, -0.52099609, -0.22204590]], shape=(5, 3), dtype=float16, device='native:0')
02:40:39 x_data = array([[-0.8359375 , -0.99414062, -0.42675781],
02:40:39 [-0.2902832 , -0.98876953, -0.88037109],
02:40:39 [-0.73828125, -0.53515625, -0.97998047]], shape=(3, 3), dtype=float16, device='native:0')
02:40:39 y_grad = array([-0.5703125 , -0.9008789 , -0.65234375], shape=(3,), dtype=float16, device='native:0')
02:40:39
02:40:39 /repo/tests/chainer_tests/functions_tests/loss_tests/test_negative_sampling.py:147:
02:40:39 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
02:40:39 /workspace/conda/envs/testenv/lib/python3.6/site-packages/chainer/gradient_check.py:893: in check_backward
02:40:39 detect_nondifferentiable, is_immutable_params=False
02:40:39 /workspace/conda/envs/testenv/lib/python3.6/site-packages/chainer/gradient_check.py:466: in run
02:40:39 self._run()
02:40:39 /workspace/conda/envs/testenv/lib/python3.6/site-packages/chainer/gradient_check.py:509: in _run
02:40:39 self._compare_gradients(gx_numeric, gx_backward, directions)
02:40:39 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
02:40:39
02:40:39 self = <chainer.gradient_check._CheckBackward object at 0x7f4c49edaf28>
02:40:39 gx_numeric = array(0.02826857, shape=(), dtype=float64, device='native:0')
02:40:39 gx_backward = array(0.02892342, shape=(), dtype=float64, device='native:0')
02:40:39 directions = [array([[0.08759838, -0.17020353, 0.05803305],
02:40:39 [-0.09719047, -0.01106765, 0.18228922],
02:40:39 [-0.43758532, 0.0...7922230, -0.06600605],
02:40:39 [-0.02690936, -0.02217247, 0.31146886]], shape=(5, 3), dtype=float64, device='native:0')]
02:40:39
02:40:39 def _compare_gradients(self, gx_numeric, gx_backward, directions):
02:40:39 atol = self.atol
02:40:39 rtol = self.rtol
02:40:39 # Compare the gradients
02:40:39 try:
02:40:39 testing.assert_allclose(
02:40:39 gx_numeric, gx_backward, atol=atol, rtol=rtol)
02:40:39 except AssertionError as e:
02:40:39 eps = self.eps
02:40:39 x_data = self.x_data
02:40:39 y_grad = self.y_grad
02:40:39 f = six.StringIO()
02:40:39 f.write('check_backward failed (eps={} atol={} rtol={})\n'.format(
02:40:39 eps, atol, rtol))
02:40:39 for i, x_ in enumerate(x_data):
02:40:39 f.write('inputs[{}]:\n'.format(i))
02:40:39 f.write('{}\n'.format(x_))
02:40:39 for i, gy_ in enumerate(y_grad):
02:40:39 f.write('grad_outputs[{}]:\n'.format(i))
02:40:39 f.write('{}\n'.format(gy_))
02:40:39 for i, d_ in enumerate(directions):
02:40:39 f.write('directions[{}]:\n'.format(i))
02:40:39 f.write('{}\n'.format(d_))
02:40:39 f.write('gradients (numeric): {}\n'.format(gx_numeric))
02:40:39 f.write('gradients (backward): {}\n'.format(gx_backward))
02:40:39 f.write('\n')
02:40:39 f.write(str(e))
02:40:39 > raise AssertionError(f.getvalue())
02:40:39 E AssertionError: Parameterized test failed.
02:40:39 E
02:40:39 E Base test method: TestNegativeSamplingFunction_use_chainerx_true__chainerx_device_native:0__use_cuda_false__cuda_device_None__use_cudnn_never__cudnn_deterministic_false__autotune_false__cudnn_fast_batch_normalization_false__use_ideep_never.test_backward
02:40:39 E Test parameters:
02:40:39 E dtype: <class 'numpy.float16'>
02:40:39 E reduce: no
02:40:39 E t: [-1, 1, 2]
02:40:39 E
02:40:39 E
02:40:39 E (caused by)
02:40:39 E AssertionError: check_backward failed (eps=0.01 atol=0.0005 rtol=0.005)
02:40:39 E inputs[0]:
02:40:39 E array([[-0.8359375 , -0.99414062, -0.42675781],
02:40:39 E [-0.2902832 , -0.98876953, -0.88037109],
02:40:39 E [-0.73828125, -0.53515625, -0.97998047]], shape=(3, 3), dtype=float16, device='native:0')
02:40:39 E inputs[1]:
02:40:39 E array([[0.52490234, -0.421875 , -0.69189453],
02:40:39 E [0.17065430, -0.70361328, 0.69091797],
02:40:39 E [0.67382812, 0.27050781, -0.37402344],
02:40:39 E [-0.12481689, -0.07672119, -0.67626953],
02:40:39 E [-0.38159180, -0.52099609, -0.22204590]], shape=(5, 3), dtype=float16, device='native:0')
02:40:39 E grad_outputs[0]:
02:40:39 E array([-0.5703125 , -0.9008789 , -0.65234375], shape=(3,), dtype=float16, device='native:0')
02:40:39 E directions[0]:
02:40:39 E array([[0.08759838, -0.17020353, 0.05803305],
02:40:39 E [-0.09719047, -0.01106765, 0.18228922],
02:40:39 E [-0.43758532, 0.08800463, 0.1916482 ]], shape=(3, 3), dtype=float64, device='native:0')
02:40:39 E directions[1]:
02:40:39 E array([[0.18170974, -0.16635855, 0.08452609],
02:40:39 E [0.07180114, -0.04415334, 0.11979608],
02:40:39 E [0.13535458, 0.42491666, 0.2189418 ],
02:40:39 E [-0.11354574, -0.47922230, -0.06600605],
02:40:39 E [-0.02690936, -0.02217247, 0.31146886]], shape=(5, 3), dtype=float64, device='native:0')
02:40:39 E gradients (numeric): array(0.02826857, shape=(), dtype=float64, device='native:0')
02:40:39 E gradients (backward): array(0.02892342, shape=(), dtype=float64, device='native:0')
02:40:39 E
02:40:39 E
02:40:39 E Not equal to tolerance rtol=0.005, atol=0.0005
02:40:39 E
02:40:39 E Mismatch: 100%
02:40:39 E Max absolute difference: 0.00065485
02:40:39 E Max relative difference: 0.02264075
02:40:39 E x: array(0.028269)
02:40:39 E y: array(0.028923)
02:40:39 E
02:40:39 E assert_allclose failed:
02:40:39 E shape: () ()
02:40:39 E dtype: float64 float64
02:40:39 E i: (0,)
02:40:39 E x[i]: 0.02826857380922223
02:40:39 E y[i]: 0.02892342183000904
02:40:39 E relative error[i]: 0.022640752004916033
02:40:39 E absolute error[i]: 0.0006548480207868093
02:40:39 E x: 0.02826857
02:40:39 E y: 0.02892342
02:40:39
02:40:39 atol = 0.0005
02:40:39 d_ = array([[0.18170974, -0.16635855, 0.08452609],
02:40:39 [0.07180114, -0.04415334, 0.11979608],
02:40:39 [0.13535458, 0.4249...47922230, -0.06600605],
02:40:39 [-0.02690936, -0.02217247, 0.31146886]], shape=(5, 3), dtype=float64, device='native:0')
02:40:39 directions = [array([[0.08759838, -0.17020353, 0.05803305],
02:40:39 [-0.09719047, -0.01106765, 0.18228922],
02:40:39 [-0.43758532, 0.0...7922230, -0.06600605],
02:40:39 [-0.02690936, -0.02217247, 0.31146886]], shape=(5, 3), dtype=float64, device='native:0')]
02:40:39 eps = 0.01
02:40:39 f = <_io.StringIO object at 0x7f4c4a0e6ca8>
02:40:39 gx_backward = array(0.02892342, shape=(), dtype=float64, device='native:0')
02:40:39 gx_numeric = array(0.02826857, shape=(), dtype=float64, device='native:0')
02:40:39 gy_ = array([-0.5703125 , -0.9008789 , -0.65234375], shape=(3,), dtype=float16, device='native:0')
02:40:39 i = 1
02:40:39 rtol = 0.005
02:40:39 self = <chainer.gradient_check._CheckBackward object at 0x7f4c49edaf28>
02:40:39 x_ = array([[0.52490234, -0.421875 , -0.69189453],
02:40:39 [0.17065430, -0.70361328, 0.69091797],
02:40:39 [0.67382812, 0.270...7672119, -0.67626953],
02:40:39 [-0.38159180, -0.52099609, -0.22204590]], shape=(5, 3), dtype=float16, device='native:0')
02:40:39 x_data = (array([[-0.8359375 , -0.99414062, -0.42675781],
02:40:39 [-0.2902832 , -0.98876953, -0.88037109],
02:40:39 [-0.73828125, ...672119, -0.67626953],
02:40:39 [-0.38159180, -0.52099609, -0.22204590]], shape=(5, 3), dtype=float16, device='native:0'))
02:40:39 y_grad = (array([-0.5703125 , -0.9008789 , -0.65234375], shape=(3,), dtype=float16, device='native:0'),)
```
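The failure above is a tolerance miss rather than a crash: `assert_allclose` (following the NumPy convention, where the relative term scales the desired value) accepts `x` when `|x - y| <= atol + rtol * |y|`. Plugging in the reported values shows the margin was exceeded only slightly, which is consistent with the test being flaky rather than deterministically broken:

```python
def allclose_scalar(x, y, atol, rtol):
    """NumPy-style closeness test: accept when |x - y| <= atol + rtol * |y|."""
    return abs(x - y) <= atol + rtol * abs(y)

# Values reported in the failure above.
gx_numeric, gx_backward = 0.02826857, 0.02892342
atol, rtol = 0.0005, 0.005

diff = abs(gx_numeric - gx_backward)        # ~0.00065485
margin = atol + rtol * abs(gx_backward)     # ~0.00064462
print(diff, margin)                         # difference exceeds margin by ~1e-5
print(allclose_scalar(gx_numeric, gx_backward, atol, rtol))
```

With float16 inputs and a finite-difference step of `eps=0.01`, an overshoot this small is plausible numerical noise, which is why such failures are typically addressed by loosening the tolerances for low-precision dtypes.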
|
1.0
|
flaky test: `tests/chainer_tests/functions_tests/loss_tests/test_negative_sampling.py::TestNegativeSamplingFunction` - Occurred in #7955
https://jenkins.preferred.jp/job/chainer/job/chainer_pr/1846/TEST=CHAINERX_chainer-py3,label=mn1-p100/console
>`FAIL ../../repo/tests/chainer_tests/functions_tests/loss_tests/test_negative_sampling.py::TestNegativeSamplingFunction_use_chainerx_true__chainerx_device_native:0__use_cuda_false__cuda_device_None__use_cudnn_never__cudnn_deterministic_false__autotune_false__cudnn_fast_batch_normalization_false__use_ideep_never_param_3_{dtype=float16, reduce='no', t=[-1, 1, 2]}::test_backward`
```
02:40:39 _ TestNegativeSamplingFunction_use_chainerx_true__chainerx_device_native:0__use_cuda_false__cuda_device_None__use_cudnn_never__cudnn_deterministic_false__autotune_false__cudnn_fast_batch_normalization_false__use_ideep_never_param_3_{dtype=float16, reduce='no', t=[-1, 1, 2]}.test_backward _
02:40:39
02:40:39 self = <chainer.testing._bundle.TestNegativeSamplingFunction_use_chainerx_true__chainerx_device_native:0__use_cuda_false__cud...batch_normalization_false__use_ideep_never_param_3_{dtype=float16, reduce='no', t=[-1, 1, 2]} testMethod=test_backward>
02:40:39 backend_config = <BackendConfig use_chainerx=True chainerx_device='native:0' use_cuda=False cuda_device=None use_cudnn='never' cudnn_deterministic=False autotune=False cudnn_fast_batch_normalization=False use_ideep='never'>
02:40:39
02:40:39 def test_backward(self, backend_config):
02:40:39 sampler = make_sampler(backend_config, self.label_size)
02:40:39 x_data = backend_config.get_array(self.x)
02:40:39 t_data = backend_config.get_array(self.t)
02:40:39 w_data = backend_config.get_array(self.w)
02:40:39 y_grad = backend_config.get_array(self.gy)
02:40:39
02:40:39 def f(x, w):
02:40:39 return functions.negative_sampling(
02:40:39 x, t_data, w, sampler, self.sample_size, reduce=self.reduce)
02:40:39
02:40:39 with backend_config:
02:40:39 gradient_check.check_backward(
02:40:39 > f, (x_data, w_data), y_grad, **self.check_backward_options)
02:40:39
02:40:39 backend_config = <BackendConfig use_chainerx=True chainerx_device='native:0' use_cuda=False cuda_device=None use_cudnn='never' cudnn_deterministic=False autotune=False cudnn_fast_batch_normalization=False use_ideep='never'>
02:40:39 f = <function TestNegativeSamplingFunction.test_backward.<locals>.f at 0x7f4c49ee1378>
02:40:39 sampler = <function make_sampler.<locals>.sampler at 0x7f4c49edb7b8>
02:40:39 self = <chainer.testing._bundle.TestNegativeSamplingFunction_use_chainerx_true__chainerx_device_native:0__use_cuda_false__cud...batch_normalization_false__use_ideep_never_param_3_{dtype=float16, reduce='no', t=[-1, 1, 2]} testMethod=test_backward>
02:40:39 t_data = array([-1, 1, 2], shape=(3,), dtype=int32, device='native:0')
02:40:39 w_data = array([[0.52490234, -0.421875 , -0.69189453],
02:40:39 [0.17065430, -0.70361328, 0.69091797],
02:40:39 [0.67382812, 0.270...7672119, -0.67626953],
02:40:39 [-0.38159180, -0.52099609, -0.22204590]], shape=(5, 3), dtype=float16, device='native:0')
02:40:39 x_data = array([[-0.8359375 , -0.99414062, -0.42675781],
02:40:39 [-0.2902832 , -0.98876953, -0.88037109],
02:40:39 [-0.73828125, -0.53515625, -0.97998047]], shape=(3, 3), dtype=float16, device='native:0')
02:40:39 y_grad = array([-0.5703125 , -0.9008789 , -0.65234375], shape=(3,), dtype=float16, device='native:0')
02:40:39
02:40:39 /repo/tests/chainer_tests/functions_tests/loss_tests/test_negative_sampling.py:147:
02:40:39 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
02:40:39 /workspace/conda/envs/testenv/lib/python3.6/site-packages/chainer/gradient_check.py:893: in check_backward
02:40:39 detect_nondifferentiable, is_immutable_params=False
02:40:39 /workspace/conda/envs/testenv/lib/python3.6/site-packages/chainer/gradient_check.py:466: in run
02:40:39 self._run()
02:40:39 /workspace/conda/envs/testenv/lib/python3.6/site-packages/chainer/gradient_check.py:509: in _run
02:40:39 self._compare_gradients(gx_numeric, gx_backward, directions)
02:40:39 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
02:40:39
02:40:39 self = <chainer.gradient_check._CheckBackward object at 0x7f4c49edaf28>
02:40:39 gx_numeric = array(0.02826857, shape=(), dtype=float64, device='native:0')
02:40:39 gx_backward = array(0.02892342, shape=(), dtype=float64, device='native:0')
02:40:39 directions = [array([[0.08759838, -0.17020353, 0.05803305],
02:40:39 [-0.09719047, -0.01106765, 0.18228922],
02:40:39 [-0.43758532, 0.0...7922230, -0.06600605],
02:40:39 [-0.02690936, -0.02217247, 0.31146886]], shape=(5, 3), dtype=float64, device='native:0')]
02:40:39
02:40:39 def _compare_gradients(self, gx_numeric, gx_backward, directions):
02:40:39 atol = self.atol
02:40:39 rtol = self.rtol
02:40:39 # Compare the gradients
02:40:39 try:
02:40:39 testing.assert_allclose(
02:40:39 gx_numeric, gx_backward, atol=atol, rtol=rtol)
02:40:39 except AssertionError as e:
02:40:39 eps = self.eps
02:40:39 x_data = self.x_data
02:40:39 y_grad = self.y_grad
02:40:39 f = six.StringIO()
02:40:39 f.write('check_backward failed (eps={} atol={} rtol={})\n'.format(
02:40:39 eps, atol, rtol))
02:40:39 for i, x_ in enumerate(x_data):
02:40:39 f.write('inputs[{}]:\n'.format(i))
02:40:39 f.write('{}\n'.format(x_))
02:40:39 for i, gy_ in enumerate(y_grad):
02:40:39 f.write('grad_outputs[{}]:\n'.format(i))
02:40:39 f.write('{}\n'.format(gy_))
02:40:39 for i, d_ in enumerate(directions):
02:40:39 f.write('directions[{}]:\n'.format(i))
02:40:39 f.write('{}\n'.format(d_))
02:40:39 f.write('gradients (numeric): {}\n'.format(gx_numeric))
02:40:39 f.write('gradients (backward): {}\n'.format(gx_backward))
02:40:39 f.write('\n')
02:40:39 f.write(str(e))
02:40:39 > raise AssertionError(f.getvalue())
02:40:39 E AssertionError: Parameterized test failed.
02:40:39 E
02:40:39 E Base test method: TestNegativeSamplingFunction_use_chainerx_true__chainerx_device_native:0__use_cuda_false__cuda_device_None__use_cudnn_never__cudnn_deterministic_false__autotune_false__cudnn_fast_batch_normalization_false__use_ideep_never.test_backward
02:40:39 E Test parameters:
02:40:39 E dtype: <class 'numpy.float16'>
02:40:39 E reduce: no
02:40:39 E t: [-1, 1, 2]
02:40:39 E
02:40:39 E
02:40:39 E (caused by)
02:40:39 E AssertionError: check_backward failed (eps=0.01 atol=0.0005 rtol=0.005)
02:40:39 E inputs[0]:
02:40:39 E array([[-0.8359375 , -0.99414062, -0.42675781],
02:40:39 E [-0.2902832 , -0.98876953, -0.88037109],
02:40:39 E [-0.73828125, -0.53515625, -0.97998047]], shape=(3, 3), dtype=float16, device='native:0')
02:40:39 E inputs[1]:
02:40:39 E array([[0.52490234, -0.421875 , -0.69189453],
02:40:39 E [0.17065430, -0.70361328, 0.69091797],
02:40:39 E [0.67382812, 0.27050781, -0.37402344],
02:40:39 E [-0.12481689, -0.07672119, -0.67626953],
02:40:39 E [-0.38159180, -0.52099609, -0.22204590]], shape=(5, 3), dtype=float16, device='native:0')
02:40:39 E grad_outputs[0]:
02:40:39 E array([-0.5703125 , -0.9008789 , -0.65234375], shape=(3,), dtype=float16, device='native:0')
02:40:39 E directions[0]:
02:40:39 E array([[0.08759838, -0.17020353, 0.05803305],
02:40:39 E [-0.09719047, -0.01106765, 0.18228922],
02:40:39 E [-0.43758532, 0.08800463, 0.1916482 ]], shape=(3, 3), dtype=float64, device='native:0')
02:40:39 E directions[1]:
02:40:39 E array([[0.18170974, -0.16635855, 0.08452609],
02:40:39 E [0.07180114, -0.04415334, 0.11979608],
02:40:39 E [0.13535458, 0.42491666, 0.2189418 ],
02:40:39 E [-0.11354574, -0.47922230, -0.06600605],
02:40:39 E [-0.02690936, -0.02217247, 0.31146886]], shape=(5, 3), dtype=float64, device='native:0')
02:40:39 E gradients (numeric): array(0.02826857, shape=(), dtype=float64, device='native:0')
02:40:39 E gradients (backward): array(0.02892342, shape=(), dtype=float64, device='native:0')
02:40:39 E
02:40:39 E
02:40:39 E Not equal to tolerance rtol=0.005, atol=0.0005
02:40:39 E
02:40:39 E Mismatch: 100%
02:40:39 E Max absolute difference: 0.00065485
02:40:39 E Max relative difference: 0.02264075
02:40:39 E x: array(0.028269)
02:40:39 E y: array(0.028923)
02:40:39 E
02:40:39 E assert_allclose failed:
02:40:39 E shape: () ()
02:40:39 E dtype: float64 float64
02:40:39 E i: (0,)
02:40:39 E x[i]: 0.02826857380922223
02:40:39 E y[i]: 0.02892342183000904
02:40:39 E relative error[i]: 0.022640752004916033
02:40:39 E absolute error[i]: 0.0006548480207868093
02:40:39 E x: 0.02826857
02:40:39 E y: 0.02892342
02:40:39
02:40:39 atol = 0.0005
02:40:39 d_ = array([[0.18170974, -0.16635855, 0.08452609],
02:40:39 [0.07180114, -0.04415334, 0.11979608],
02:40:39 [0.13535458, 0.4249...47922230, -0.06600605],
02:40:39 [-0.02690936, -0.02217247, 0.31146886]], shape=(5, 3), dtype=float64, device='native:0')
02:40:39 directions = [array([[0.08759838, -0.17020353, 0.05803305],
02:40:39 [-0.09719047, -0.01106765, 0.18228922],
02:40:39 [-0.43758532, 0.0...7922230, -0.06600605],
02:40:39 [-0.02690936, -0.02217247, 0.31146886]], shape=(5, 3), dtype=float64, device='native:0')]
02:40:39 eps = 0.01
02:40:39 f = <_io.StringIO object at 0x7f4c4a0e6ca8>
02:40:39 gx_backward = array(0.02892342, shape=(), dtype=float64, device='native:0')
02:40:39 gx_numeric = array(0.02826857, shape=(), dtype=float64, device='native:0')
02:40:39 gy_ = array([-0.5703125 , -0.9008789 , -0.65234375], shape=(3,), dtype=float16, device='native:0')
02:40:39 i = 1
02:40:39 rtol = 0.005
02:40:39 self = <chainer.gradient_check._CheckBackward object at 0x7f4c49edaf28>
02:40:39 x_ = array([[0.52490234, -0.421875 , -0.69189453],
02:40:39 [0.17065430, -0.70361328, 0.69091797],
02:40:39 [0.67382812, 0.270...7672119, -0.67626953],
02:40:39 [-0.38159180, -0.52099609, -0.22204590]], shape=(5, 3), dtype=float16, device='native:0')
02:40:39 x_data = (array([[-0.8359375 , -0.99414062, -0.42675781],
02:40:39 [-0.2902832 , -0.98876953, -0.88037109],
02:40:39 [-0.73828125, ...672119, -0.67626953],
02:40:39 [-0.38159180, -0.52099609, -0.22204590]], shape=(5, 3), dtype=float16, device='native:0'))
02:40:39 y_grad = (array([-0.5703125 , -0.9008789 , -0.65234375], shape=(3,), dtype=float16, device='native:0'),)
```
|
non_defect
|
flaky test tests chainer tests functions tests loss tests test negative sampling py testnegativesamplingfunction occurred in fail repo tests chainer tests functions tests loss tests test negative sampling py testnegativesamplingfunction use chainerx true chainerx device native use cuda false cuda device none use cudnn never cudnn deterministic false autotune false cudnn fast batch normalization false use ideep never param dtype reduce no t test backward testnegativesamplingfunction use chainerx true chainerx device native use cuda false cuda device none use cudnn never cudnn deterministic false autotune false cudnn fast batch normalization false use ideep never param dtype reduce no t test backward self backend config def test backward self backend config sampler make sampler backend config self label size x data backend config get array self x t data backend config get array self t w data backend config get array self w y grad backend config get array self gy def f x w return functions negative sampling x t data w sampler self sample size reduce self reduce with backend config gradient check check backward f x data w data y grad self check backward options backend config f f at sampler sampler at self t data array shape dtype device native w data array shape dtype device native x data array shape dtype device native y grad array shape dtype device native repo tests chainer tests functions tests loss tests test negative sampling py workspace conda envs testenv lib site packages chainer gradient check py in check backward detect nondifferentiable is immutable params false workspace conda envs testenv lib site packages chainer gradient check py in run self run workspace conda envs testenv lib site packages chainer gradient check py in run self compare gradients gx numeric gx backward directions self gx numeric array shape dtype device native gx backward array shape dtype device native directions shape dtype device native def compare gradients self gx numeric gx 
backward directions atol self atol rtol self rtol compare the gradients try testing assert allclose gx numeric gx backward atol atol rtol rtol except assertionerror as e eps self eps x data self x data y grad self y grad f six stringio f write check backward failed eps atol rtol n format eps atol rtol for i x in enumerate x data f write inputs n format i f write n format x for i gy in enumerate y grad f write grad outputs n format i f write n format gy for i d in enumerate directions f write directions n format i f write n format d f write gradients numeric n format gx numeric f write gradients backward n format gx backward f write n f write str e raise assertionerror f getvalue e assertionerror parameterized test failed e e base test method testnegativesamplingfunction use chainerx true chainerx device native use cuda false cuda device none use cudnn never cudnn deterministic false autotune false cudnn fast batch normalization false use ideep never test backward e test parameters e dtype e reduce no e t e e e caused by e assertionerror check backward failed eps atol rtol e inputs e array e e shape dtype device native e inputs e array e e e e shape dtype device native e grad outputs e array shape dtype device native e directions e array e e shape dtype device native e directions e array e e e e shape dtype device native e gradients numeric array shape dtype device native e gradients backward array shape dtype device native e e e not equal to tolerance rtol atol e e mismatch e max absolute difference e max relative difference e x array e y array e e assert allclose failed e shape e dtype e i e x e y e relative error e absolute error e x e y atol d array shape dtype device native directions shape dtype device native eps f gx backward array shape dtype device native gx numeric array shape dtype device native gy array shape dtype device native i rtol self x array shape dtype device native x data array shape dtype device native y grad array shape dtype device native
| 0
|
61,607
| 17,023,737,595
|
IssuesEvent
|
2021-07-03 03:34:32
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Forskningsavdelningen nominatim record change
|
Component: nominatim Priority: trivial Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 8.18am, Wednesday, 3rd August 2011]**
Our hackerspace has moved, and we would like the old location stricken from the records.
The old place is no longer ours, and any visitor would have to turn away, confused, as there is no clue left there that we have been there.
[http://open.mapquestapi.com/nominatim/v1/details.php?place_id=5736845 Old place record]
[http://open.mapquestapi.com/nominatim/v1/details.php?place_id=2109300600 Newer but not correct place record]
The current place is at Norra Grängesbergsgatan 26 in Malmö, Sweden. Just a block away from the old one.
The best thing would be if the "Newer but not correct place record" could be changed to reflect the current location.
Source, so that you know I mean it: http://forskningsavd.se/about/
|
1.0
|
Forskningsavdelningen nominatim record change - **[Submitted to the original trac issue database at 8.18am, Wednesday, 3rd August 2011]**
Our hackerspace has moved, and we would like the old location stricken from the records.
The old place is no longer ours, and any visitor would have to turn away, confused, as there is no clue left there that we have been there.
[http://open.mapquestapi.com/nominatim/v1/details.php?place_id=5736845 Old place record]
[http://open.mapquestapi.com/nominatim/v1/details.php?place_id=2109300600 Newer but not correct place record]
The current place is at Norra Grängesbergsgatan 26 in Malmö, Sweden. Just a block away from the old one.
The best thing would be if the "Newer but not correct place record" could be changed to reflect the current location.
Source, so that you know I mean it: http://forskningsavd.se/about/
|
defect
|
forskningsavdelningen nominatim record change our hackerspace has moved and we would like the old location stricken from the records the old place is no longer ours and any visitor would have to turn away confused as there is no clue left there that we have been there the current place is at norra grngesbergsgatan in malm sweden just a block away from the old one the best thing would be if the newer but not correct place record could be changed to reflect the current location source so that you know i mean it
| 1
|
24,744
| 4,088,026,334
|
IssuesEvent
|
2016-06-01 12:27:45
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
Wrong precision generated in automatic CAST for DB2 and other databases
|
C: Functionality P: Medium T: Defect
|
The current logic is wrong and produces precisions that are too big:
```java
int scale = ((BigDecimal) converted).scale();
int precision = scale + ((BigDecimal) converted).precision();
```
This is usually not an issue, unless the database's maximum supported precision is reached
|
1.0
|
Wrong precision generated in automatic CAST for DB2 and other databases - The current logic is wrong and produces precisions that are too big:
```java
int scale = ((BigDecimal) converted).scale();
int precision = scale + ((BigDecimal) converted).precision();
```
This is usually not an issue, unless the database's maximum supported precision is reached
|
defect
|
wrong precision generated in automatic cast for and other databases the current logic is wrong and produces precisions that are too big java int scale bigdecimal converted scale int precision scale bigdecimal converted precision this is usually not an issue unless the database s maximum supported precision is reached
| 1
|
283,413
| 24,546,155,747
|
IssuesEvent
|
2022-10-12 08:57:23
|
saucelabs/forwarder
|
https://api.github.com/repos/saucelabs/forwarder
|
closed
|
tests: fix waiting for server start
|
testing
|
At the moment we use `1s` sleep to wait for servers to start this should be changed to active waiting.
|
1.0
|
tests: fix waiting for server start - At the moment we use `1s` sleep to wait for servers to start this should be changed to active waiting.
|
non_defect
|
tests fix waiting for server start at the moment we use sleep to wait for servers to start this should be changed to active waiting
| 0
|
136,482
| 12,716,684,980
|
IssuesEvent
|
2020-06-24 02:43:02
|
mikeyjwilliams/sassy-util-css
|
https://api.github.com/repos/mikeyjwilliams/sassy-util-css
|
opened
|
as a dev I want margin utility classes in media query lg
|
QA documentation enhancement
|
# as a dev I want margin utility classes in media query lg
- [ ] build out margin classes in lg media query
- [ ] test each class out
- [ ] document in margin page
|
1.0
|
as a dev I want margin utility classes in media query lg - # as a dev I want margin utility classes in media query lg
- [ ] build out margin classes in lg media query
- [ ] test each class out
- [ ] document in margin page
|
non_defect
|
as a dev i want margin utility classes in media query lg as a dev i want margin utility classes in media query lg build out margin classes in lg media query test each class out document in margin page
| 0
|
65,418
| 7,878,211,355
|
IssuesEvent
|
2018-06-26 09:33:10
|
mysociety/foi-for-councils
|
https://api.github.com/repos/mysociety/foi-for-councils
|
opened
|
Full Name field should include an aria-describedby attribute
|
f:foi has-blockers t:design
|
Found in https://github.com/mysociety/foi-for-councils/issues/22#issuecomment-396287291
> Can't do this as the fields are provided by https://github.com/ministryofjustice/govuk_elements_form_builder, I have opened an issue ministryofjustice/govuk_elements_form_builder#101
|
1.0
|
Full Name field should include an aria-describedby attribute - Found in https://github.com/mysociety/foi-for-councils/issues/22#issuecomment-396287291
> Can't do this as the fields are provided by https://github.com/ministryofjustice/govuk_elements_form_builder, I have opened an issue ministryofjustice/govuk_elements_form_builder#101
|
non_defect
|
full name field should include an aria describedby attribute found in can t do this as the fields are provided by i have opened an issue ministryofjustice govuk elements form builder
| 0
|
236,838
| 18,110,648,818
|
IssuesEvent
|
2021-09-23 03:07:56
|
Rutulpatel7077/adventofcode-go
|
https://api.github.com/repos/Rutulpatel7077/adventofcode-go
|
opened
|
ExponentPushToken[v1eiDiMpBnl8Cvbebf-MOS]
|
bug - updated documentation
|
### ExponentPushToken[v1eiDiMpBnl8Cvbebf-MOS]
**Description**:
```
Patel
Rutul
Jayeshbhai
```
***
**Device Details:**
- Hello: world
***
**Page Details:**
- Hello: world
***
**Issue created with pointout widget** https://pointout.ca
|
1.0
|
ExponentPushToken[v1eiDiMpBnl8Cvbebf-MOS] -
### ExponentPushToken[v1eiDiMpBnl8Cvbebf-MOS]
**Description**:
```
Patel
Rutul
Jayeshbhai
```
***
**Device Details:**
- Hello: world
***
**Page Details:**
- Hello: world
***
**Issue created with pointout widget** https://pointout.ca
|
non_defect
|
exponentpushtoken exponentpushtoken description patel rutul jayeshbhai device details hello world page details hello world issue created with pointout widget
| 0
|
9,101
| 8,516,884,457
|
IssuesEvent
|
2018-11-01 05:29:18
|
Microsoft/vscode-cpptools
|
https://api.github.com/repos/Microsoft/vscode-cpptools
|
closed
|
Support for custom C/C++ formatting
|
Feature Request Language Service
|
Is it possible to support custom settings for C/C++ formatting (such as indentation, new lines, spacing wrapping), similar to Visual Studio? See https://docs.microsoft.com/en-us/visualstudio/ide/reference/options-text-editor-c-cpp-formatting?view=vs-2017
|
1.0
|
Support for custom C/C++ formatting - Is it possible to support custom settings for C/C++ formatting (such as indentation, new lines, spacing wrapping), similar to Visual Studio? See https://docs.microsoft.com/en-us/visualstudio/ide/reference/options-text-editor-c-cpp-formatting?view=vs-2017
|
non_defect
|
support for custom c c formatting is it possible to support custom settings for c c formatting such as indentation new lines spacing wrapping similar to visual studio see
| 0
|
217,029
| 24,312,739,890
|
IssuesEvent
|
2022-09-30 01:14:14
|
jiw065/Springboot-demo
|
https://api.github.com/repos/jiw065/Springboot-demo
|
opened
|
CVE-2022-38751 (Medium) detected in snakeyaml-1.23.jar
|
security vulnerability
|
## CVE-2022-38751 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.23.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /Springboot-demo/spring-boot-demo/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-2.1.5.RELEASE.jar
- :x: **snakeyaml-1.23.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow.
<p>Publish Date: 2022-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38751>CVE-2022-38751</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47039">https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47039</a></p>
<p>Release Date: 2022-09-05</p>
<p>Fix Resolution: org.yaml:snakeyaml:1.31</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-38751 (Medium) detected in snakeyaml-1.23.jar - ## CVE-2022-38751 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.23.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /Springboot-demo/spring-boot-demo/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-2.1.5.RELEASE.jar
- :x: **snakeyaml-1.23.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow.
<p>Publish Date: 2022-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38751>CVE-2022-38751</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47039">https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47039</a></p>
<p>Release Date: 2022-09-05</p>
<p>Fix Resolution: org.yaml:snakeyaml:1.31</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in snakeyaml jar cve medium severity vulnerability vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file springboot demo spring boot demo pom xml path to vulnerable library root repository org yaml snakeyaml snakeyaml jar dependency hierarchy spring boot starter web release jar root library spring boot starter release jar x snakeyaml jar vulnerable library vulnerability details using snakeyaml to parse untrusted yaml files may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stackoverflow publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml step up your open source security game with mend
| 0
|
47,969
| 13,067,343,320
|
IssuesEvent
|
2020-07-31 00:09:32
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
[steamshovel] memory leak (Trac #1545)
|
Migrated from Trac combo core defect
|
http://software.icecube.wisc.edu/static_analysis/2016-02-10-030213-84904-1/report-2a4604.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-10-030213-84904-1/report-59eb46.html#EndPath
Migrated from https://code.icecube.wisc.edu/ticket/1545
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:15",
"description": "http://software.icecube.wisc.edu/static_analysis/2016-02-10-030213-84904-1/report-2a4604.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-10-030213-84904-1/report-59eb46.html#EndPath",
"reporter": "david.schultz",
"cc": "",
"resolution": "invalid",
"_ts": "1458335655846260",
"component": "combo core",
"summary": "[steamshovel] memory leak",
"priority": "major",
"keywords": "",
"time": "2016-02-10T20:10:36",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
|
1.0
|
[steamshovel] memory leak (Trac #1545) - http://software.icecube.wisc.edu/static_analysis/2016-02-10-030213-84904-1/report-2a4604.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-10-030213-84904-1/report-59eb46.html#EndPath
Migrated from https://code.icecube.wisc.edu/ticket/1545
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:15",
"description": "http://software.icecube.wisc.edu/static_analysis/2016-02-10-030213-84904-1/report-2a4604.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-10-030213-84904-1/report-59eb46.html#EndPath",
"reporter": "david.schultz",
"cc": "",
"resolution": "invalid",
"_ts": "1458335655846260",
"component": "combo core",
"summary": "[steamshovel] memory leak",
"priority": "major",
"keywords": "",
"time": "2016-02-10T20:10:36",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
|
defect
|
memory leak trac migrated from json status closed changetime description reporter david schultz cc resolution invalid ts component combo core summary memory leak priority major keywords time milestone owner hdembinski type defect
| 1
|
658,423
| 21,892,271,504
|
IssuesEvent
|
2022-05-20 03:56:44
|
space-wizards/space-station-14
|
https://api.github.com/repos/space-wizards/space-station-14
|
closed
|
Clicking an item in storage with full hands drops the item on the ground
|
Issue: Bug Priority: 2-Before Release Difficulty: 1-Easy Bug: Replicated
|
## Description
<!-- Explain your issue in detail, including the steps to reproduce it if applicable. Issues without proper explanation are liable to be closed by maintainers.-->
Taking things from a storage UI while your hands are full puts the selected item on the ground. It should instead do nothing or use the held item on the one in inventory.
|
1.0
|
Clicking an item in storage with full hands drops the item on the ground - ## Description
<!-- Explain your issue in detail, including the steps to reproduce it if applicable. Issues without proper explanation are liable to be closed by maintainers.-->
Taking things from a storage UI while your hands are full puts the selected item on the ground. It should instead do nothing or use the held item on the one in inventory.
|
non_defect
|
clicking an item in storage with full hands drops the item on the ground description taking things from a storage ui while your hands are full puts the selected item on the ground it should instead do nothing or use the held item on the one in inventory
| 0
|
184,601
| 21,784,915,380
|
IssuesEvent
|
2022-05-14 01:47:30
|
n-devs/supper-bin
|
https://api.github.com/repos/n-devs/supper-bin
|
closed
|
WS-2019-0318 (High) detected in handlebars-4.1.1.tgz - autoclosed
|
security vulnerability
|
## WS-2019-0318 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.1.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz</a></p>
<p>Path to dependency file: /supper-bin/package.json</p>
<p>Path to vulnerable library: supper-bin/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.8.tgz (Root Library)
- jest-23.6.0.tgz
- jest-cli-23.6.0.tgz
- istanbul-api-1.3.7.tgz
- istanbul-reports-1.5.1.tgz
- :x: **handlebars-4.1.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In "showdownjs/showdown", versions prior to v4.4.5 are vulnerable against Regular expression Denial of Service (ReDOS) once receiving specially-crafted templates.
<p>Publish Date: 2019-10-20
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b>WS-2019-0318</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1300">https://www.npmjs.com/advisories/1300</a></p>
<p>Release Date: 2019-12-01</p>
<p>Fix Resolution: handlebars - 4.4.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2019-0318 (High) detected in handlebars-4.1.1.tgz - autoclosed - ## WS-2019-0318 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.1.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz</a></p>
<p>Path to dependency file: /supper-bin/package.json</p>
<p>Path to vulnerable library: supper-bin/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.8.tgz (Root Library)
- jest-23.6.0.tgz
- jest-cli-23.6.0.tgz
- istanbul-api-1.3.7.tgz
- istanbul-reports-1.5.1.tgz
- :x: **handlebars-4.1.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In "showdownjs/showdown", versions prior to v4.4.5 are vulnerable against Regular expression Denial of Service (ReDOS) once receiving specially-crafted templates.
<p>Publish Date: 2019-10-20
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b>WS-2019-0318</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1300">https://www.npmjs.com/advisories/1300</a></p>
<p>Release Date: 2019-12-01</p>
<p>Fix Resolution: handlebars - 4.4.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
ws high detected in handlebars tgz autoclosed ws high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file supper bin package json path to vulnerable library supper bin node modules handlebars package json dependency hierarchy react scripts tgz root library jest tgz jest cli tgz istanbul api tgz istanbul reports tgz x handlebars tgz vulnerable library vulnerability details in showdownjs showdown versions prior to are vulnerable against regular expression denial of service redos once receiving specially crafted templates publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource
| 0
|
73,632
| 24,727,525,958
|
IssuesEvent
|
2022-10-20 15:01:16
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
Cannot pass null values as arguments for PL/SQL TABLE types
|
T: Defect C: Functionality C: DB: Oracle P: Medium R: Worksforme E: Professional Edition E: Enterprise Edition
|
Another spin-off from https://github.com/jOOQ/jOOQ/issues/14097, with details to follow
|
1.0
|
Cannot pass null values as arguments for PL/SQL TABLE types - Another spin-off from https://github.com/jOOQ/jOOQ/issues/14097, with details to follow
|
defect
|
cannot pass null values as arguments for pl sql table types another spin off from with details to follow
| 1
|
5,660
| 2,610,192,837
|
IssuesEvent
|
2015-02-26 19:00:54
|
chrsmith/quchuseban
|
https://api.github.com/repos/chrsmith/quchuseban
|
opened
|
引荐怎么去除色斑最有效
|
auto-migrated Priority-Medium Type-Defect
|
```
《摘要》
好像离了太久,我忘了过去自己的样子,一副沧桑的玉颜,��
�透过明镜的反射里,我对他存留一片戒心,季节到了秋,蝉�
��依然很脆鸣,它们在枝头不住地告知,我们都将老去,于是
,听的见山林中那一阵一阵的悲泣声。怎么去除色斑最有效��
�
《客户案例》
黄褐斑最好的治疗方法,
我四十三了,孩子也大了,现在也不用我管了,可自己的事��
�还得自己做,主要是我脸上的黄褐斑已经长了好几年了,以�
��家里事情多也没时间管,现在总算有时间管管自己了,就去
美容院做做这张脸,虽然里面态度和服务都挺好的,可祛斑��
�果并不怎么明显,我就想上网查一下祛斑的方法吧,网上的�
��西比较全面一些,没想到网上的祛斑产品那么多,看的我眼
花缭乱的.也不知道用什么好,祛斑问题这可是一个大问题,�
��能忽视的。</br>
祛斑成为我目前生活中的一件大事。我开始网罗各种有��
�祛斑的信息,做祛斑面膜啊,美容啊,也使用过一些祛斑的�
��品,但是效果还是不太明显,有的刚开始还有一点效果,感
觉变淡了一点,但是,过几天又变得清晰可见了。总之,用��
�这么多的方法,就是没彻底根除掉。朋友们也热心地帮我打�
��有关祛斑的产品,有一次的我的朋友突然打电话告诉我她的
一个远房表妹以前脸上也有很多斑,前几天她去参加他表妹��
�礼时意外的发现人家脸上的斑不见了,而且皮肤也变的白皙�
��嫩。她热心的告诉我,她表妹用的是一款名为黛芙薇尔的产
品,纯天然精华,绿色安全,建议我也尝试一下。我听到之��
�异常兴奋,这次总算找到一款有效的产品。于是我进入产品�
��官网,详细的了解了一下产品,随即订购了两个周期。我相
信,这款产品能帮我圆祛斑之梦。</br>
两个周期使用完之后,脸上的斑点真的奇迹般地消失了��
�我不敢相信这是真的,还想着自己是不是在做梦呢?因为之前
我每天晚上都会梦见祛斑后的自己变得非常漂亮,自信。但��
�晨醒来发现却是一场梦!现在黛芙薇尔终于圆了我这个白雪公
主的梦!
嗯,不错。今天同事夸我了,这个同事我平时叫她姐,她对��
�也像是对小妹妹是的,她说如果不靠近看已经看不到你脸上�
��斑了,皮肤较以前白嫩光滑了许多,没想到黛芙薇尔真的会
给我带来这样大的效果,这是我以前梦寐以求,现在终于实��
�了。看来还是我懒了,不然早点给自己找到黛芙薇尔不就不�
��被同事说了,挺不好意思的。但是还要感谢同事姐姐给我敲
响了警钟!黛芙薇尔怎么样,只有自己试了才知道!我真的觉得
不错!值得朋友们一试!
阅读了怎么去除色斑最有效,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
怎么去除色斑最有效,同时为您分享祛斑小方法
一本中医古书上看到了一则去除雀斑的方子:将黄豆(黄豆��
�生的)浸泡在醋中一个月,每天服用几粒,坚持一段时间即�
��完全消除。 (去斑讲究内外兼修,这点值得一试) 。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 5:34
|
1.0
|
引荐怎么去除色斑最有效 - ```
《摘要》
好像离了太久,我忘了过去自己的样子,一副沧桑的玉颜,��
�透过明镜的反射里,我对他存留一片戒心,季节到了秋,蝉�
��依然很脆鸣,它们在枝头不住地告知,我们都将老去,于是
,听的见山林中那一阵一阵的悲泣声。怎么去除色斑最有效��
�
《客户案例》
黄褐斑最好的治疗方法,
我四十三了,孩子也大了,现在也不用我管了,可自己的事��
�还得自己做,主要是我脸上的黄褐斑已经长了好几年了,以�
��家里事情多也没时间管,现在总算有时间管管自己了,就去
美容院做做这张脸,虽然里面态度和服务都挺好的,可祛斑��
�果并不怎么明显,我就想上网查一下祛斑的方法吧,网上的�
��西比较全面一些,没想到网上的祛斑产品那么多,看的我眼
花缭乱的.也不知道用什么好,祛斑问题这可是一个大问题,�
��能忽视的。</br>
祛斑成为我目前生活中的一件大事。我开始网罗各种有��
�祛斑的信息,做祛斑面膜啊,美容啊,也使用过一些祛斑的�
��品,但是效果还是不太明显,有的刚开始还有一点效果,感
觉变淡了一点,但是,过几天又变得清晰可见了。总之,用��
�这么多的方法,就是没彻底根除掉。朋友们也热心地帮我打�
��有关祛斑的产品,有一次的我的朋友突然打电话告诉我她的
一个远房表妹以前脸上也有很多斑,前几天她去参加他表妹��
�礼时意外的发现人家脸上的斑不见了,而且皮肤也变的白皙�
��嫩。她热心的告诉我,她表妹用的是一款名为黛芙薇尔的产
品,纯天然精华,绿色安全,建议我也尝试一下。我听到之��
�异常兴奋,这次总算找到一款有效的产品。于是我进入产品�
��官网,详细的了解了一下产品,随即订购了两个周期。我相
信,这款产品能帮我圆祛斑之梦。</br>
两个周期使用完之后,脸上的斑点真的奇迹般地消失了��
�我不敢相信这是真的,还想着自己是不是在做梦呢?因为之前
我每天晚上都会梦见祛斑后的自己变得非常漂亮,自信。但��
�晨醒来发现却是一场梦!现在黛芙薇尔终于圆了我这个白雪公
主的梦!
嗯,不错。今天同事夸我了,这个同事我平时叫她姐,她对��
�也像是对小妹妹是的,她说如果不靠近看已经看不到你脸上�
��斑了,皮肤较以前白嫩光滑了许多,没想到黛芙薇尔真的会
给我带来这样大的效果,这是我以前梦寐以求,现在终于实��
�了。看来还是我懒了,不然早点给自己找到黛芙薇尔不就不�
��被同事说了,挺不好意思的。但是还要感谢同事姐姐给我敲
响了警钟!黛芙薇尔怎么样,只有自己试了才知道!我真的觉得
不错!值得朋友们一试!
阅读了怎么去除色斑最有效,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
怎么去除色斑最有效,同时为您分享祛斑小方法
一本中医古书上看到了一则去除雀斑的方子:将黄豆(黄豆��
�生的)浸泡在醋中一个月,每天服用几粒,坚持一段时间即�
��完全消除。 (去斑讲究内外兼修,这点值得一试) 。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 5:34
|
defect
|
引荐怎么去除色斑最有效 《摘要》 好像离了太久,我忘了过去自己的样子,一副沧桑的玉颜,�� �透过明镜的反射里,我对他存留一片戒心,季节到了秋,蝉� ��依然很脆鸣,它们在枝头不住地告知,我们都将老去,于是 ,听的见山林中那一阵一阵的悲泣声。怎么去除色斑最有效�� � 《客户案例》 黄褐斑最好的治疗方法 我四十三了,孩子也大了,现在也不用我管了,可自己的事�� �还得自己做,主要是我脸上的黄褐斑已经长了好几年了,以� ��家里事情多也没时间管,现在总算有时间管管自己了,就去 美容院做做这张脸,虽然里面态度和服务都挺好的,可祛斑�� �果并不怎么明显,我就想上网查一下祛斑的方法吧,网上的� ��西比较全面一些,没想到网上的祛斑产品那么多,看的我眼 花缭乱的 也不知道用什么好,祛斑问题这可是一个大问题,� ��能忽视的。 祛斑成为我目前生活中的一件大事。我开始网罗各种有�� �祛斑的信息,做祛斑面膜啊,美容啊,也使用过一些祛斑的� ��品,但是效果还是不太明显,有的刚开始还有一点效果,感 觉变淡了一点,但是,过几天又变得清晰可见了。总之,用�� �这么多的方法,就是没彻底根除掉。朋友们也热心地帮我打� ��有关祛斑的产品,有一次的我的朋友突然打电话告诉我她的 一个远房表妹以前脸上也有很多斑,前几天她去参加他表妹�� �礼时意外的发现人家脸上的斑不见了,而且皮肤也变的白皙� ��嫩。她热心的告诉我,她表妹用的是一款名为黛芙薇尔的产 品,纯天然精华,绿色安全,建议我也尝试一下。我听到之�� �异常兴奋,这次总算找到一款有效的产品。于是我进入产品� ��官网,详细的了解了一下产品,随即订购了两个周期。我相 信,这款产品能帮我圆祛斑之梦。 两个周期使用完之后,脸上的斑点真的奇迹般地消失了�� �我不敢相信这是真的,还想着自己是不是在做梦呢 因为之前 我每天晚上都会梦见祛斑后的自己变得非常漂亮,自信。但�� �晨醒来发现却是一场梦 现在黛芙薇尔终于圆了我这个白雪公 主的梦 嗯,不错。今天同事夸我了,这个同事我平时叫她姐,她对�� �也像是对小妹妹是的,她说如果不靠近看已经看不到你脸上� ��斑了,皮肤较以前白嫩光滑了许多,没想到黛芙薇尔真的会 给我带来这样大的效果,这是我以前梦寐以求,现在终于实�� �了。看来还是我懒了,不然早点给自己找到黛芙薇尔不就不� ��被同事说了,挺不好意思的。但是还要感谢同事姐姐给我敲 响了警钟 黛芙薇尔怎么样,只有自己试了才知道 我真的觉得 不错 值得朋友们一试 阅读了怎么去除色斑最有效,再看脸上容易长斑的原因: 《色斑形成原因》 内部因素 一、压力 当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。 二、荷尔蒙分泌失调 避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。 三、新陈代谢缓慢 肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。 四、错误的使用化妆品 使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。 外部因素 一、紫外线 照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。 二、不良的清洁习惯 因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。 三、遗传基因 父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》 黛芙薇尔精华液真的有效果吗 
真的可以把脸上的黄褐�� �去掉吗 答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来 ,服用黛芙薇尔美白,会伤身体吗 有副作用吗 答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖 ,去除黄褐斑之后,会反弹吗 答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗 ,你们的价格有点贵,能不能便宜一点 答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗 ,我适合用黛芙薇尔精华液吗 答:黛芙薇尔适用人群: 、生理紊乱引起的黄褐斑人群 、生育引起的妊娠斑人群 、年纪增长引起的老年斑人群 、化妆品色素沉积、辐射斑人群 、长期日照引起的日晒斑人群 、肌肤暗淡急需美白的人群 《祛斑小方法》 怎么去除色斑最有效,同时为您分享祛斑小方法 一本中医古书上看到了一则去除雀斑的方子:将黄豆(黄豆�� �生的)浸泡在醋中一个月,每天服用几粒,坚持一段时间即� ��完全消除。 去斑讲究内外兼修,这点值得一试 。 original issue reported on code google com by additive gmail com on jul at
| 1
|
14,595
| 2,829,610,096
|
IssuesEvent
|
2015-05-23 02:06:28
|
awesomebing1/fuzzdb
|
https://api.github.com/repos/awesomebing1/fuzzdb
|
closed
|
http://www.rf-dimension.com/forum/entry.php?72445-NFL-FOX-CBS-!!-Baltimore-Ravens-vs-Tennessee-Titans-live-2014-Stream
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1.
2.
3.
http://www.rf-dimension.com/forum/entry.php?72445-NFL-FOX-CBS-!!-Baltimore-Raven
s-vs-Tennessee-Titans-live-2014-Stream
http://www.rf-dimension.com/forum/entry.php?72445-NFL-FOX-CBS-!!-Baltimore-Raven
s-vs-Tennessee-Titans-live-2014-Stream
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `sabujhos...@gmail.com` on 9 Nov 2014 at 4:26
|
1.0
|
http://www.rf-dimension.com/forum/entry.php?72445-NFL-FOX-CBS-!!-Baltimore-Ravens-vs-Tennessee-Titans-live-2014-Stream - ```
What steps will reproduce the problem?
1.
2.
3.
http://www.rf-dimension.com/forum/entry.php?72445-NFL-FOX-CBS-!!-Baltimore-Raven
s-vs-Tennessee-Titans-live-2014-Stream
http://www.rf-dimension.com/forum/entry.php?72445-NFL-FOX-CBS-!!-Baltimore-Raven
s-vs-Tennessee-Titans-live-2014-Stream
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `sabujhos...@gmail.com` on 9 Nov 2014 at 4:26
|
defect
|
what steps will reproduce the problem s vs tennessee titans live stream s vs tennessee titans live stream what is the expected output what do you see instead what version of the product are you using on what operating system please provide any additional information below original issue reported on code google com by sabujhos gmail com on nov at
| 1
|
77,386
| 26,959,327,362
|
IssuesEvent
|
2023-02-08 16:59:40
|
AutomatedProcessImprovement/Simod
|
https://api.github.com/repos/AutomatedProcessImprovement/Simod
|
opened
|
Don't mine all simulation parameters in each calendar optimization iteration
|
defect performance
|
The simulation parameters (`json_parameters`) don't have to be discovered in each optimization iteration.
I think that currently, at the beginning of each "calendar optimization" iteration, the arrivals, gateway probabilities, etc. are discovered. There is no need to do all of them each time.
JSON Parameters:
- Arrival distribution:
- Composed of both the `arrival_time_distribution` and `arrival_time_calendar`.
- Only discovered once in the beginning of the main SIMOD optimization (using train log).
- The same values (distribution and calendar) are going to be used during all the process.
- Gateway probabilities:
- Composed of only `gateway_branching_probabilities`.
- Depends directly on the BPMN model, so they have to be discovered when the model changes (once per iteration of the "structure optimization").
- In each "structure optimization" iteration, a new model is discovered, so its gateway probabilities are mined (either equiprobable or real) to be able to run Prosimos with it.
- Once the "structure optimization" ends, there is no need to discover them again, the probabilities of the _best_result_ are the same for the rest of SIMOD.
- Resource profiles:
- Composed of `resource_profiles`, `resource_calendars`, and `task_resource_distribution`.
- Discovered once before the beginning of the "structure optimization", with default parameter values (this result is used in all the iterations of the "structure optimization", and in the extraneous delays' discovery).
- Also, discovered once per iteration of the "calendar optimization", with the parameters of the iteration given by `fmin`, and used in that iteration.
- Once the "calendar optimization" ends, there is no need to discover them again
**Extra**: This is for the SIMOD optimization, once all the stages end, to evaluate the model against test log, all these `json_parameters` are discovered again from `train+validation`.
|
1.0
|
Don't mine all simulation parameters in each calendar optimization iteration - The simulation parameters (`json_parameters`) don't have to be discovered in each optimization iteration.
I think that currently, at the beginning of each "calendar optimization" iteration, the arrivals, gateway probabilities, etc. are discovered. There is no need to do all of them each time.
JSON Parameters:
- Arrival distribution:
- Composed of both the `arrival_time_distribution` and `arrival_time_calendar`.
- Only discovered once in the beginning of the main SIMOD optimization (using train log).
- The same values (distribution and calendar) are going to be used during all the process.
- Gateway probabilities:
- Composed of only `gateway_branching_probabilities`.
- Depends directly on the BPMN model, so they have to be discovered when the model changes (once per iteration of the "structure optimization").
- In each "structure optimization" iteration, a new model is discovered, so its gateway probabilities are mined (either equiprobable or real) to be able to run Prosimos with it.
- Once the "structure optimization" ends, there is no need to discover them again, the probabilities of the _best_result_ are the same for the rest of SIMOD.
- Resource profiles:
- Composed of `resource_profiles`, `resource_calendars`, and `task_resource_distribution`.
- Discovered once before the beginning of the "structure optimization", with default parameter values (this result is used in all the iterations of the "structure optimization", and in the extraneous delays' discovery).
- Also, discovered once per iteration of the "calendar optimization", with the parameters of the iteration given by `fmin`, and used in that iteration.
- Once the "calendar optimization" ends, there is no need to discover them again
**Extra**: This is for the SIMOD optimization, once all the stages end, to evaluate the model against test log, all these `json_parameters` are discovered again from `train+validation`.
|
defect
|
don t mine all simulation parameters in each calendar optimization iteration the simulation parameters json parameters don t have to be discovered in each optimization iteration i think that currently at the beginning of each calendar optimization iteration the arrivals gateway probabilities etc are discovered there is no need to do all of them each time json parameters arrival distribution composed of both the arrival time distribution and arrival time calendar only discovered once in the beginning of the main simod optimization using train log the same values distribution and calendar are going to be used during all the process gateway probabilities composed of only gateway branching probabilities depends directly on the bpmn model so they have to be discovered when the model changes once per iteration of the structure optimization in each structure optimization iteration a new model is discovered so its gateway probabilities are mined either equiprobable or real to be able to run prosimos with it once the structure optimization ends there is no need to discover them again the probabilities of the best result are the same for the rest of simod resource profiles composed of resource profiles resource calendars and task resource distribution discovered once before the beginning of the structure optimization with default parameter values this result is used in all the iterations of the structure optimization and in the extraneous delays discovery also discovered once per iteration of the calendar optimization with the parameters of the iteration given by fmin and used in that iteration once the calendar optimization ends there is no need to discover them again extra this is for the simod optimization once all the stages end to evaluate the model against test log all these json parameters are discovered again from train validation
| 1
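The discover-once policy proposed in the Simod record above (arrival distribution and gateway probabilities fixed early, resource profiles re-discovered per calendar iteration) can be sketched roughly as follows. All names here (`discover_once`, the cache keys, the stub discovery callables) are illustrative stand-ins, not Simod's actual API:

```python
# Minimal sketch of discover-once caching across optimization iterations.
# The keys and discovery callables are illustrative, not Simod's real API.

_param_cache = {}

def discover_once(key, discover_fn):
    """Run an expensive discovery only the first time; reuse the result after."""
    if key not in _param_cache:
        _param_cache[key] = discover_fn()
    return _param_cache[key]

def calendar_optimization_iteration(fmin_params, discover_resources):
    # Arrival distribution and gateway probabilities are fixed by this stage,
    # so they come from the cache instead of being re-discovered every trial.
    arrivals = discover_once("arrivals", lambda: {"distribution": "expon"})
    gateways = discover_once("gateways", lambda: {"g1": 0.5, "g2": 0.5})
    # Resource profiles DO depend on this iteration's parameters,
    # so they are discovered fresh each time.
    resources = discover_resources(fmin_params)
    return {"arrivals": arrivals, "gateways": gateways, "resources": resources}
```

A plain dict memo suffices here because the cached groups take no per-trial parameters; anything that varies with the `fmin` suggestion, like the resource profiles, stays outside the cache.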
|
305,749
| 23,129,388,746
|
IssuesEvent
|
2022-07-28 09:00:47
|
SeleniumHQ/seleniumhq.github.io
|
https://api.github.com/repos/SeleniumHQ/seleniumhq.github.io
|
closed
|
[🐛 Bug]: Instructions for Install Browser Driver missing steps
|
bug documentation
|
### What happened?
Summary: When following 'Install Browser Driver' instructions, user is linked to a 'Downloads' page with no information about what to download or how.
Steps to reproduce:
1. As a new Selenium Webdriver user, start at the beginning of the 'Getting Started' guide at https://www.selenium.dev/documentation/webdriver/getting_started/
2. After installing Selenium library, proceed to 'Install Browser Driver' step at https://www.selenium.dev/documentation/webdriver/getting_started/install_drivers/
3. Read instructions and proceed to Downloads link for 'Chrome/Chromium' browser
Expected Result: Links to driver options with instructions for how to download them and which one to select if unsure (it seems we're supposed to download the one that matches our browser version - so say that?)
Actual Result: List of dozens of links, no instructions, and no way for the user to know what to do next. I intuitively tried the one at the bottom that said LATEST with the most recent time stamp and it took me to a page with no zip files, just text and a version number.
### What browsers and operating systems are you seeing the problem on?
Google Chrome - Version 98.0.4758.102 (Official Build) (arm64)
MacOS Monterey v12.1
|
1.0
|
[🐛 Bug]: Instructions for Install Browser Driver missing steps - ### What happened?
Summary: When following 'Install Browser Driver' instructions, user is linked to a 'Downloads' page with no information about what to download or how.
Steps to reproduce:
1. As a new Selenium Webdriver user, start at the beginning of the 'Getting Started' guide at https://www.selenium.dev/documentation/webdriver/getting_started/
2. After installing Selenium library, proceed to 'Install Browser Driver' step at https://www.selenium.dev/documentation/webdriver/getting_started/install_drivers/
3. Read instructions and proceed to Downloads link for 'Chrome/Chromium' browser
Expected Result: Links to driver options with instructions for how to download them and which one to select if unsure (it seems we're supposed to download the one that matches our browser version - so say that?)
Actual Result: List of dozens of links, no instructions, and no way for the user to know what to do next. I intuitively tried the one at the bottom that said LATEST with the most recent time stamp and it took me to a page with no zip files, just text and a version number.
### What browsers and operating systems are you seeing the problem on?
Google Chrome - Version 98.0.4758.102 (Official Build) (arm64)
MacOS Monterey v12.1
|
non_defect
|
instructions for install browser driver missing steps what happened summary when following install browser driver instructions user is linked to a downloads page with no information about what to download or how steps to reproduce as a new selenium webdriver user start at the beginning of the the getting started guide at after installing selenium library proceed to install browser driver step at read instructions and proceed to downloads link for chrome chromium browser expected result links to driver options with instructions for how to download them and which one to select if unsure it seems we re supposed to download the one that matches our browser version so say that actual result list of dozens of links no instructions and no way for the user to know what to do next i intuitively tried the one at the bottom that said latest with the most recent time stamp and it took me to a page with no zip files just text and a version number what browsers and operating systems are you seeing the problem on google chrome version official build macos monterey
| 0
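For the driver-download confusion in the Selenium record above, the rule the docs leave implicit is: pick the chromedriver whose major version matches the installed Chrome (the reporter's Chrome 98 needs a 98.x driver). A rough sketch of that selection rule; the function name and the version list are illustrative, not part of any Selenium API:

```python
def matching_driver_version(browser_version, available):
    """Pick the driver whose major version matches the browser's major version."""
    major = browser_version.split(".")[0]
    candidates = [v for v in available if v.split(".")[0] == major]
    if not candidates:
        raise LookupError(f"no driver for Chrome {major}")
    # Prefer the newest matching build, compared numerically, not lexically.
    return max(candidates, key=lambda v: tuple(map(int, v.split("."))))
```

For example, a browser reporting `98.0.4758.102` would select a `98.x` driver from the download list, which is exactly the step the documentation page never spells out.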
|
237,383
| 19,621,044,864
|
IssuesEvent
|
2022-01-07 06:35:24
|
MohistMC/Mohist
|
https://api.github.com/repos/MohistMC/Mohist
|
closed
|
NetherPortalFix problem
|
1.12.2 More Info Needed Needs Testing Needs User Answer
|
NetherPortalFix is not working. It just uses the default Minecraft radius search algorithm instead of saving player portal position. Working fine on Forge ModLoader server.
|
1.0
|
NetherPortalFix problem - NetherPortalFix is not working. It just uses the default Minecraft radius search algorithm instead of saving player portal position. Working fine on Forge ModLoader server.
|
non_defect
|
netherportalfix problem netherportalfix is not working it just uses the default minecraft radius search algorithm instead of saving player portal position working fine on forge modloader server
| 0
|
58,336
| 16,488,280,188
|
IssuesEvent
|
2021-05-24 21:36:45
|
galasa-dev/projectmanagement
|
https://api.github.com/repos/galasa-dev/projectmanagement
|
closed
|
broken documentation links
|
Manager: zOS Batch defect documentation
|
looking at https://galasa.dev/docs/managers/zos-manager
In particular the links within zosBatchManager.
`
Notes: | The IZosBatch interface has a single method, {@link IZosBatch#submitJob(String, IZosBatchJobname)} to submit a JCL as a String and returns a IZosBatchJob instance.See ZosBatch, IZosBatch and IZosBatchJob to find out more.
-- | --
`
I think that there is a link that is not being rendered; also the links to zosBatch, IZosBatch and IZosBatchName are broken
|
1.0
|
broken documentation links - looking at https://galasa.dev/docs/managers/zos-manager
In particular the links within zosBatchManager.
`
Notes: | The IZosBatch interface has a single method, {@link IZosBatch#submitJob(String, IZosBatchJobname)} to submit a JCL as a String and returns a IZosBatchJob instance.See ZosBatch, IZosBatch and IZosBatchJob to find out more.
-- | --
`
I think that there is a link that is not being rendered; also the links to zosBatch, IZosBatch and IZosBatchName are broken
|
defect
|
broken documentation links looking at in particular the links within zosbatchmanager notes the nbsp izosbatch nbsp interface has a single method link izosbatch submitjob string izosbatchjobname to submit a jcl as a nbsp string nbsp and returns a nbsp izosbatchjob nbsp instance see nbsp zosbatch nbsp izosbatch nbsp and nbsp izosbatchjob nbsp to find out more i think that there a link that is not being rendered also the links to zosbatch izosbatch and izosbatchname are broken
| 1
|
20,432
| 3,355,888,059
|
IssuesEvent
|
2015-11-18 18:11:55
|
jarz/slimtune
|
https://api.github.com/repos/jarz/slimtune
|
closed
|
Slimtune crashes when running XNA applications which use the content pipeline
|
auto-migrated Priority-Medium Type-Defect
|
```
>What steps will reproduce the problem?
1. Download the XNA winforms content pipeline sample
(http://creators.xna.com/en-GB/sample/winforms_series2). Note this
requires installation of XNA game studio.
2. Run the sample under SlimTune 0.1.5, open cats.fbx. The application
will then crash if running in the profiler.
>What is the expected output? What do you see instead?
Application should run and SlimTune should display profile results.
>What version of the product are you using? On what operating system?
SlimTune 0.1.5 and XNA Game Studio 3.0/3.1
>Please provide any additional information below.
I initially found this bug in my own app(which uses the content pipeline),
but the sample is the easiest way to reproduce the problem.
(I have also had problems with this sample and my own app when running
under PIX, the app was throwing an exception when processing a texture(XNA
seemed to incorrectly initialize D3D in order to use D3DX to process the
texture).
```
Original issue reported on code.google.com by `dbl...@fastmail.fm` on 18 Aug 2009 at 4:03
|
1.0
|
Slimtune crashes when running XNA applications which use the content pipeline - ```
>What steps will reproduce the problem?
1. Download the XNA winforms content pipeline sample
(http://creators.xna.com/en-GB/sample/winforms_series2). Note this
requires installation of XNA game studio.
2. Run the sample under SlimTune 0.1.5, open cats.fbx. The application
will then crash if running in the profiler.
>What is the expected output? What do you see instead?
Application should run and SlimTune should display profile results.
>What version of the product are you using? On what operating system?
SlimTune 0.1.5 and XNA Game Studio 3.0/3.1
>Please provide any additional information below.
I initially found this bug in my own app(which uses the content pipeline),
but the sample is the easiest way to reproduce the problem.
(I have also had problems with this sample and my own app when running
under PIX, the app was throwing an exception when processing a texture(XNA
seemed to incorrectly initialize D3D in order to use D3DX to process the
texture).
```
Original issue reported on code.google.com by `dbl...@fastmail.fm` on 18 Aug 2009 at 4:03
|
defect
|
slimtune crashes when running xna applications which use the content pipeline what steps will reproduce the problem download the xna winforms content pipline sample note this requires instalation of xna game studio run the sample under slimtune open cats fbx the application will then crash if running in the profiler what is the expected output what do you see instead application should run and slimtune should display profile results what version of the product are you using on what operating system slimtune and xna game studio please provide any additional information below i initially found this bug in my own app which uses the content pipeline but the sample is the easiest way to reproduce the problem i have also had problems with this sample and my own app when running under pix the app was throwing an exception when processing a texture xna seemed to incorrectly initialize in order to use to process the texture original issue reported on code google com by dbl fastmail fm on aug at
| 1
|
58,707
| 16,717,744,516
|
IssuesEvent
|
2021-06-10 00:40:51
|
Rise-Vision/rise-vision-apps
|
https://api.github.com/repos/Rise-Vision/rise-vision-apps
|
opened
|
[Subscription Details] Copy Change
|
visual defect
|
Issue: "Upgrade to Unlimited" button needs to read: "Upgrade To Unlimited".

|
1.0
|
[Subscription Details] Copy Change - Issue: "Upgrade to Unlimited" button needs to read: "Upgrade To Unlimited".

|
defect
|
copy change issue upgrade to unlimited button needs to read upgrade to unlimited
| 1
|
550,815
| 16,132,881,136
|
IssuesEvent
|
2021-04-29 08:03:37
|
exeGesIS-SDM/CAMS-Mobile
|
https://api.github.com/repos/exeGesIS-SDM/CAMS-Mobile
|
closed
|
Synced existing photos don't display on Land
|
Priority 1 v4.4.4 v4.4.5
|
Regarding the downloading of existing photos to the device (https://github.com/exeGesIS-SDM/CAMS-Mobile/issues/20), Land records aren't currently showing the downloaded photos (they need to)
|
1.0
|
Synced existing photos don't display on Land - Regarding the downloading of existing photos to the device (https://github.com/exeGesIS-SDM/CAMS-Mobile/issues/20), Land records aren't currently showing the downloaded photos (they need to)
|
non_defect
|
synced existing photos don t display on land regarding the downloading of existing photos to the device land records aren t currently showing the downloaded photos they need to
| 0
|
46,055
| 13,055,845,823
|
IssuesEvent
|
2020-07-30 02:54:35
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
opened
|
DeleteUnregistered can't delete old versions of I3TrayInfo objects (Trac #518)
|
IceTray Incomplete Migration Migrated from Trac defect
|
Migrated from https://code.icecube.wisc.edu/ticket/518
```json
{
"status": "closed",
"changetime": "2009-01-16T15:40:48",
"description": "",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1232120448000000",
"component": "IceTray",
"summary": "DeleteUnregistered can't delete old versions of I3TrayInfo objects",
"priority": "normal",
"keywords": "",
"time": "2009-01-16T14:06:44",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
|
1.0
|
DeleteUnregistered can't delete old versions of I3TrayInfo objects (Trac #518) - Migrated from https://code.icecube.wisc.edu/ticket/518
```json
{
"status": "closed",
"changetime": "2009-01-16T15:40:48",
"description": "",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1232120448000000",
"component": "IceTray",
"summary": "DeleteUnregistered can't delete old versions of I3TrayInfo objects",
"priority": "normal",
"keywords": "",
"time": "2009-01-16T14:06:44",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
|
defect
|
deleteunregistered can t delete old versions of objects trac migrated from json status closed changetime description reporter troy cc resolution fixed ts component icetray summary deleteunregistered can t delete old versions of objects priority normal keywords time milestone owner troy type defect
| 1
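Tickets migrated from Trac, like the one above, embed their metadata as a JSON payload. A small sketch of pulling out the fields a triage script might want; the field names come directly from the payload shown:

```python
import json

def summarize_ticket(payload):
    """Extract the fields a migration or triage script typically needs
    from a Trac ticket dump."""
    t = json.loads(payload)
    return {
        "summary": t["summary"],
        "type": t["type"],
        "status": t["status"],
        "owner": t["owner"],
    }
```

Fields such as `_ts` and `changetime` are left out here; a real migration script would also have to decide how to map Trac's `milestone` and `component` onto issue labels.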
|
74,203
| 25,007,925,186
|
IssuesEvent
|
2022-11-03 13:19:15
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Text wobbles around as I type
|
T-Defect
|
### Steps to reproduce
1. In a room, with my cursor in either the existing composer or the new "rich" composer
2. Type a bit and notice the fonts wobble around a bit sometimes
3. Specific example - if I type a capital letter after ` - `, the dash moves upwards a bit, but if I type a lower case followed by a capital, it does not

### Outcome
#### What did you expect?
I expected the characters I had already typed to be unaffected by later characters (except where I was doing some clever input methods thing).
#### What happened instead?
Characters I have already typed sometimes wobble a bit.
### Operating system
Ubuntu 22.04
### Browser information
Firefox 106.0.2
### URL for webapp
https://develop.element.io
### Application version
Element version: 9c302f303aee-react-5f540eb25c31-js-db49cd8d1395 Olm version: 3.2.12
### Homeserver
matrix.org
### Will you send logs?
No
|
1.0
|
Text wobbles around as I type - ### Steps to reproduce
1. In a room, with my cursor in either the existing composer or the new "rich" composer
2. Type a bit and notice the fonts wobble around a bit sometimes
3. Specific example - if I type a capital letter after ` - `, the dash moves upwards a bit, but if I type a lower case followed by a capital, it does not

### Outcome
#### What did you expect?
I expected the characters I had already typed to be unaffected by later characters (except where I was doing some clever input methods thing).
#### What happened instead?
Characters I have already typed sometimes wobble a bit.
### Operating system
Ubuntu 22.04
### Browser information
Firefox 106.0.2
### URL for webapp
https://develop.element.io
### Application version
Element version: 9c302f303aee-react-5f540eb25c31-js-db49cd8d1395 Olm version: 3.2.12
### Homeserver
matrix.org
### Will you send logs?
No
|
defect
|
text wobbles around as i type steps to reproduce in a room with my cursor in either the existing composer or the new rich composer type a bit and notice the fonts wobble around a bit sometimes specific example if i type a capital letter after the dash moves upwards a bit but if i type a lower case followed by a capital it does not outcome what did you expect i expected the characters i had already typed to be unaffected by later characters except where i was doing some clever input methods thing what happened instead characters i have already typed sometimes wobble a bit operating system ubuntu browser information firefox url for webapp application version element version react js olm version homeserver matrix org will you send logs no
| 1
|
45,259
| 12,687,555,590
|
IssuesEvent
|
2020-06-20 17:06:20
|
hikaya-io/dots-frontend
|
https://api.github.com/repos/hikaya-io/dots-frontend
|
opened
|
Forgot password: add link to go back to login page
|
FE General app defect
|
**Is your feature request related to a problem? Please describe.**
On forgot password page, add a link to go back to login page
**Acceptance Criteria**
```
GIVEN I am on the Forgot password page
AND I realize I came to the page by mistake and want to go back to Log in page
AND I see a link to go back to Login page
WHEN I click on the 'Login' link
THEN I am directed to the login page
```
**Additional context**

|
1.0
|
Forgot password: add link to go back to login page - **Is your feature request related to a problem? Please describe.**
On forgot password page, add a link to go back to login page
**Acceptance Criteria**
```
GIVEN I am on the Forgot password page
AND I realize I came to the page by mistake and want to go back to Log in page
AND I see a link to go back to Login page
WHEN I click on the 'Login' link
THEN I am directed to the login page
```
**Additional context**

|
defect
|
forgot password add link to go back to login page is your feature request related to a problem please describe on forgot password page add a link to go back to login page acceptance criteria given i am on the forgot password page and i realize i came to the page by mistake and want to go back to log in page and i see a link to go back to login page when i click on the login link then i am directed to the login page additional context
| 1
|
158,322
| 13,728,607,954
|
IssuesEvent
|
2020-10-04 12:28:11
|
stLmpp/st-store
|
https://api.github.com/repos/stLmpp/st-store
|
opened
|
Add documentation
|
documentation
|
> Store
- [ ] EntityStore
- [ ] EntityQuery
- [ ] Store
- [ ] Query
- [ ] StMap
- [ ] RxJS Operators
- [ ] StStoreModule
> Utils
- [ ] Array Helpers
- [ ] DefaultPipe
- [ ] GetDeepPipe
- [ ] GroupByPipe
- [ ] OrderByPipe
- [ ] SumPipe
- [ ] SumByPipe
- [ ] TrackByFactories
> Router
- [ ] RouterQuery
|
1.0
|
Add documentation - > Store
- [ ] EntityStore
- [ ] EntityQuery
- [ ] Store
- [ ] Query
- [ ] StMap
- [ ] RxJS Operators
- [ ] StStoreModule
> Utils
- [ ] Array Helpers
- [ ] DefaultPipe
- [ ] GetDeepPipe
- [ ] GroupByPipe
- [ ] OrderByPipe
- [ ] SumPipe
- [ ] SumByPipe
- [ ] TrackByFactories
> Router
- [ ] RouterQuery
|
non_defect
|
add documentation store entitystore entityquery store query stmap rxjs operators ststoremodule utils array helpers defaultpipe getdeeppipe groupbypipe orderbypipe sumpipe sumbypipe trackbyfactories router routerquery
| 0
|
56,552
| 15,173,011,192
|
IssuesEvent
|
2021-02-13 12:07:40
|
STEllAR-GROUP/hpx
|
https://api.github.com/repos/STEllAR-GROUP/hpx
|
closed
|
Possible uncaught exception causing hpx::parallel::for_loop lockup
|
category: algorithms tag: wontfix type: defect
|
## Expected Behavior
The exception to be reported / caught / propagated
## Actual Behavior
hpx::parallel::for_loop locks up and never returns
## Steps to Reproduce the Problem
throw at this location:
https://github.com/STEllAR-GROUP/hpx/blob/master/libs/algorithms/include/hpx/parallel/util/detail/handle_local_exceptions.hpp#L109
## Specifications
- HPX Version: stable as of today
- Platform (compiler, OS): Windows 10, MSVC 2019 latest
I made an exhaustive log to catch this error, which can be found here:
https://gist.github.com/McKillroy/5f92e41f5c851d28408ca447e7dc8f09
Scroll to the end to see the sequence of the last steps in the task before it locked up.
|
1.0
|
Possible uncaught exception causing hpx::parallel::for_loop lockup - ## Expected Behavior
The exception to be reported / caught / propagated
## Actual Behavior
hpx::parallel::for_loop locks up and never returns
## Steps to Reproduce the Problem
throw at this location:
https://github.com/STEllAR-GROUP/hpx/blob/master/libs/algorithms/include/hpx/parallel/util/detail/handle_local_exceptions.hpp#L109
## Specifications
- HPX Version: stable as of today
- Platform (compiler, OS): Windows 10, MSVC 2019 latest
I made an exhaustive log to catch this error, which can be found here:
https://gist.github.com/McKillroy/5f92e41f5c851d28408ca447e7dc8f09
Scroll to the end to see the sequence of the last steps in the task before it locked up.
|
defect
|
possible uncaught exception causing hpx parallel for loop lockup expected behavior the exception to be reported caught propagated actual behavior hpx parallel for loop locks up and never returns steps to reproduce the problem throw at this location specifications hpx version stable as of today platform compiler os windows msvc latest i made an exhaustive log to catch this error which can be found here scroll to the end to see the sequence of the last steps in the task before it locked up
| 1
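The HPX lockup above is the classic symptom of a worker exception that is swallowed instead of being rethrown at the join point. For contrast, the well-behaved pattern, illustrated generically with Python's `concurrent.futures` rather than HPX's internals, rethrows the task's exception when the caller collects the result:

```python
from concurrent.futures import ThreadPoolExecutor

def worker():
    # Simulates a task body that throws mid-execution.
    raise RuntimeError("task failed")

def run():
    with ThreadPoolExecutor(max_workers=1) as ex:
        fut = ex.submit(worker)
        try:
            fut.result()           # rethrows the worker's exception here
        except RuntimeError as e:
            return f"caught: {e}"  # the caller sees the error instead of hanging
```

The point of the report is that the parallel loop's join side never gets the stored exception, so the caller blocks forever instead of reaching the equivalent of the `except` branch above.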
|
170,124
| 26,905,676,110
|
IssuesEvent
|
2023-02-06 18:52:37
|
webb-tools/webb-experiences
|
https://api.github.com/repos/webb-tools/webb-experiences
|
closed
|
Transfer Component
|
design 🎨
|
## Product Design Goals
- Enable users to easily transfer shielded funds from one registered address to another
## Deliverables
- Create wireframes that represent and organize the user actions for making a transfer to another user
- Design a dedicated transfer component that displays required input boxes, success modal, and notifications
- Create high fidelity prototype of a transfer component and flow
## Designs and Product Flow

**Figure 1.1 -** *Transfer UI flow.*
### User Flow Transfer
1. Login with registered Note Account
2. Select token asset to transfer
1. These token types will only be webb wrapped assets (e.g. webbUSDC, webbETH)
3. Enter an amount
4. Input other registered addresses
1. Validate address is registered within input box
5. Select relayer
1. Same component used on bridge
6. Send transfer
**Note:** The shielded balance available will be informed by the current connected chain. For example, a user makes two deposits into the bridge for USDC, ETH where the destination chain selected was Arbitrum. When the user navigates to the Transfer tab, and is connected to the Arbitrum chain, the available shielded balance to transfer will consist of the webbUSDC, and webbETH previously deposited. However, if they are connected to Optimism and have not deposited anything into the bridge where Optimism is the destination chain the available shielded balance will be 0.
## User Selection Inputs
1. Token type
2. Amount
3. Registered address
4. Relayer selection
## Notifications
- Failed transfer
- Successful transfer
- Invalid recipient address
## Components List
- Transfer UI interface for above mentioned inputs
- Successful / unsuccessful indicator
## Alternative Transfer UI’s
<img src="https://user-images.githubusercontent.com/29983536/191096998-a4fa28bd-51cb-4667-81b2-e35cca12fd6a.png" height=450 />
## Future Feature Considerations
1. Contact / address book for pre-saved registered addresses
2. Notification informing recipient of transferred funds
1. Currently we do not have anything in place that informs the user that they received funds via transfer
# Open Questions
|
1.0
|
Transfer Component - ## Product Design Goals
- Enable users to easily transfer shielded funds from one registered address to another
## Deliverables
- Create wireframes that represent and organize the user actions for making a transfer to another user
- Design a dedicated transfer component that displays required input boxes, success modal, and notifications
- Create high fidelity prototype of a transfer component and flow
## Designs and Product Flow

**Figure 1.1 -** *Transfer UI flow.*
### User Flow Transfer
1. Login with registered Note Account
2. Select token asset to transfer
1. These token types will only be webb wrapped assets (e.g. webbUSDC, webbETH)
3. Enter an amount
4. Input other registered addresses
1. Validate address is registered within input box
5. Select relayer
1. Same component used on bridge
6. Send transfer
**Note:** The shielded balance available will be informed by the current connected chain. For example, a user makes two deposits into the bridge for USDC, ETH where the destination chain selected was Arbitrum. When the user navigates to the Transfer tab, and is connected to the Arbitrum chain, the available shielded balance to transfer will consist of the webbUSDC, and webbETH previously deposited. However, if they are connected to Optimism and have not deposited anything into the bridge where Optimism is the destination chain the available shielded balance will be 0.
## User Selection Inputs
1. Token type
2. Amount
3. Registered address
4. Relayer selection
## Notifications
- Failed transfer
- Successful transfer
- Invalid recipient address
## Components List
- Transfer UI interface for above mentioned inputs
- Successful / unsuccessful indicator
## Alternative Transfer UIs
<img src="https://user-images.githubusercontent.com/29983536/191096998-a4fa28bd-51cb-4667-81b2-e35cca12fd6a.png" height=450 />
## Future Feature Considerations
1. Contact / address book for pre-saved registered addresses
2. Notification informing recipient of transferred funds
1. Currently we do not have anything in place that informs the user that they received funds via transfer
# Open Questions
|
non_defect
|
transfer component product design goals enable users to easily transfer shielded funds from one registered address to another deliverables create wireframes that represent and organize the user actions for making a transfer to another user design a dedicated transfer component that displays required input boxes success modal and notifications create high fidelity prototype of a transfer component and flow designs and product flow figure transfer ui flow user flow transfer login with registered note account select token asset to transfer these token types will only be webb wrapped assets e g webbusdc webbeth enter an amount input other registered addresses validate address is registered within input box select relayer same component used on bridge send transfer note the shielded balance available will be informed by the current connected chain for example a user makes two deposits into the bridge for usdc eth where the destination chain selected was arbitrum when the user navigates to the transfer tab and is connected to the arbitrum chain the available shielded balance to transfer will consist of the webbusdc and webbeth previously deposited however if they are connected to optimism and have not deposited anything into the bridge where optimism is the destination chain the available shielded balance will be user selection inputs token type amount registered address relayer selection notifications failed transfer successful transfer invalid recipient address components list transfer ui interface for above mentioned inputs successful unsuccessful indicator alternative transfer ui’s future feature considerations contact address book for pre saved registered addresses notification informing recipient of transferred funds currently we do not have anything in place that informs the user that they received funds via transfer open questions
| 0
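The validation in step 4 of the transfer flow above ("Validate address is registered within input box") can be sketched as follows. This is a minimal illustration with hypothetical helper names and an in-memory registry; the actual bridge registry lookup is not shown in the source.

```python
import re


def is_valid_format(address: str) -> bool:
    """Check the basic shape of an EVM-style address: 0x followed by 40 hex chars."""
    return re.fullmatch(r"0x[0-9a-fA-F]{40}", address) is not None


def is_registered(address: str, registry: set) -> bool:
    """A transfer recipient must both look like an address and be registered.

    Checking the format first avoids a registry lookup for obviously bad input,
    which is what an input-box validator would do before enabling "Send transfer".
    """
    return is_valid_format(address) and address.lower() in registry


# Hypothetical registry of note-account addresses, stored lowercased.
registry = {"0x" + "ab" * 20}
```

Checking format before membership mirrors the UI behavior described: the input box can flag an invalid recipient address immediately, while registration is a separate lookup.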
|
79,753
| 28,807,890,633
|
IssuesEvent
|
2023-05-03 00:19:34
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
closed
|
[🐛 Bug]: Selenium Manager does not get the version from Chrome Beta binary
|
I-defect C-rust
|
### What happened?
Was trying to do a demo showing how Selenium Manager gets the correct ChromeDriver for Chrome beta, but it seems it was not able to parse the version returned by Chrome beta. Check below.
### How can we reproduce the issue?
```shell
> /Applications/Google\ Chrome\ Beta.app/Contents/MacOS/Google\ Chrome\ Beta --version
Google Chrome 113.0.5672.63 beta
```
### Relevant log output
```shell
./common/manager/macos/selenium-manager --browser chrome --browser-path "/Applications/Google Chrome Beta.app/Contents/MacOS/Google Chrome Beta" --output json --debug
{
"logs": [
{
"level": "DEBUG",
"timestamp": 1683058591,
"message": "Using shell command to find out chrome version"
},
{
"level": "DEBUG",
"timestamp": 1683058591,
"message": "Running command: \"/Applications/Google Chrome Beta.app/Contents/MacOS/Google Chrome Beta --version\""
},
{
"level": "DEBUG",
"timestamp": 1683058591,
"message": "Output: \"\""
},
{
"level": "DEBUG",
"timestamp": 1683058591,
"message": "The version of chrome cannot be detected. Trying with latest driver version"
},
{
"level": "DEBUG",
"timestamp": 1683058591,
"message": "Reading chromedriver version from https://chromedriver.storage.googleapis.com/LATEST_RELEASE"
},
{
"level": "DEBUG",
"timestamp": 1683058592,
"message": "Required driver: chromedriver 112.0.5615.49"
},
{
"level": "DEBUG",
"timestamp": 1683058592,
"message": "Running command: \"chromedriver --version\""
},
{
"level": "DEBUG",
"timestamp": 1683058592,
"message": "Output: \"\""
},
{
"level": "DEBUG",
"timestamp": 1683058592,
"message": "chromedriver 112.0.5615.49 already in the cache"
},
{
"level": "INFO",
"timestamp": 1683058592,
"message": "/Users/diegomolina/.cache/selenium/chromedriver/mac-arm64/112.0.5615.49/chromedriver"
}
],
"result": {
"code": 0,
"message": "/Users/diegomolina/.cache/selenium/chromedriver/mac-arm64/112.0.5615.49/chromedriver"
}
}
```
### Operating System
macOS (most likely all of them)
### Selenium version
Java 4.9.0
### What are the browser(s) and version(s) where you see this issue?
Chrome beta
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver
### Are you using Selenium Grid?
No
|
1.0
|
[🐛 Bug]: Selenium Manager does not get the version from Chrome Beta binary - ### What happened?
Was trying to do a demo showing how Selenium Manager gets the correct ChromeDriver for Chrome beta, but it seems it was not able to parse the version returned by Chrome beta. Check below.
### How can we reproduce the issue?
```shell
> /Applications/Google\ Chrome\ Beta.app/Contents/MacOS/Google\ Chrome\ Beta --version
Google Chrome 113.0.5672.63 beta
```
### Relevant log output
```shell
./common/manager/macos/selenium-manager --browser chrome --browser-path "/Applications/Google Chrome Beta.app/Contents/MacOS/Google Chrome Beta" --output json --debug
{
"logs": [
{
"level": "DEBUG",
"timestamp": 1683058591,
"message": "Using shell command to find out chrome version"
},
{
"level": "DEBUG",
"timestamp": 1683058591,
"message": "Running command: \"/Applications/Google Chrome Beta.app/Contents/MacOS/Google Chrome Beta --version\""
},
{
"level": "DEBUG",
"timestamp": 1683058591,
"message": "Output: \"\""
},
{
"level": "DEBUG",
"timestamp": 1683058591,
"message": "The version of chrome cannot be detected. Trying with latest driver version"
},
{
"level": "DEBUG",
"timestamp": 1683058591,
"message": "Reading chromedriver version from https://chromedriver.storage.googleapis.com/LATEST_RELEASE"
},
{
"level": "DEBUG",
"timestamp": 1683058592,
"message": "Required driver: chromedriver 112.0.5615.49"
},
{
"level": "DEBUG",
"timestamp": 1683058592,
"message": "Running command: \"chromedriver --version\""
},
{
"level": "DEBUG",
"timestamp": 1683058592,
"message": "Output: \"\""
},
{
"level": "DEBUG",
"timestamp": 1683058592,
"message": "chromedriver 112.0.5615.49 already in the cache"
},
{
"level": "INFO",
"timestamp": 1683058592,
"message": "/Users/diegomolina/.cache/selenium/chromedriver/mac-arm64/112.0.5615.49/chromedriver"
}
],
"result": {
"code": 0,
"message": "/Users/diegomolina/.cache/selenium/chromedriver/mac-arm64/112.0.5615.49/chromedriver"
}
}
```
### Operating System
macOS (most likely all of them)
### Selenium version
Java 4.9.0
### What are the browser(s) and version(s) where you see this issue?
Chrome beta
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver
### Are you using Selenium Grid?
No
|
defect
|
selenium manager does not get the version from chrome beta binary what happened was trying to do a demo showing how selenium manager gets the correct chromedriver for chrome beta but it seems it was not able to parse the version returned by chrome beta check below how can we reproduce the issue shell applications google chrome beta app contents macos google chrome beta version google chrome beta relevant log output shell common manager macos selenium manager browser chrome browser path applications google chrome beta app contents macos google chrome beta output json debug logs level debug timestamp message using shell command to find out chrome version level debug timestamp message running command applications google chrome beta app contents macos google chrome beta version level debug timestamp message output level debug timestamp message the version of chrome cannot be detected trying with latest driver version level debug timestamp message reading chromedriver version from level debug timestamp message required driver chromedriver level debug timestamp message running command chromedriver version level debug timestamp message output level debug timestamp message chromedriver already in the cache level info timestamp message users diegomolina cache selenium chromedriver mac chromedriver result code message users diegomolina cache selenium chromedriver mac chromedriver operating system macos most likely all of them selenium version java what are the browser s and version s where you see this issue chrome beta what are the browser driver s and version s where you see this issue chromedriver are you using selenium grid no
| 1
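The root cause in the Selenium Manager record above is that the version string printed by Chrome Beta (`Google Chrome 113.0.5672.63 beta`) was not parsed, so the manager fell back to the latest stable driver. A minimal sketch of tolerant version parsing (Python, not the manager's actual Rust implementation):

```python
import re
from typing import Optional


def parse_browser_version(output: str) -> Optional[str]:
    """Extract the dotted version number from a browser --version string.

    Works for stable and pre-release channels alike, e.g.
    'Google Chrome 113.0.5672.63 beta' -> '113.0.5672.63'.
    Returns None when no version-like token is present.
    """
    match = re.search(r"(\d+(?:\.\d+)+)", output)
    return match.group(1) if match else None
```

Matching only the numeric token means trailing channel labels like `beta` or `dev` cannot break detection.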
|
46,731
| 11,880,330,515
|
IssuesEvent
|
2020-03-27 10:29:01
|
BatchDrake/SigDigger
|
https://api.github.com/repos/BatchDrake/SigDigger
|
closed
|
compile error in macOS
|
build-issue
|
There's a new mismatch between two operands in the recent changes in develop:
`CarrierDetector.cpp` line 133:
`acc += psd * SU_C_EXP(I * M_PI * nFreq);`
`I` is std::complex. Changing it to something like
`acc += psd * SU_C_EXP(SU_C_REAL(I) * M_PI * nFreq);`
fixes it.
|
1.0
|
compile error in macOS - There's a new mismatch between two operands in the recent changes in develop:
`CarrierDetector.cpp` line 133:
`acc += psd * SU_C_EXP(I * M_PI * nFreq);`
`I` is std::complex. Changing it to something like
`acc += psd * SU_C_EXP(SU_C_REAL(I) * M_PI * nFreq);`
fixes it.
|
non_defect
|
compile error in macos there s a new mismatch between two operands in the recent changes in develop carrierdetector cpp line acc psd su c exp i m pi nfreq i is std complex changing it to something like acc psd su c exp su c real i m pi nfreq fixes
| 0
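The SigDigger fix above concerns the C expression `SU_C_EXP(I * M_PI * nFreq)`, where the complex unit `I` being `std::complex` collides with an operand expected to be real. The intended quantity, e^(iπ·f), can be checked numerically with a Python analogue (illustration only, not the project's code):

```python
import cmath
import math


def phase_factor(n_freq: float) -> complex:
    """Python analogue of SU_C_EXP(I * M_PI * nFreq): e^(i*pi*f).

    1j plays the role of the complex unit I; cmath.exp handles the
    complex exponential that the C macro computes.
    """
    return cmath.exp(1j * math.pi * n_freq)
```

Known values make the sanity check easy: e^(iπ) = -1, e^0 = 1, and e^(iπ/2) = i.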
|
26,707
| 4,777,616,034
|
IssuesEvent
|
2016-10-27 16:47:20
|
wheeler-microfluidics/microdrop
|
https://api.github.com/repos/wheeler-microfluidics/microdrop
|
closed
|
Maximum recursion limit exceeded when running protocol (Trac #38)
|
defect microdrop Migrated from Trac
|
Running a protocol with 16 steps, repeated 100 times results in the following:
RuntimeError: maximum recursion depth exceeded
Migrated from http://microfluidics.utoronto.ca/ticket/38
```json
{
"status": "closed",
"changetime": "2014-04-17T19:39:01",
"description": "Running a protocol with 16 steps, repeated 100 times results the following:\n\nRuntimeError: maximum recursion depth exceeded",
"reporter": "ryan",
"cc": "",
"resolution": "fixed",
"_ts": "1397763541728826",
"component": "microdrop",
"summary": "Maximum recursion limit exceeded when running protocol",
"priority": "major",
"keywords": "",
"version": "0.1",
"time": "2012-01-24T20:04:35",
"milestone": "Microdrop 1.0",
"owner": "ryan",
"type": "defect"
}
```
|
1.0
|
Maximum recursion limit exceeded when running protocol (Trac #38) - Running a protocol with 16 steps, repeated 100 times results in the following:
RuntimeError: maximum recursion depth exceeded
Migrated from http://microfluidics.utoronto.ca/ticket/38
```json
{
"status": "closed",
"changetime": "2014-04-17T19:39:01",
"description": "Running a protocol with 16 steps, repeated 100 times results the following:\n\nRuntimeError: maximum recursion depth exceeded",
"reporter": "ryan",
"cc": "",
"resolution": "fixed",
"_ts": "1397763541728826",
"component": "microdrop",
"summary": "Maximum recursion limit exceeded when running protocol",
"priority": "major",
"keywords": "",
"version": "0.1",
"time": "2012-01-24T20:04:35",
"milestone": "Microdrop 1.0",
"owner": "ryan",
"type": "defect"
}
```
|
defect
|
maximum recursion limit exceeded when running protocol trac running a protocol with steps repeated times results the following runtimeerror maximum recursion depth exceeded migrated from json status closed changetime description running a protocol with steps repeated times results the following n nruntimeerror maximum recursion depth exceeded reporter ryan cc resolution fixed ts component microdrop summary maximum recursion limit exceeded when running protocol priority major keywords version time milestone microdrop owner ryan type defect
| 1
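A "maximum recursion depth exceeded" error like the Microdrop one above typically means each protocol step consumed a stack frame. The usual remedy is an iterative rewrite rather than raising the interpreter limit; a generic sketch, unrelated to Microdrop's internals:

```python
def run_steps_recursive(step: int, total: int) -> int:
    # One stack frame per step: fails with RecursionError for large totals
    # (CPython's default limit is around 1000 frames).
    if step >= total:
        return step
    return run_steps_recursive(step + 1, total)


def run_steps_iterative(total: int) -> int:
    # Same traversal with constant stack depth, so 16 steps x 100 repeats
    # (or far more) cannot exhaust the stack.
    step = 0
    while step < total:
        step += 1
    return step
```

The iterative form handles totals far beyond the recursion limit, which is why schedulers and protocol runners generally loop instead of self-calling per step.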
|
49,415
| 7,498,643,945
|
IssuesEvent
|
2018-04-09 06:01:07
|
raquo/scala-dom-types
|
https://api.github.com/repos/raquo/scala-dom-types
|
opened
|
Docs: Add MDN docs to SVG attributes
|
documentation
|
Copy over MDN docs from ScalaTags to SVG attributes that don't have them, similar to other attributes.
@doofin please let me know if you will be working on this. If not, I will probably release v0.6 without this. I got SDB and Laminar parts ready.
|
1.0
|
Docs: Add MDN docs to SVG attributes - Copy over MDN docs from ScalaTags to SVG attributes that don't have them, similar to other attributes.
@doofin please let me know if you will be working on this. If not, I will probably release v0.6 without this. I got SDB and Laminar parts ready.
|
non_defect
|
docs add mdn docs to svg attributes copy over mdn docs from scalatags to svg attributes that don t have them similar to other attributes doofin please let me know if you will be working on this if not i will probably release without this i got sdb and laminar parts ready
| 0
|
58,616
| 16,653,358,978
|
IssuesEvent
|
2021-06-05 04:01:40
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Can't p2p 1:1 call if turnserver is disabled
|
A-VoIP T-Defect X-Needs-Info
|
Not sure if this is intended or not, but with the
```
Allow Peer-to-Peer for 1:1 calls (if you enable this, the other party might be able to see your IP address)
```
option enabled, it still requires the server's turnserver to be enabled. I thought that if both parties have this enabled, it would be a p2p e2ee call?
Element version: 1.7.29
olm version: 3.2.3
|
1.0
|
Can't p2p 1:1 call if turnserver is disabled - Not sure if this is intended or not, but with the
```
Allow Peer-to-Peer for 1:1 calls (if you enable this, the other party might be able to see your IP address)
```
option enabled, it still requires the server's turnserver to be enabled. I thought that if both parties have this enabled, it would be a p2p e2ee call?
Element version: 1.7.29
olm version: 3.2.3
|
defect
|
can t call if turnserver is disabled not sure if this is intended or not but with the allow peer to peer for calls if you enable this the other party might be able to see your ip address option enabled it still requires the server s turnserver to be enabled i thought that if both parties have this enabled it would be a call element version olm version
| 1
|
7,040
| 2,610,323,493
|
IssuesEvent
|
2015-02-26 19:44:17
|
chrsmith/republic-at-war
|
https://api.github.com/repos/chrsmith/republic-at-war
|
closed
|
Text
|
auto-migrated Priority-Medium Type-Defect
|
```
ARC Gunship
Its "Deploy ARC" Ability shows MISSING both for name and description!
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 14 May 2011 at 2:07
|
1.0
|
Text - ```
ARC Gunship
Its "Deploy ARC" Ability shows MISSING both for name and description!
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 14 May 2011 at 2:07
|
defect
|
text arc gunship its deploy arc ability shows missing both for name and description original issue reported on code google com by gmail com on may at
| 1
|
333,605
| 29,796,821,284
|
IssuesEvent
|
2023-06-16 03:39:31
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
reopened
|
DISABLED test_operator_linalg_lu_factor_cuda_float32 (__main__.TestCompositeComplianceCUDA)
|
triaged module: flaky-tests skipped module: unknown
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_operator_linalg_lu_factor_cuda_float32&suite=TestCompositeComplianceCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10425574725).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 3 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT BE ALARMED IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_operator_linalg_lu_factor_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
|
1.0
|
DISABLED test_operator_linalg_lu_factor_cuda_float32 (__main__.TestCompositeComplianceCUDA) - Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_operator_linalg_lu_factor_cuda_float32&suite=TestCompositeComplianceCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10425574725).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 3 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT BE ALARMED IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_operator_linalg_lu_factor_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
|
non_defect
|
disabled test operator linalg lu factor cuda main testcompositecompliancecuda platforms linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with failures and successes debugging instructions after clicking on the recent samples link do not be alarmed if the ci is green we now shield flaky tests from developers so ci will thus be green but it will be harder to parse the logs to find relevant log snippets click on the workflow logs linked above click on the test step of the job so that it is expanded otherwise the grepping will not work grep for test operator linalg lu factor cuda there should be several instances run as flaky tests are rerun in ci from which you can study the logs
| 0
|
68,851
| 21,927,447,922
|
IssuesEvent
|
2022-05-23 06:31:08
|
FreeRADIUS/freeradius-server
|
https://api.github.com/repos/FreeRADIUS/freeradius-server
|
opened
|
[defect]: Segfault / "talloc abort: Bad talloc magic value - unknown value
|
defect
|
### What type of defect/bug is this?
Crash or memory corruption (segv, abort, etc...)
### How can the issue be reproduced?
This error occurs when using Windows 10 as a client while establishing an EAP-TLS wireless connection with wrong ECC certificates. We try to connect over wireless, and after the 4th attempt the radius crash occurs.
The error does not occur (we have not observed it yet) when decreasing max_response_time from the default 30 to 15 in the radiusd.conf file. If we set 20, we still hit the issue, but it then takes 2 hours to crash FreeRADIUS.
### Log output from the FreeRADIUS daemon
```shell
Fri May 20 16:40:22 2022 : Debug: (TLS) Ignoring cbtls_msg call with pseudo content type 256, version 0
Fri May 20 16:40:22 2022 : Debug: (10) eap_tls: (TLS) send TLS 1.2 Handshake, ServerHelloDone
Fri May 20 16:40:22 2022 : Debug: (10) eap_tls: (TLS) Handshake state [TWSD] - Server SSLv3/TLS write server done (26)
Fri May 20 16:40:22 2022 : Debug: (10) eap_tls: (TLS) Server : Need to read more data: SSLv3/TLS write server done
Fri May 20 16:40:22 2022 : Debug: (10) eap_tls: (TLS) In Handshake Phase
Fri May 20 16:40:22 2022 : Debug: (10) eap_tls: (TLS) got 902 bytes of data
Fri May 20 16:40:22 2022 : Debug: (10) eap_tls: [eaptls process] = handled
Fri May 20 16:40:22 2022 : Debug: (10) eap: Sending EAP Request (code 1) ID 62 length 912
Fri May 20 16:40:22 2022 : Debug: (10) eap: EAP session adding &reply:State = 0xdfa9addfde97a014
Fri May 20 16:40:22 2022 : Debug: (10) modsingle[authenticate]: returned from eap (rlm_eap)
Fri May 20 16:40:22 2022 : Debug: (10) [eap] = handled
Fri May 20 16:40:22 2022 : Debug: (10) } # authenticate = handled
Fri May 20 16:40:22 2022 : Debug: (10) Using Post-Auth-Type Challenge
Fri May 20 16:40:22 2022 : Debug: (10) # Executing group from file /opt/freeradius/etc/raddb/sites-enabled/default
Fri May 20 16:40:22 2022 : Debug: (10) Challenge { ... } # empty sub-section is ignored
Fri May 20 16:40:22 2022 : Debug: (10) session-state: Saving cached attributes
Fri May 20 16:40:22 2022 : Debug: (10) Framed-MTU = 1014
Fri May 20 16:40:22 2022 : Debug: (10) TLS-Session-Information = "(TLS) recv TLS 1.3 Handshake, ClientHello\n"
Fri May 20 16:40:22 2022 : Debug: (10) TLS-Session-Information = "(TLS) send TLS 1.2 Handshake, ServerHello\n"
Fri May 20 16:40:22 2022 : Debug: (10) TLS-Session-Information = "(TLS) send TLS 1.2 Handshake, Certificate\n"
Fri May 20 16:40:22 2022 : Debug: (10) TLS-Session-Information = "(TLS) send TLS 1.2 Handshake, ServerKeyExchange\n"
Fri May 20 16:40:22 2022 : Debug: (10) TLS-Session-Information = "(TLS) send TLS 1.2 Handshake, CertificateRequest\n"
Fri May 20 16:40:22 2022 : Debug: (10) TLS-Session-Information = "(TLS) send TLS 1.2 Handshake, ServerHelloDone\n"
Bad talloc magic value - unknown value
talloc abort: Bad talloc magic value - unknown value
```
### Relevant log output from client utilities
_No response_
### Backtrace from LLDB or GDB
```shell
Starting program: radiusd -XX
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Program received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007ffff6f5a859 in __GI_abort () at abort.c:79
#2 0x00007ffff7b5aaaa in _fr_talloc_fault_simple (reason=<optimized out>) at src/lib/debug.c:848
#3 0x00007ffff71a3cbf in talloc_abort () from /usr/lib/x86_64-linux-gnu/libtalloc.so.2
#4 0x00007ffff71a3ce5 in talloc_abort_unknown_value () from /usr/lib/x86_64-linux-gnu/libtalloc.so.2
#5 0x00007ffff71a3d63 in talloc_chunk_from_ptr () from /usr/lib/x86_64-linux-gnu/libtalloc.so.2
#6 0x00007ffff71a617f in _talloc_free () from /usr/lib/x86_64-linux-gnu/libtalloc.so.2
#7 0x000055555542e99a in state_entry_free (entry=<optimized out>, state=0x555555677940 <global_state>)
at src/main/state.c:168
#8 0x000055555542ef76 in state_entry_free (entry=0x5555560d0f30, state=0x555555677940 <global_state>)
at src/main/state.c:155
#9 fr_state_cleanup_find (state=0x555555677940 <global_state>) at src/main/state.c:310
#10 fr_state_put_vps (request=0x55555605f980, original=0x55555605f6a0, packet=0x55555605fb30) at src/main/state.c:671
#11 0x00005555554159ed in rad_postauth (request=request@entry=0x55555605f980) at src/main/auth.c:373
#12 0x000055555543b820 in request_finish (request=0x55555605f980, action=1) at src/main/process.c:1425
#13 0x0000555555438890 in request_queue_or_run (request=request@entry=0x55555605f980,
process=process@entry=0x55555543c0b0 <request_running>) at src/main/process.c:1106
#14 0x000055555543ac97 in request_receive (ctx=ctx@entry=0x55555605f640, listener=listener@entry=0x555555b59850,
packet=<optimized out>, client=client@entry=0x55555607a790, fun=fun@entry=0x555555415a40 <rad_authenticate>)
at src/main/process.c:1930
#15 0x0000555555421cbe in auth_socket_recv (listener=0x555555b59850) at src/main/listen.c:1637
#16 0x0000555555435bee in event_socket_handler (xel=<optimized out>, fd=<optimized out>, ctx=<optimized out>)
at src/main/process.c:4971
#17 0x00007ffff7b77f9f in fr_event_loop (el=0x555555934660) at src/lib/event.c:649
```
|
1.0
|
[defect]: Segfault / "talloc abort: Bad talloc magic value - unknown value - ### What type of defect/bug is this?
Crash or memory corruption (segv, abort, etc...)
### How can the issue be reproduced?
This error occurs when using Windows 10 as a client while establishing an EAP-TLS wireless connection with wrong ECC certificates. We try to connect over wireless, and after the 4th attempt the radius crash occurs.
The error does not occur (we have not observed it yet) when decreasing max_response_time from the default 30 to 15 in the radiusd.conf file. If we set 20, we still hit the issue, but it then takes 2 hours to crash FreeRADIUS.
### Log output from the FreeRADIUS daemon
```shell
Fri May 20 16:40:22 2022 : Debug: (TLS) Ignoring cbtls_msg call with pseudo content type 256, version 0
Fri May 20 16:40:22 2022 : Debug: (10) eap_tls: (TLS) send TLS 1.2 Handshake, ServerHelloDone
Fri May 20 16:40:22 2022 : Debug: (10) eap_tls: (TLS) Handshake state [TWSD] - Server SSLv3/TLS write server done (26)
Fri May 20 16:40:22 2022 : Debug: (10) eap_tls: (TLS) Server : Need to read more data: SSLv3/TLS write server done
Fri May 20 16:40:22 2022 : Debug: (10) eap_tls: (TLS) In Handshake Phase
Fri May 20 16:40:22 2022 : Debug: (10) eap_tls: (TLS) got 902 bytes of data
Fri May 20 16:40:22 2022 : Debug: (10) eap_tls: [eaptls process] = handled
Fri May 20 16:40:22 2022 : Debug: (10) eap: Sending EAP Request (code 1) ID 62 length 912
Fri May 20 16:40:22 2022 : Debug: (10) eap: EAP session adding &reply:State = 0xdfa9addfde97a014
Fri May 20 16:40:22 2022 : Debug: (10) modsingle[authenticate]: returned from eap (rlm_eap)
Fri May 20 16:40:22 2022 : Debug: (10) [eap] = handled
Fri May 20 16:40:22 2022 : Debug: (10) } # authenticate = handled
Fri May 20 16:40:22 2022 : Debug: (10) Using Post-Auth-Type Challenge
Fri May 20 16:40:22 2022 : Debug: (10) # Executing group from file /opt/freeradius/etc/raddb/sites-enabled/default
Fri May 20 16:40:22 2022 : Debug: (10) Challenge { ... } # empty sub-section is ignored
Fri May 20 16:40:22 2022 : Debug: (10) session-state: Saving cached attributes
Fri May 20 16:40:22 2022 : Debug: (10) Framed-MTU = 1014
Fri May 20 16:40:22 2022 : Debug: (10) TLS-Session-Information = "(TLS) recv TLS 1.3 Handshake, ClientHello\n"
Fri May 20 16:40:22 2022 : Debug: (10) TLS-Session-Information = "(TLS) send TLS 1.2 Handshake, ServerHello\n"
Fri May 20 16:40:22 2022 : Debug: (10) TLS-Session-Information = "(TLS) send TLS 1.2 Handshake, Certificate\n"
Fri May 20 16:40:22 2022 : Debug: (10) TLS-Session-Information = "(TLS) send TLS 1.2 Handshake, ServerKeyExchange\n"
Fri May 20 16:40:22 2022 : Debug: (10) TLS-Session-Information = "(TLS) send TLS 1.2 Handshake, CertificateRequest\n"
Fri May 20 16:40:22 2022 : Debug: (10) TLS-Session-Information = "(TLS) send TLS 1.2 Handshake, ServerHelloDone\n"
Bad talloc magic value - unknown value
talloc abort: Bad talloc magic value - unknown value
```
### Relevant log output from client utilities
_No response_
### Backtrace from LLDB or GDB
```shell
Starting program: radiusd -XX
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Program received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007ffff6f5a859 in __GI_abort () at abort.c:79
#2 0x00007ffff7b5aaaa in _fr_talloc_fault_simple (reason=<optimized out>) at src/lib/debug.c:848
#3 0x00007ffff71a3cbf in talloc_abort () from /usr/lib/x86_64-linux-gnu/libtalloc.so.2
#4 0x00007ffff71a3ce5 in talloc_abort_unknown_value () from /usr/lib/x86_64-linux-gnu/libtalloc.so.2
#5 0x00007ffff71a3d63 in talloc_chunk_from_ptr () from /usr/lib/x86_64-linux-gnu/libtalloc.so.2
#6 0x00007ffff71a617f in _talloc_free () from /usr/lib/x86_64-linux-gnu/libtalloc.so.2
#7 0x000055555542e99a in state_entry_free (entry=<optimized out>, state=0x555555677940 <global_state>)
at src/main/state.c:168
#8 0x000055555542ef76 in state_entry_free (entry=0x5555560d0f30, state=0x555555677940 <global_state>)
at src/main/state.c:155
#9 fr_state_cleanup_find (state=0x555555677940 <global_state>) at src/main/state.c:310
#10 fr_state_put_vps (request=0x55555605f980, original=0x55555605f6a0, packet=0x55555605fb30) at src/main/state.c:671
#11 0x00005555554159ed in rad_postauth (request=request@entry=0x55555605f980) at src/main/auth.c:373
#12 0x000055555543b820 in request_finish (request=0x55555605f980, action=1) at src/main/process.c:1425
#13 0x0000555555438890 in request_queue_or_run (request=request@entry=0x55555605f980,
process=process@entry=0x55555543c0b0 <request_running>) at src/main/process.c:1106
#14 0x000055555543ac97 in request_receive (ctx=ctx@entry=0x55555605f640, listener=listener@entry=0x555555b59850,
packet=<optimized out>, client=client@entry=0x55555607a790, fun=fun@entry=0x555555415a40 <rad_authenticate>)
at src/main/process.c:1930
#15 0x0000555555421cbe in auth_socket_recv (listener=0x555555b59850) at src/main/listen.c:1637
#16 0x0000555555435bee in event_socket_handler (xel=<optimized out>, fd=<optimized out>, ctx=<optimized out>)
at src/main/process.c:4971
#17 0x00007ffff7b77f9f in fr_event_loop (el=0x555555934660) at src/lib/event.c:649
```
|
defect
|
segfault talloc abort bad talloc magic value unknown value what type of defect bug is this crash or memory corruption segv abort etc how can the issue be reproduced this error is occurred when using windows as a client while establishing eap tls wireless connection with wrong ecc certificates we are trying to connect from wireless and after try radius crash is occured error is not occurred we did not observed yet when decreasing the max response time from default to in radiusd conf file if we set we still come across the issue but it takes hours to crash freeradius log output from the freeradius daemon shell fri may debug tls ignoring cbtls msg call with pseudo content type version fri may debug eap tls tls send tls handshake serverhellodone fri may debug eap tls tls handshake state server tls write server done fri may debug eap tls tls server need to read more data tls write server done fri may debug eap tls tls in handshake phase fri may debug eap tls tls got bytes of data fri may debug eap tls handled fri may debug eap sending eap request code id length fri may debug eap eap session adding reply state fri may debug modsingle returned from eap rlm eap fri may debug handled fri may debug authenticate handled fri may debug using post auth type challenge fri may debug executing group from file opt freeradius etc raddb sites enabled default fri may debug challenge empty sub section is ignored fri may debug session state saving cached attributes fri may debug framed mtu fri may debug tls session information tls recv tls handshake clienthello n fri may debug tls session information tls send tls handshake serverhello n fri may debug tls session information tls send tls handshake certificate n fri may debug tls session information tls send tls handshake serverkeyexchange n fri may debug tls session information tls send tls handshake certificaterequest n fri may debug tls session information tls send tls handshake serverhellodone n bad talloc magic value unknown value 
talloc abort bad talloc magic value unknown value relevant log output from client utilities no response backtrace from lldb or gdb shell starting program radiusd xx using host libthread db library lib linux gnu libthread db so program received signal sigabrt aborted gi raise sig sig entry at sysdeps unix sysv linux raise c sysdeps unix sysv linux raise c no such file or directory gi raise sig sig entry at sysdeps unix sysv linux raise c in gi abort at abort c in fr talloc fault simple reason at src lib debug c in talloc abort from usr lib linux gnu libtalloc so in talloc abort unknown value from usr lib linux gnu libtalloc so in talloc chunk from ptr from usr lib linux gnu libtalloc so in talloc free from usr lib linux gnu libtalloc so in state entry free entry state at src main state c in state entry free entry state at src main state c fr state cleanup find state at src main state c fr state put vps request original packet at src main state c in rad postauth request request entry at src main auth c in request finish request action at src main process c in request queue or run request request entry process process entry at src main process c in request receive ctx ctx entry listener listener entry packet client client entry fun fun entry at src main process c in auth socket recv listener at src main listen c in event socket handler xel fd ctx at src main process c in fr event loop el at src lib event c
| 1
|
265,655
| 8,357,370,144
|
IssuesEvent
|
2018-10-02 21:20:56
|
bluek8s/kubedirector
|
https://api.github.com/repos/bluek8s/kubedirector
|
opened
|
clean up metrics service on undeploy
|
Priority: Low Project: KD Lifecycle Type: Enhancement
|
The Operator SDK code creates a metrics service when KubeDirector starts up. We should remove that service on teardown.
Ideally we wouldn't leave that up to the Makefile (since the Makefile didn't create it). For example we could set an owner reference on the service so that it will be garbage-collected by K8s when the KubeDirector deployment is deleted.
|
1.0
|
clean up metrics service on undeploy - The Operator SDK code creates a metrics service when KubeDirector starts up. We should remove that service on teardown.
Ideally we wouldn't leave that up to the Makefile (since the Makefile didn't create it). For example we could set an owner reference on the service so that it will be garbage-collected by K8s when the KubeDirector deployment is deleted.
|
non_defect
|
clean up metrics service on undeploy the operator sdk code creates a metrics service when kubedirector starts up we should remove that service on teardown ideally we wouldn t leave that up to the makefile since the makefile didn t create it for example we could set an owner reference on the service so that it will be garbage collected by when the kubedirector deployment is deleted
| 0
|
62,897
| 17,243,677,602
|
IssuesEvent
|
2021-07-21 04:50:31
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
NoClassDefFoundError thrown when doing a date add functionality in REDSHIFT
|
T: Defect
|
### Expected behavior
It should add the number/column to the date
### Actual behavior
Throwing java.lang.NoClassDefFoundError: com/amazon/redshift/util/RedshiftInterval
Stack trace:-
### Steps to reproduce the problem
```java
String sqlQuery = "select cast(\"users\".\"created\" as date) \"Created\", \"users\".\"frequency\" \"Frequency\", " +
"coalesce((\"users\".\"created\" + (\"users\".\"frequency\" - 1E0) * interval '1 day'), '1970-01-01 00:00:00') \"Date Add\" from \"users\" \"users\"";
ResultQuery jooqQuery = DSL.using(dslContext.configuration()).parser().parseResultQuery(sqlQuery);
```
**Result**:
```
Caused by: java.lang.NoClassDefFoundError: com/amazon/redshift/util/RedshiftInterval
at org.jooq.util.postgres.PostgresUtils.toRedshiftInterval(PostgresUtils.java:270)
at org.jooq.impl.DefaultBinding$DefaultYearToSecondBinding.sqlInline0(DefaultBinding.java:4658)
at org.jooq.impl.DefaultBinding$DefaultYearToSecondBinding.sqlInline0(DefaultBinding.java:4643)
at org.jooq.impl.DefaultBinding$AbstractBinding.sql(DefaultBinding.java:871)
at org.jooq.impl.DefaultBinding$AbstractBinding.sqlCast(DefaultBinding.java:836)
at org.jooq.impl.DefaultBinding$AbstractBinding.sqlCast(DefaultBinding.java:806)
at org.jooq.impl.DefaultBinding$AbstractBinding.sql(DefaultBinding.java:859)
at org.jooq.impl.Val.accept(Val.java:174)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.Expression$DefaultExpression.accept1(Expression.java:1029)
at org.jooq.impl.Expression$DefaultExpression.accept0(Expression.java:1016)
at org.jooq.impl.Expression$DefaultExpression.accept(Expression.java:999)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.Expression.accept0(Expression.java:298)
at org.jooq.impl.AbstractTransformable.accept(AbstractTransformable.java:70)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.Expression$DefaultExpression.accept1(Expression.java:1029)
at org.jooq.impl.Expression$DefaultExpression.accept0(Expression.java:1016)
at org.jooq.impl.Expression$DefaultExpression.accept(Expression.java:999)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.Expression.accept0(Expression.java:298)
at org.jooq.impl.AbstractTransformable.accept(AbstractTransformable.java:70)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.QueryPartCollectionView.acceptElement(QueryPartCollectionView.java:221)
at org.jooq.impl.QueryPartCollectionView.accept(QueryPartCollectionView.java:199)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.Function.accept(Function.java:74)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.Coalesce.accept(Coalesce.java:78)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:245)
at org.jooq.impl.Alias.toSQLWrapped(Alias.java:360)
at org.jooq.impl.Alias.acceptDeclareAliasStandard(Alias.java:284)
at org.jooq.impl.Alias.accept(Alias.java:175)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.FieldAlias.accept(FieldAlias.java:61)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.QueryPartCollectionView.acceptElement(QueryPartCollectionView.java:221)
at org.jooq.impl.QueryPartCollectionView.accept(QueryPartCollectionView.java:199)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.SelectQueryImpl.toSQLReference0(SelectQueryImpl.java:2192)
at org.jooq.impl.SelectQueryImpl.lambda$toSQLReferenceLimitDefault$21(SelectQueryImpl.java:1856)
at org.jooq.impl.AbstractContext.toggle(AbstractContext.java:331)
at org.jooq.impl.AbstractContext.data(AbstractContext.java:342)
at org.jooq.impl.SelectQueryImpl.toSQLReferenceLimitDefault(SelectQueryImpl.java:1856)
at org.jooq.impl.SelectQueryImpl.accept0(SelectQueryImpl.java:1763)
at org.jooq.impl.SelectQueryImpl.accept(SelectQueryImpl.java:1426)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.AbstractQuery.getSQL0(AbstractQuery.java:479)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:287)
at org.jooq.impl.AbstractResultQuery.fetchLazy(AbstractResultQuery.java:295)
at org.jooq.impl.ResultQueryTrait.lambda$fetchStream$1(ResultQueryTrait.java:326)
```
### Versions
- jOOQ: 3.15 (extended trial version)
- Java: 11
- Database (include vendor): Amazon Redshift
- OS: Ubuntu 18.04
Let me know if any other information is required
|
1.0
|
NoClassDefFoundError thrown when doing a date add functionality in REDSHIFT - ### Expected behavior
It should add the number/column to the date
### Actual behavior
Throwing java.lang.NoClassDefFoundError: com/amazon/redshift/util/RedshiftInterval
Stack trace:-
### Steps to reproduce the problem
```java
String sqlQuery = "select cast(\"users\".\"created\" as date) \"Created\", \"users\".\"frequency\" \"Frequency\", " +
"coalesce((\"users\".\"created\" + (\"users\".\"frequency\" - 1E0) * interval '1 day'), '1970-01-01 00:00:00') \"Date Add\" from \"users\" \"users\"";
ResultQuery jooqQuery = DSL.using(dslContext.configuration()).parser().parseResultQuery(sqlQuery);
```
**Result**:
```
Caused by: java.lang.NoClassDefFoundError: com/amazon/redshift/util/RedshiftInterval
at org.jooq.util.postgres.PostgresUtils.toRedshiftInterval(PostgresUtils.java:270)
at org.jooq.impl.DefaultBinding$DefaultYearToSecondBinding.sqlInline0(DefaultBinding.java:4658)
at org.jooq.impl.DefaultBinding$DefaultYearToSecondBinding.sqlInline0(DefaultBinding.java:4643)
at org.jooq.impl.DefaultBinding$AbstractBinding.sql(DefaultBinding.java:871)
at org.jooq.impl.DefaultBinding$AbstractBinding.sqlCast(DefaultBinding.java:836)
at org.jooq.impl.DefaultBinding$AbstractBinding.sqlCast(DefaultBinding.java:806)
at org.jooq.impl.DefaultBinding$AbstractBinding.sql(DefaultBinding.java:859)
at org.jooq.impl.Val.accept(Val.java:174)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.Expression$DefaultExpression.accept1(Expression.java:1029)
at org.jooq.impl.Expression$DefaultExpression.accept0(Expression.java:1016)
at org.jooq.impl.Expression$DefaultExpression.accept(Expression.java:999)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.Expression.accept0(Expression.java:298)
at org.jooq.impl.AbstractTransformable.accept(AbstractTransformable.java:70)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.Expression$DefaultExpression.accept1(Expression.java:1029)
at org.jooq.impl.Expression$DefaultExpression.accept0(Expression.java:1016)
at org.jooq.impl.Expression$DefaultExpression.accept(Expression.java:999)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.Expression.accept0(Expression.java:298)
at org.jooq.impl.AbstractTransformable.accept(AbstractTransformable.java:70)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.QueryPartCollectionView.acceptElement(QueryPartCollectionView.java:221)
at org.jooq.impl.QueryPartCollectionView.accept(QueryPartCollectionView.java:199)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.Function.accept(Function.java:74)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.Coalesce.accept(Coalesce.java:78)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:245)
at org.jooq.impl.Alias.toSQLWrapped(Alias.java:360)
at org.jooq.impl.Alias.acceptDeclareAliasStandard(Alias.java:284)
at org.jooq.impl.Alias.accept(Alias.java:175)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.FieldAlias.accept(FieldAlias.java:61)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.QueryPartCollectionView.acceptElement(QueryPartCollectionView.java:221)
at org.jooq.impl.QueryPartCollectionView.accept(QueryPartCollectionView.java:199)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.SelectQueryImpl.toSQLReference0(SelectQueryImpl.java:2192)
at org.jooq.impl.SelectQueryImpl.lambda$toSQLReferenceLimitDefault$21(SelectQueryImpl.java:1856)
at org.jooq.impl.AbstractContext.toggle(AbstractContext.java:331)
at org.jooq.impl.AbstractContext.data(AbstractContext.java:342)
at org.jooq.impl.SelectQueryImpl.toSQLReferenceLimitDefault(SelectQueryImpl.java:1856)
at org.jooq.impl.SelectQueryImpl.accept0(SelectQueryImpl.java:1763)
at org.jooq.impl.SelectQueryImpl.accept(SelectQueryImpl.java:1426)
at org.jooq.impl.DefaultRenderContext.visit0(DefaultRenderContext.java:711)
at org.jooq.impl.AbstractContext.visit(AbstractContext.java:294)
at org.jooq.impl.AbstractQuery.getSQL0(AbstractQuery.java:479)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:287)
at org.jooq.impl.AbstractResultQuery.fetchLazy(AbstractResultQuery.java:295)
at org.jooq.impl.ResultQueryTrait.lambda$fetchStream$1(ResultQueryTrait.java:326)
```
### Versions
- jOOQ: 3.15 (extended trial version)
- Java: 11
- Database (include vendor): Amazon Redshift
- OS: Ubuntu 18.04
Let me know if any other information is required
|
defect
|
noclassdeffounderror thrown when doing a date add functionality in redshift expected behavior it should add the number column to the date actual behavior throwing java lang noclassdeffounderror com amazon redshift util redshiftinterval stack trace steps to reproduce the problem java string sqlquery select cast users created as date created users frequency frequency coalesce users created users frequency interval day date add from users users resultquery jooqquery dsl using dslcontext configuration parser parseresultquery sqlquery result caused by java lang noclassdeffounderror com amazon redshift util redshiftinterval at org jooq util postgres postgresutils toredshiftinterval postgresutils java at org jooq impl defaultbinding defaultyeartosecondbinding defaultbinding java at org jooq impl defaultbinding defaultyeartosecondbinding defaultbinding java at org jooq impl defaultbinding abstractbinding sql defaultbinding java at org jooq impl defaultbinding abstractbinding sqlcast defaultbinding java at org jooq impl defaultbinding abstractbinding sqlcast defaultbinding java at org jooq impl defaultbinding abstractbinding sql defaultbinding java at org jooq impl val accept val java at org jooq impl defaultrendercontext defaultrendercontext java at org jooq impl abstractcontext visit abstractcontext java at org jooq impl expression defaultexpression expression java at org jooq impl expression defaultexpression expression java at org jooq impl expression defaultexpression accept expression java at org jooq impl defaultrendercontext defaultrendercontext java at org jooq impl abstractcontext visit abstractcontext java at org jooq impl expression expression java at org jooq impl abstracttransformable accept abstracttransformable java at org jooq impl defaultrendercontext defaultrendercontext java at org jooq impl abstractcontext visit abstractcontext java at org jooq impl expression defaultexpression expression java at org jooq impl expression defaultexpression expression 
java at org jooq impl expression defaultexpression accept expression java at org jooq impl defaultrendercontext defaultrendercontext java at org jooq impl abstractcontext visit abstractcontext java at org jooq impl expression expression java at org jooq impl abstracttransformable accept abstracttransformable java at org jooq impl defaultrendercontext defaultrendercontext java at org jooq impl abstractcontext visit abstractcontext java at org jooq impl querypartcollectionview acceptelement querypartcollectionview java at org jooq impl querypartcollectionview accept querypartcollectionview java at org jooq impl defaultrendercontext defaultrendercontext java at org jooq impl abstractcontext visit abstractcontext java at org jooq impl function accept function java at org jooq impl defaultrendercontext defaultrendercontext java at org jooq impl abstractcontext visit abstractcontext java at org jooq impl coalesce accept coalesce java at org jooq impl defaultrendercontext defaultrendercontext java at org jooq impl abstractcontext visit abstractcontext java at org jooq impl alias tosqlwrapped alias java at org jooq impl alias acceptdeclarealiasstandard alias java at org jooq impl alias accept alias java at org jooq impl defaultrendercontext defaultrendercontext java at org jooq impl abstractcontext visit abstractcontext java at org jooq impl fieldalias accept fieldalias java at org jooq impl defaultrendercontext defaultrendercontext java at org jooq impl abstractcontext visit abstractcontext java at org jooq impl querypartcollectionview acceptelement querypartcollectionview java at org jooq impl querypartcollectionview accept querypartcollectionview java at org jooq impl defaultrendercontext defaultrendercontext java at org jooq impl abstractcontext visit abstractcontext java at org jooq impl selectqueryimpl selectqueryimpl java at org jooq impl selectqueryimpl lambda tosqlreferencelimitdefault selectqueryimpl java at org jooq impl abstractcontext toggle abstractcontext 
java at org jooq impl abstractcontext data abstractcontext java at org jooq impl selectqueryimpl tosqlreferencelimitdefault selectqueryimpl java at org jooq impl selectqueryimpl selectqueryimpl java at org jooq impl selectqueryimpl accept selectqueryimpl java at org jooq impl defaultrendercontext defaultrendercontext java at org jooq impl abstractcontext visit abstractcontext java at org jooq impl abstractquery abstractquery java at org jooq impl abstractquery execute abstractquery java at org jooq impl abstractresultquery fetchlazy abstractresultquery java at org jooq impl resultquerytrait lambda fetchstream resultquerytrait java versions jooq extended trial version java database include vendor amazon redshift os ubuntu let me know if any other information is required
| 1
|
56,835
| 15,387,654,258
|
IssuesEvent
|
2021-03-03 09:48:20
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
closed
|
Galleria thumbnail animation jumps
|
defect
|
When an image is selected from thumbnail, animation is not smooth and causes a jump.
|
1.0
|
Galleria thumbnail animation jumps - When an image is selected from thumbnail, animation is not smooth and causes a jump.
|
defect
|
galleria thumbnail animation jumps when an image is selected from thumbnail animation is not smooth and causes a jump
| 1
|
81,768
| 31,561,991,274
|
IssuesEvent
|
2023-09-03 11:18:50
|
spockframework/spock
|
https://api.github.com/repos/spockframework/spock
|
closed
|
Internal junit dependency causes issues in OSGI container
|
Module-Core Type-Defect
|
Originally reported on Google Code with ID 188
```
We are using spock in OSGI for integration tests, but there is apparently some dependencies
on internal junit classes. Namely:
import org.junit.internal.runners.model.MultipleFailureException
http://www.google.com/codesearch#kd_PCnP8UZc/trunk/spock-core/src/main/groovy/spock/util/EmbeddedSpecCompiler.groovy&q=internal%20package:http://spock%5C.googlecode%5C.com
We have sorta worked around the issue by embedding junit within the spock bundle and
reexporting it but this creates other problems where if another bundle exports junit
first then its used and internal class cannot be accessed.
As an additional enhancement it would be great if spock jar included the osgi manifest
so that it would not need to be repackaged.
Here is the maven pom file used to wrap spock jar in an osgi bundle.
https://gist.github.com/1044946
```
Reported by `kurtharriger` on 2011-06-24 15:02:18
|
1.0
|
Internal junit dependency causes issues in OSGI container - Originally reported on Google Code with ID 188
```
We are using spock in OSGI for integration tests, but there is apparently some dependencies
on internal junit classes. Namely:
import org.junit.internal.runners.model.MultipleFailureException
http://www.google.com/codesearch#kd_PCnP8UZc/trunk/spock-core/src/main/groovy/spock/util/EmbeddedSpecCompiler.groovy&q=internal%20package:http://spock%5C.googlecode%5C.com
We have sorta worked around the issue by embedding junit within the spock bundle and
reexporting it but this creates other problems where if another bundle exports junit
first then its used and internal class cannot be accessed.
As an additional enhancement it would be great if spock jar included the osgi manifest
so that it would not need to be repackaged.
Here is the maven pom file used to wrap spock jar in an osgi bundle.
https://gist.github.com/1044946
```
Reported by `kurtharriger` on 2011-06-24 15:02:18
|
defect
|
internal junit dependency causes issues in osgi container originally reported on google code with id we are using spock in osgi for integration tests but there is apparently some dependencies on internal junit classes namely import org junit internal runners model multiplefailureexception we have sorta worked around the issue by embedding junit within the spock bundle and reexporting it but this creates other problems where if another bundle exports junit first then its used and internal class cannot be accessed as an additional enhancement it would be great if spock jar included the osgi manifest so that it would not need to be repackaged here is the maven pom file used to wrap spock jar in an osgi bundle reported by kurtharriger on
| 1
|
42,403
| 11,016,606,872
|
IssuesEvent
|
2019-12-05 06:01:10
|
pymc-devs/pymc3
|
https://api.github.com/repos/pymc-devs/pymc3
|
closed
|
Compilation error for large number of categorical features
|
defects
|
I'm not sure if using a large number of categorical variables is an abuse of pymc, or if I'm just doing it wrong. I've reproduced the error with synthetic data with 500 possible string values for feature `X` (though the error appears with fewer values, like 250). I'm using the `glm` module with the following model, which I can get working in statsmodels: `glm('Y ~ C(X)'`:
```
import itertools as it
import string
def f(st):
return ord(st[0]) + ord(st[1]) + np.random.randn()
wds = map(''.join, it.islice(it.permutations(string.ascii_uppercase, 2), 500))
wd_dat = np.random.choice(wds, 5000)
y = map(f, wd_dat)
data = pd.DataFrame(dict(Y=y, X=wd_dat))
data[:4]
Out[49]:
X Y
0 JA 139.636050
1 GU 156.806869
2 FZ 161.310029
3 HU 157.979341
```
When I try to run the following model
```
with mc.Model() as model:
mc.glm.glm('Y ~ C(X)', data)
trace = mc.sample(2000, mc.NUTS(), progressbar=True)
```
I get `Exception: ('Compilation failed (return status=1): /Users/me/.theano/compiledir_Darwin-13.3.0-x86_64-i386-64bit-i386-2.7.8-64/tmp_cmBr5/mod.cpp:28159:32: fatal error: bracket nesting level exceeded maximum of 256.`
[full trace here](https://gist.github.com/d10genes/b970338c65aa60a9341c).
Is this expected?
|
1.0
|
Compilation error for large number of categorical features - I'm not sure if using a large number of categorical variables is an abuse of pymc, or if I'm just doing it wrong. I've reproduced the error with synthetic data with 500 possible string values for feature `X` (though the error appears with fewer values, like 250). I'm using the `glm` module with the following model, which I can get working in statsmodels: `glm('Y ~ C(X)'`:
```
import itertools as it
import string
def f(st):
return ord(st[0]) + ord(st[1]) + np.random.randn()
wds = map(''.join, it.islice(it.permutations(string.ascii_uppercase, 2), 500))
wd_dat = np.random.choice(wds, 5000)
y = map(f, wd_dat)
data = pd.DataFrame(dict(Y=y, X=wd_dat))
data[:4]
Out[49]:
X Y
0 JA 139.636050
1 GU 156.806869
2 FZ 161.310029
3 HU 157.979341
```
When I try to run the following model
```
with mc.Model() as model:
mc.glm.glm('Y ~ C(X)', data)
trace = mc.sample(2000, mc.NUTS(), progressbar=True)
```
I get `Exception: ('Compilation failed (return status=1): /Users/me/.theano/compiledir_Darwin-13.3.0-x86_64-i386-64bit-i386-2.7.8-64/tmp_cmBr5/mod.cpp:28159:32: fatal error: bracket nesting level exceeded maximum of 256.`
[full trace here](https://gist.github.com/d10genes/b970338c65aa60a9341c).
Is this expected?
|
defect
|
compilation error for large number of categorical features i m not sure if using a large number of categorical variables is an abuse of pymc or if i m just doing it wrong i ve reproduced the error with synthetic data with possible string values for feature x though the error appears with fewer values like i m using the glm module with the following model which i can get working in statsmodels glm y c x import itertools as it import string def f st return ord st ord st np random randn wds map join it islice it permutations string ascii uppercase wd dat np random choice wds y map f wd dat data pd dataframe dict y y x wd dat data out x y ja gu fz hu when i try to run the following model with mc model as model mc glm glm y c x data trace mc sample mc nuts progressbar true i get exception compilation failed return status users me theano compiledir darwin tmp mod cpp fatal error bracket nesting level exceeded maximum of is this expected
| 1
|
1,340
| 2,603,837,936
|
IssuesEvent
|
2015-02-24 18:13:49
|
chrsmith/nishazi6
|
https://api.github.com/repos/chrsmith/nishazi6
|
opened
|
沈阳龟头肉牙
|
auto-migrated Priority-Medium Type-Defect
|
```
沈阳龟头肉牙〓沈陽軍區政治部醫院性病〓TEL:024-31023308〓��
�立于1946年,68年專注于性傳播疾病的研究和治療。位于沈陽�
��沈河區二緯路32號。是一所與新中國同建立共輝煌的歷史悠�
��、設備精良、技術權威、專家云集,是預防、保健、醫療、
科研康復為一體的綜合性醫院。是國家首批公立甲等部隊醫��
�、全國首批醫療規范定點單位,是第四軍醫大學、東南大學�
��知名高等院校的教學醫院。曾被中國人民解放軍空軍后勤部
衛生部評為衛生工作先進單位,先后兩次榮立集體二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 6:59
|
1.0
|
沈阳龟头肉牙 - ```
沈阳龟头肉牙〓沈陽軍區政治部醫院性病〓TEL:024-31023308〓��
�立于1946年,68年專注于性傳播疾病的研究和治療。位于沈陽�
��沈河區二緯路32號。是一所與新中國同建立共輝煌的歷史悠�
��、設備精良、技術權威、專家云集,是預防、保健、醫療、
科研康復為一體的綜合性醫院。是國家首批公立甲等部隊醫��
�、全國首批醫療規范定點單位,是第四軍醫大學、東南大學�
��知名高等院校的教學醫院。曾被中國人民解放軍空軍后勤部
衛生部評為衛生工作先進單位,先后兩次榮立集體二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 6:59
|
defect
|
沈阳龟头肉牙 沈阳龟头肉牙〓沈陽軍區政治部醫院性病〓tel: 〓�� � , 。位于沈陽� �� 。是一所與新中國同建立共輝煌的歷史悠� ��、設備精良、技術權威、專家云集,是預防、保健、醫療、 科研康復為一體的綜合性醫院。是國家首批公立甲等部隊醫�� �、全國首批醫療規范定點單位,是第四軍醫大學、東南大學� ��知名高等院校的教學醫院。曾被中國人民解放軍空軍后勤部 衛生部評為衛生工作先進單位,先后兩次榮立集體二等功。 original issue reported on code google com by gmail com on jun at
| 1
|
47,205
| 13,056,054,020
|
IssuesEvent
|
2020-07-30 03:30:55
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
BUILD_* doesn't toggle build of pybindings (Trac #130)
|
Migrated from Trac cmake defect
|
then you get a failure at cmake time
Migrated from https://code.icecube.wisc.edu/ticket/130
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:56",
"description": "then you get a failure at cmake time",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1416713876900096",
"component": "cmake",
"summary": "BUILD_* doesn't toggle build of pybindings",
"priority": "normal",
"keywords": "",
"time": "2008-09-07T18:34:09",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
|
1.0
|
BUILD_* doesn't toggle build of pybindings (Trac #130) - then you get a failure at cmake time
Migrated from https://code.icecube.wisc.edu/ticket/130
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:56",
"description": "then you get a failure at cmake time",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1416713876900096",
"component": "cmake",
"summary": "BUILD_* doesn't toggle build of pybindings",
"priority": "normal",
"keywords": "",
"time": "2008-09-07T18:34:09",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
|
defect
|
build doesn t toggle build of pybindings trac then you get a failure at cmake time migrated from json status closed changetime description then you get a failure at cmake time reporter troy cc resolution fixed ts component cmake summary build doesn t toggle build of pybindings priority normal keywords time milestone owner troy type defect
| 1
|
15,481
| 2,856,531,839
|
IssuesEvent
|
2015-06-02 15:21:50
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
closed
|
Make combine_csv.py work if there isn't a variable called 'id'
|
C: MOOSE Scripts P: normal T: defect
|
The combine_csv.py script looks for a variable called "id" (used for x values), as well as the variable that the user specifies, which is used for the y values in the plot. Modify it so that the user can specify an optional name for the variable for x values and a name for the variable for the y values.
|
1.0
|
Make combine_csv.py work if there isn't a variable called 'id' - The combine_csv.py script looks for a variable called "id" (used for x values), as well as the variable that the user specifies, which is used for the y values in the plot. Modify it so that the user can specify an optional name for the variable for x values and a name for the variable for the y values.
|
defect
|
make combine csv py work if there isn t a variable called id the combine csv py script looks for a variable called id used for x values as well as the variable that the user specifies which is used for the y values in the plot modify it so that the user can specify an optional name for the variable for x values and a name for the variable for the y values
| 1
|
146,932
| 23,142,173,873
|
IssuesEvent
|
2022-07-28 19:38:59
|
department-of-veterans-affairs/vets-design-system-documentation
|
https://api.github.com/repos/department-of-veterans-affairs/vets-design-system-documentation
|
closed
|
Add no wrap definition
|
component-update vsp-design-system-team va-telephone dst-engineering
|
# Feature Request
- [x] I’ve searched for any related issues and avoided creating a duplicate issue.
## Is this feature request relating to an existing component or utility? Please describe.
- Component/utility name: Telephone
Having a telephone number wrap to the next line makes it harder to read. Making it not wrap would improve readability.
## Do you have a suggestion for a new component or utility?
I am suggesting a new utility definition is added that allows a block of content to not be wrapped
```css
.no-wrap {
white-space: nowrap;
}
```
Within the foundation style, there are multiple instances of the [`nowrap` value being used](https://github.com/department-of-veterans-affairs/veteran-facing-services-tools/search?q=white-space%3A+nowrap&unscoped_q=white-space%3A+nowrap).
- Pagination links
- Icon links
- Modal buttons
- Exit icon (mixin)
- Login container
Within the `vets-website` repo, this property has been added to at least a dozen custom stylesheets. Not all are related to telephone numbers.
|
1.0
|
Add no wrap definition - # Feature Request
- [x] I’ve searched for any related issues and avoided creating a duplicate issue.
## Is this feature request relating to an existing component or utility? Please describe.
- Component/utility name: Telephone
Having a telephone number wrap to the next line makes it harder to read. Making it not wrap would improve readability.
## Do you have a suggestion for a new component or utility?
I am suggesting a new utility definition is added that allows a block of content to not be wrapped
```css
.no-wrap {
white-space: nowrap;
}
```
Within the foundation style, there are multiple instances of the [`nowrap` value being used](https://github.com/department-of-veterans-affairs/veteran-facing-services-tools/search?q=white-space%3A+nowrap&unscoped_q=white-space%3A+nowrap).
- Pagination links
- Icon links
- Modal buttons
- Exit icon (mixin)
- Login container
Within the `vets-website` repo, this property has been added to at least a dozen custom stylesheets. Not all are related to telephone numbers.
|
non_defect
|
add no wrap definition feature request i’ve searched for any related issues and avoided creating a duplicate issue is this feature request relating to an existing component or utility please describe component utility name telephone having a telephone number wrap to the next line makes it harder to read making it not wrap would improve readability do you have a suggestion for a new component or utility i am suggesting a new utility definition is added that allows a block of content to not be wrapped css no wrap white space nowrap within the foundation style there are multiple instances of the pagination links icon links modal buttons exit icon mixin login container within the vets website repo this property has been added to at least a dozen custom stylesheets not all are related to telephone numbers
| 0
|
19,612
| 3,228,437,920
|
IssuesEvent
|
2015-10-12 02:06:09
|
essandess/etv-comskip
|
https://api.github.com/repos/essandess/etv-comskip
|
closed
|
Commercial Timing
|
auto-migrated Priority-Medium Type-Defect
|
```
I have EyeTV set up to record (Comskip compacts auto) and export to iTunes automatically, and it all works.
The problem I have is that ETVComskip doesn't always do a great job of
marking the actual start and finish of the commercials. It messes up the show
when an action scene is cut and a commercial plays even when it's not
supposed to.
Is there a way to adjust the commercial threshold sensitivity?
Thanks
Larry
```
Original issue reported on code.google.com by `larrymci...@gmail.com` on 10 Oct 2010 at 2:48
|
1.0
|
Commercial Timing - ```
I have EyeTV set up to record (Comskip compacts auto) and export to iTunes automatically, and it all works.
The problem I have is that ETVComskip doesn't always do a great job of
marking the actual start and finish of the commercials. It messes up the show
when an action scene is cut and a commercial plays even when it's not
supposed to.
Is there a way to adjust the commercial threshold sensitivity?
Thanks
Larry
```
Original issue reported on code.google.com by `larrymci...@gmail.com` on 10 Oct 2010 at 2:48
|
defect
|
commercial timing i have eyetv setup to record comskip compacts auto and export to itunes automatically and it all work the problem that i have is that etvcomskip doesn t always do a great job of marking the actual start and finish of the commercial it messes up the show when the action scene is cut and the commercial plays even when they re not supposed to play is there a way to adjust the commercial threshold sensitivity thanks larry original issue reported on code google com by larrymci gmail com on oct at
| 1
|
39,906
| 9,742,372,827
|
IssuesEvent
|
2019-06-02 16:34:26
|
techo/voluntariado-eventual
|
https://api.github.com/repos/techo/voluntariado-eventual
|
closed
|
FE: The email template for the "activity reminder"
|
Defecto
|
**Describe the bug**
The email template for the activity reminder (one day before the activity starts) is missing
**To reproduce**
Steps to reproduce the behavior:
-. TESTING (https://ar.sandbox.actividades.techo.org/)
-. Enrolled activity
-. Since it depends on a cron job that is not configured (by Ale Abraham)
**Expected behavior**
Since it depends on a cron job
**Screenshots**
If applicable, add screenshots to explain the problem
**If you are on a computer (please complete the following information):**
- Browser [e.g.: chrome, explorer, safari]
**Smartphone (complete the following information):**
- Device: [e.g.: Huawei GW, iPhone6, Samsung J2]
- Operating system: [e.g.: Android4, iOS8.1]
- Browser [e.g.: stock mobile browser, Chrome, Safari]
**Additional context**
Anything else that helps explain what happened.
|
1.0
|
FE: The email template for the "activity reminder" - **Describe the bug**
The email template for the activity reminder (one day before the activity starts) is missing
**To reproduce**
Steps to reproduce the behavior:
-. TESTING (https://ar.sandbox.actividades.techo.org/)
-. Enrolled activity
-. Since it depends on a cron job that is not configured (by Ale Abraham)
**Expected behavior**
Since it depends on a cron job
**Screenshots**
If applicable, add screenshots to explain the problem
**If you are on a computer (please complete the following information):**
- Browser [e.g.: chrome, explorer, safari]
**Smartphone (complete the following information):**
- Device: [e.g.: Huawei GW, iPhone6, Samsung J2]
- Operating system: [e.g.: Android4, iOS8.1]
- Browser [e.g.: stock mobile browser, Chrome, Safari]
**Additional context**
Anything else that helps explain what happened.
|
defect
|
fe the email template for the activity reminder describe the bug the email template for the activity reminder one day before the activity starts is missing to reproduce steps to reproduce the behavior testing enrolled activity since it depends on a cron job that is not configured by ale abraham expected behavior since it depends on a cron job screenshots if applicable add screenshots to explain the problem if you are on a computer please complete the following information browser smartphone complete the following information device operating system browser additional context anything else that helps explain what happened
| 1
|
24,971
| 4,159,203,852
|
IssuesEvent
|
2016-06-17 08:03:36
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
Creating index with wrong key-attribute definition does not throw an error
|
Team: Core Type: Defect
|
When creating indexes on keys you can define wrong specs without an error being thrown.
Correct:
```
map.addIndex("__key#login");
```
Wrong:
```
map.addIndex("__key.login");
```
However, the latter neither throws an exception nor is automatically corrected. The index is created but stays empty and will never be used. This results in extremely poor query performance with no visible cause (a full table scan is executed).
|
1.0
|
Creating index with wrong key-attribute definition does not throw an error - When creating indexes on keys you can define wrong specs without an error being thrown.
Correct:
```
map.addIndex("__key#login");
```
Wrong:
```
map.addIndex("__key.login");
```
However, the latter neither throws an exception nor is automatically corrected. The index is created but stays empty and will never be used. This results in extremely poor query performance with no visible cause (a full table scan is executed).
|
defect
|
creating index with wrong key attribute definition does not throw an error when creating indexes on keys you can define wrong specs without an error being thrown correct map addindex key login wrong map addindex key login anyhow the latter does neither throw an exception nor is it automatically corrected the index is created but empty and will never be used this results in extremely poor performance of queries without any visible reason as a full table scan is executed
| 1
|
222,126
| 7,428,391,827
|
IssuesEvent
|
2018-03-24 01:02:24
|
jsonwebtoken/jsonwebtoken.github.io
|
https://api.github.com/repos/jsonwebtoken/jsonwebtoken.github.io
|
closed
|
Text input does not work on iOS/Android
|
bug high-priority stage-3
|
There are a few issues I found:
When clicking on the textbox and holding the delete key only one character is deleted and the delete key is released. The expected iOS behavior is that you can click and hold to continually delete.
You cannot click and hold to select the text, thus making it difficult to clear the textbox.
You cannot click and hold to copy or paste. Paste is most problematic because it makes the site impossible to use on iOS unless you memorize and type the key manually.
Here is a video of the interactions. At then end you will see me trying to click, click and hold, etc.
https://www.dropbox.com/s/26bw8lkp380k8lv/2015-02-06_12-24-24.mp4?dl=0
My guess is we are just intercepting too many events incorrectly. One nice thing would be on mobile browsers that once you click on the text box it automatically clears then the user can easily paste in the box.
|
1.0
|
Text input does not work on iOS/Android - There are a few issues I found:
When clicking on the textbox and holding the delete key only one character is deleted and the delete key is released. The expected iOS behavior is that you can click and hold to continually delete.
You cannot click and hold to select the text, thus making it difficult to clear the textbox.
You cannot click and hold to copy or paste. Paste is most problematic because it makes the site impossible to use on iOS unless you memorize and type the key manually.
Here is a video of the interactions. At then end you will see me trying to click, click and hold, etc.
https://www.dropbox.com/s/26bw8lkp380k8lv/2015-02-06_12-24-24.mp4?dl=0
My guess is we are just intercepting too many events incorrectly. One nice thing would be on mobile browsers that once you click on the text box it automatically clears then the user can easily paste in the box.
|
non_defect
|
text input does not work on ios android there are a few issues i found when clicking on the textbox and holding the delete key only one character is deleted and the delete key is released the expected ios behavior is that you can click and hold to continually delete you cannot click and hold to select the text thus making it difficult to clear the textbook you cannot click and hold to copy or paste paste is most problematic because it makes the site impossible to use on ios unless you memorize and type the key manually here is a video of the interactions at then end you will see me trying to click click and hold etc my guess is we are just intercepting too many events incorrectly one nice thing would be on mobile browsers that once you click on the text box it automatically clears then the user can easily paste in the box
| 0
|
3,037
| 2,535,974,985
|
IssuesEvent
|
2015-01-26 09:47:03
|
nlbdev/nordic-epub3-dtbook-migrator
|
https://api.github.com/repos/nlbdev/nordic-epub3-dtbook-migrator
|
opened
|
Wrong list depth when notes contains lists
|
2 - High priority bug epub3-to-dtbook
|
EPUB footnotes list structure contains child lists. So far so good. But when this converts to dtbook, the epub:type list items convert to note elements where some have child list elements. On those lists the depth values seem to be preserved from the previous list hierarchy. This generates of course the error “The depth attribute on list element does not contain the list wrapping level.”
|
1.0
|
Wrong list depth when notes contains lists - EPUB footnotes list structure contains child lists. So far so good. But when this converts to dtbook, the epub:type list items convert to note elements where some have child list elements. On those lists the depth values seem to be preserved from the previous list hierarchy. This generates of course the error “The depth attribute on list element does not contain the list wrapping level.”
|
non_defect
|
wrong list depth when notes contains lists epub footnotes list structure contains child lists so far so good but when this converts to dtbook the epub type list items convert to note elements where some have child list elements on those lists the depth values seem to be preserved from the previous list hierarchy this generates of course the error “the depth attribute on list element does not contain the list wrapping level ”
| 0
|
77,895
| 27,219,490,693
|
IssuesEvent
|
2023-02-21 03:07:32
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
opened
|
p:panelGrid is not responsive when there are 5 columns
|
:lady_beetle: defect :bangbang: needs-triage
|
### Describe the bug
With 5 columns the grid is not responsive; other values are ok. When the columns attribute is 5, the output HTML is:
<div class="ui-panelgrid-cell null"><span class="font-bold" style="color:rgb(255, 99, 132);">1</span></div>
class is null, expected value is ui-md-3
### Reproducer
<p:panelGrid columns="5" styleClass="showcase-text-align-center" layout="grid">
<h:outputText value="1" class="font-bold"/>
<h:outputText value="2" class="font-bold"/>
<h:outputText value="3" class="font-bold"/>
<h:outputText value="4" class="font-bold"/>
<h:outputText value="5" class="font-bold"/>
<h:outputText value="6" class="font-bold"/>
<h:outputText value="7" class="font-bold"/>
<h:outputText value="8" class="font-bold"/>
<h:outputText value="9" class="font-bold"/>
<h:outputText value="10" class="font-bold"/>
</p:panelGrid>
### Expected behavior
_No response_
### PrimeFaces edition
None
### PrimeFaces version
11.0.0, 12.0.0
### Theme
_No response_
### JSF implementation
Mojarra
### JSF version
2.2.20
### Java version
8
### Browser(s)
_No response_
|
1.0
|
p:panelGrid is not responsive when there are 5 columns - ### Describe the bug
With 5 columns the grid is not responsive; other values are ok. When the columns attribute is 5, the output HTML is:
<div class="ui-panelgrid-cell null"><span class="font-bold" style="color:rgb(255, 99, 132);">1</span></div>
class is null, expected value is ui-md-3
### Reproducer
<p:panelGrid columns="5" styleClass="showcase-text-align-center" layout="grid">
<h:outputText value="1" class="font-bold"/>
<h:outputText value="2" class="font-bold"/>
<h:outputText value="3" class="font-bold"/>
<h:outputText value="4" class="font-bold"/>
<h:outputText value="5" class="font-bold"/>
<h:outputText value="6" class="font-bold"/>
<h:outputText value="7" class="font-bold"/>
<h:outputText value="8" class="font-bold"/>
<h:outputText value="9" class="font-bold"/>
<h:outputText value="10" class="font-bold"/>
</p:panelGrid>
### Expected behavior
_No response_
### PrimeFaces edition
None
### PrimeFaces version
11.0.0, 12.0.0
### Theme
_No response_
### JSF implementation
Mojarra
### JSF version
2.2.20
### Java version
8
### Browser(s)
_No response_
|
defect
|
p panelgrid does not responsive when there are columns describe the bug columns does not responsive other values is ok when columns attribute is output html is class is null expected value is ui md reproducer expected behavior no response primefaces edition none primefaces version theme no response jsf implementation mojarra jsf version java version browser s no response
| 1
|
10,988
| 4,119,787,018
|
IssuesEvent
|
2016-06-08 15:50:49
|
pywbem/pywbem
|
https://api.github.com/repos/pywbem/pywbem
|
opened
|
Python 3 cim_operations.py, returns resource warning excessively
|
area: code
|
When running against the server with Python 3, the socket generates enormous numbers of resource warnings. We ignored this before, but now that I have tests in run_cim_operations.py that run hundreds of operations against the server it is an enormous slowdown (think 3-to-1 execution time).
I don't think we can depend on the client to clean this up for us.
While most of the complaints about this on the net involve requests, it is clear that the url libraries are involved and that this is not an error but an actual design decision involving their socket pools. If I clear them out of run_operations.py, everyone involved in pywbem will have to sort out how to do the same thing, I would guess, certainly any command-line client.
By the way, I do not have a clean answer yet but will test the warning code on a temporary basis.
|
1.0
|
Python 3 cim_operations.py, returns resource warning excessively - When running against the server with python 3, the socket generates enormous numbers of resource warnings. We ignored this before but now that I have tests in run_cim_operations.py that run hundreds of operations against the server it is an enormous slowdonw. (Think 3 to 1 time execution).
I don't think we can depend on the client to clean this up for us.
While most of the bitches about this on the net involve requests, it is clear that the url libraries are involved and that is is not an error but an actual decision involving there socket pools. If I clear them out of run_operations.py, everyone involved is pywbem will have to sort out how to do the same thing I would guess, certainly any command line client.
By the way, I do not have a clean answer yet but will test the warining code on a temporarly basis.
|
non_defect
|
python cim operations py returns resource warning excessively when running against the server with python the socket generates enormous numbers of resource warnings we ignored this before but now that i have tests in run cim operations py that run hundreds of operations against the server it is an enormous slowdonw think to time execution i don t think we can depend on the client to clean this up for us while most of the bitches about this on the net involve requests it is clear that the url libraries are involved and that is is not an error but an actual decision involving there socket pools if i clear them out of run operations py everyone involved is pywbem will have to sort out how to do the same thing i would guess certainly any command line client by the way i do not have a clean answer yet but will test the warining code on a temporarly basis
| 0
|
39,357
| 9,414,700,494
|
IssuesEvent
|
2019-04-10 10:47:14
|
ascott18/TellMeWhen
|
https://api.github.com/repos/ascott18/TellMeWhen
|
opened
|
[Bug] does not display the cooldown of the second ability if Spell ID is entered instead of names
|
defect
|
**What version of TellMeWhen are you using?**
<!-- Found in-game at the top of TMW's configuration window. "The latest" is not a version. -->
8.6.0
**What steps will reproduce the problem?**
1. Create Spell Cooldown icon.
2. In the field "What to track" enter 2 Spell ID (e. g. 262161; 167105)
3.
<!-- Add more steps if needed -->
**What do you expect to happen? What happens instead?**
If you enter the names of abilities - the icon works
**Screenshots and Export Strings**
<!-- If your issue pertains to a specific icon or group, please post the relevant export string(s).
^1^T^SShowTimer^B ^SType^Scooldown ^SShowTimerText^B ^SName^S262161;~`167105 ^SShowTimerTextnoOCC^B ^SClockGCD^B ^SStates^T ^N2^T ^SAlpha^N0.5 ^t^t^SRangeCheck^B ^SEnabled^B ^t^N86006^S~`~| ^Sicon^^
To get an export string, open the icon editor, and click the button labeled "Import/Export/Backup". Select the "To String" option for the appropriate export type (icon, group, or profile), and then press CTRL+C to copy it to your clipboard.
Additionally, if applicable, add screenshots to help explain your problem. You can paste images directly into GitHub issues, or you can upload files as well. -->
**Additional Info**
<!-- Please add any additional information you think will be useful in reproducing and/or solving the issue. -->
|
1.0
|
[Bug] does not display the cooldown of the second ability if Spell ID is entered instead of names - **What version of TellMeWhen are you using?**
<!-- Found in-game at the top of TMW's configuration window. "The latest" is not a version. -->
8.6.0
**What steps will reproduce the problem?**
1. Create Spell Cooldown icon.
2. In the field "What to track" enter 2 Spell ID (e. g. 262161; 167105)
3.
<!-- Add more steps if needed -->
**What do you expect to happen? What happens instead?**
If you enter the names of abilities - the icon works
**Screenshots and Export Strings**
<!-- If your issue pertains to a specific icon or group, please post the relevant export string(s).
^1^T^SShowTimer^B ^SType^Scooldown ^SShowTimerText^B ^SName^S262161;~`167105 ^SShowTimerTextnoOCC^B ^SClockGCD^B ^SStates^T ^N2^T ^SAlpha^N0.5 ^t^t^SRangeCheck^B ^SEnabled^B ^t^N86006^S~`~| ^Sicon^^
To get an export string, open the icon editor, and click the button labeled "Import/Export/Backup". Select the "To String" option for the appropriate export type (icon, group, or profile), and then press CTRL+C to copy it to your clipboard.
Additionally, if applicable, add screenshots to help explain your problem. You can paste images directly into GitHub issues, or you can upload files as well. -->
**Additional Info**
<!-- Please add any additional information you think will be useful in reproducing and/or solving the issue. -->
|
defect
|
does not display the cooldown of the second ability if spell id is entered instead of names what version of tellmewhen are you using what steps will reproduce the problem create spell cooldown icon in the field what to track enter spell id e g what do you expect to happen what happens instead if you enter the names of abilities the icon works screenshots and export strings if your issue pertains to a specific icon or group please post the relevant export string s t sshowtimer b stype scooldown sshowtimertext b sname sshowtimertextnoocc b sclockgcd b sstates t t salpha t t srangecheck b senabled b t s sicon to get an export string open the icon editor and click the button labeled import export backup select the to string option for the appropriate export type icon group or profile and then press ctrl c to copy it to your clipboard additionally if applicable add screenshots to help explain your problem you can paste images directly into github issues or you can upload files as well additional info
| 1
|
7,386
| 6,882,414,871
|
IssuesEvent
|
2017-11-21 03:49:08
|
comp413-2017/RDFS
|
https://api.github.com/repos/comp413-2017/RDFS
|
closed
|
webRDFS: 404 handler
|
security
|
Implement a spec-compliant (if one exists) response for requests to routes for which no handler is defined. If the webHDFS spec doesn't have strong opinions, we should at least respond to the client (current behavior is that the server handler thread terminates without ever writing a response back to the client).
|
True
|
webRDFS: 404 handler - Implement a spec-compliant (if one exists) response for requests to routes for which no handler is defined. If the webHDFS spec doesn't have strong opinions, we should at least respond to the client (current behavior is that the server handler thread terminates without ever writing a response back to the client).
|
non_defect
|
webrdfs handler implement a spec compliant if one exists response for requests to routes for which no handler is defined if webhdfs spec doesn t have strong opinions we should at least respond to the client current behavior is that sever handler thread terminates without every writing a response back to the client
| 0
|
52,904
| 13,225,217,443
|
IssuesEvent
|
2020-08-17 20:43:43
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
geometry renderer specifies center of detector (Trac #558)
|
Migrated from Trac defect glshovel
|
for km3net... (0,0,0) isn't in the center of their detector.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/558">https://code.icecube.wisc.edu/projects/icecube/ticket/558</a>, reported by troyand owned by troy</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2009-07-20T22:14:29",
"_ts": "1248128069000000",
"description": "for km3net... (0,0,0) isn't in the center of their detector.",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"time": "2009-06-12T13:52:20",
"component": "glshovel",
"summary": "geometry renderer specifies center of detector",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
geometry renderer specifies center of detector (Trac #558) - for km3net... (0,0,0) isn't in the center of their detector.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/558">https://code.icecube.wisc.edu/projects/icecube/ticket/558</a>, reported by troyand owned by troy</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2009-07-20T22:14:29",
"_ts": "1248128069000000",
"description": "for km3net... (0,0,0) isn't in the center of their detector.",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"time": "2009-06-12T13:52:20",
"component": "glshovel",
"summary": "geometry renderer specifies center of detector",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
defect
|
geometry renderer specifies center of detector trac for isn t in the center of their detector migrated from json status closed changetime ts description for isn t in the center of their detector reporter troy cc resolution fixed time component glshovel summary geometry renderer specifies center of detector priority normal keywords milestone owner troy type defect
| 1
|
65,392
| 19,479,676,132
|
IssuesEvent
|
2021-12-25 01:29:49
|
rxgx/dotfiles
|
https://api.github.com/repos/rxgx/dotfiles
|
closed
|
Volta config has hard coded user dir
|
defect
|
### What's expected?
I should be able to use [Volta](https://volta.sh) on any system
### What's happening?
Volta is configured for only when "rxgx" is the user. For example:
```shell
export VOLTA_HOME="/Users/rxgx/.volta"
```
|
1.0
|
Volta config has hard coded user dir - ### What's expected?
I should be able to use [Volta](https://volta.sh) on any system
### What's happening?
Volta is configured for only when "rxgx" is the user. For example:
```shell
export VOLTA_HOME="/Users/rxgx/.volta"
```
|
defect
|
volta config has hard coded user dir what s expected i should be able to use on any system what s happening volta is configured for only when rxgx is the user for example shell export volta home users rxgx volta
| 1
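The Volta record above shows the fix the reporter is asking for: derive the path from the invoking user instead of hard-coding "rxgx". A minimal sketch, assuming Volta keeps its conventional `$HOME/.volta` layout with a `bin` subdirectory:

```shell
# Derive Volta's home from the current user's home directory
# instead of hard-coding /Users/rxgx/.volta.
export VOLTA_HOME="$HOME/.volta"

# Put Volta-managed tools on PATH (conventional layout: $VOLTA_HOME/bin).
export PATH="$VOLTA_HOME/bin:$PATH"
```

Because `$HOME` is resolved at shell startup, the same dotfile now works on any account or system.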
|
14,697
| 2,831,388,467
|
IssuesEvent
|
2015-05-24 15:53:22
|
nobodyguy/dslrdashboard
|
https://api.github.com/repos/nobodyguy/dslrdashboard
|
closed
|
Bracketing D5200
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. D5200
2. bracketing
3. 0 ev = 30 seconds with 3+ & 3- bracketed shots at 1 ev each
What is the expected output? What do you see instead? When bracketing photos
with a shutter speed over 30 seconds, it freezes the camera and the app. I am
using 30.33 on Samsung Galaxy Note 3 (kitkat) and Nikon D5200 in m mode.
```
Original issue reported on code.google.com by `cpd5...@gmail.com` on 5 May 2014 at 3:28
|
1.0
|
Bracketing D5200 - ```
What steps will reproduce the problem?
1. D5200
2. bracketing
3. 0 ev = 30 seconds with 3+ & 3- bracketed shots at 1 ev each
What is the expected output? What do you see instead? When bracketing photos
with a shutter speed over 30 seconds, it freezes the camera and the app. I am
using 30.33 on Samsung Galaxy Note 3 (kitkat) and Nikon D5200 in m mode.
```
Original issue reported on code.google.com by `cpd5...@gmail.com` on 5 May 2014 at 3:28
|
defect
|
bracketing what steps will reproduce the problem bracketing ev seconds with bracketed shots at each what is the expected output what do you see instead when bracketing photos with a shutter speed over seconds it freezes the camera and the app i am using on samsung galaxy note kitkat and nikon in m mode original issue reported on code google com by gmail com on may at
| 1
|
131,798
| 5,166,003,119
|
IssuesEvent
|
2017-01-17 15:12:53
|
georchestra/georchestra
|
https://api.github.com/repos/georchestra/georchestra
|
closed
|
CI should also build georchestra/atlas
|
priority-top
|
https://hub.docker.com/r/georchestra/georchestra_atlas/ is 404
It is currently commented out in docker-compose.yml (for memory usage reasons)
|
1.0
|
CI should also build georchestra/atlas - https://hub.docker.com/r/georchestra/georchestra_atlas/ is 404
It is currently commented out in docker-compose.yml (for memory usage reasons)
|
non_defect
|
ci should also build georchestra atlas is it is currently commented out in docker compose yml for memory usage reasons
| 0
|
272,828
| 29,795,095,785
|
IssuesEvent
|
2023-06-16 01:10:39
|
billmcchesney1/flowgate
|
https://api.github.com/repos/billmcchesney1/flowgate
|
closed
|
CVE-2022-37599 (High) detected in loader-utils-2.0.0.tgz - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2022-37599 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>loader-utils-2.0.0.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz</a></p>
<p>Path to dependency file: /ui/package.json</p>
<p>Path to vulnerable library: /ui/node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.1002.0.tgz (Root Library)
- :x: **loader-utils-2.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/flowgate/commit/dd01a1d4381c7a3b94ba25748c015a094c33088e">dd01a1d4381c7a3b94ba25748c015a094c33088e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A Regular expression denial of service (ReDoS) flaw was found in Function interpolateName in interpolateName.js in webpack loader-utils 2.0.0 via the resourcePath variable in interpolateName.js.
<p>Publish Date: 2022-10-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-37599>CVE-2022-37599</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-hhq3-ff78-jv3g">https://github.com/advisories/GHSA-hhq3-ff78-jv3g</a></p>
<p>Release Date: 2022-10-11</p>
<p>Fix Resolution: loader-utils - 1.4.2,2.0.4,3.2.1</p>
</p>
</details>
<p></p>
|
True
|
CVE-2022-37599 (High) detected in loader-utils-2.0.0.tgz - autoclosed - ## CVE-2022-37599 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>loader-utils-2.0.0.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz</a></p>
<p>Path to dependency file: /ui/package.json</p>
<p>Path to vulnerable library: /ui/node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.1002.0.tgz (Root Library)
- :x: **loader-utils-2.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/flowgate/commit/dd01a1d4381c7a3b94ba25748c015a094c33088e">dd01a1d4381c7a3b94ba25748c015a094c33088e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A Regular expression denial of service (ReDoS) flaw was found in Function interpolateName in interpolateName.js in webpack loader-utils 2.0.0 via the resourcePath variable in interpolateName.js.
<p>Publish Date: 2022-10-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-37599>CVE-2022-37599</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-hhq3-ff78-jv3g">https://github.com/advisories/GHSA-hhq3-ff78-jv3g</a></p>
<p>Release Date: 2022-10-11</p>
<p>Fix Resolution: loader-utils - 1.4.2,2.0.4,3.2.1</p>
</p>
</details>
<p></p>
|
non_defect
|
cve high detected in loader utils tgz autoclosed cve high severity vulnerability vulnerable library loader utils tgz utils for webpack loaders library home page a href path to dependency file ui package json path to vulnerable library ui node modules loader utils package json dependency hierarchy build angular tgz root library x loader utils tgz vulnerable library found in head commit a href found in base branch master vulnerability details a regular expression denial of service redos flaw was found in function interpolatename in interpolatename js in webpack loader utils via the resourcepath variable in interpolatename js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution loader utils
| 0
|
38,333
| 8,766,816,766
|
IssuesEvent
|
2018-12-17 17:51:54
|
NREL/EnergyPlus
|
https://api.github.com/repos/NREL/EnergyPlus
|
opened
|
ERV load not met by UnitarySystem
|
Defect PriorityHigh SeverityHigh
|
Issue overview
--------------
We've done a number of tests with ERVs and found a situation where the ERV load is not being met by the unitary system.
| File | ERV first | ERV last | No ERV |
|-|-|-|-|
| 1.idf | 104.24 | 104.23 | 73.68 |
| 2.idf | 110.54 | 104.23 | 73.68 |
[1.idf](https://github.com/NREL/EnergyPlus/files/2687245/1.idf.txt)
[2.idf](https://github.com/NREL/EnergyPlus/files/2687246/2.idf.txt)
1.idf has a single UnitarySystem with heating and cooling coils. 2.idf has two UnitarySystems, one with just a heating coil and one with just a cooling coil. The two files give identical results if the ERV is placed last in the ZoneHVAC:EquipmentList object or if there's no ERV, but they diverge when the ERV is first in the ZoneHVAC:EquipmentList object (which is where it should be placed so that its load is met by the unitary system).
Additionally, note that the energy is unchanged in 1.idf between the ERV being first and last in the ZoneHVAC:EquipmentList list, whereas 2.idf correctly shows a difference.
Thus it appears that the ERV in 1.idf, when placed first in the ZoneHVAC:EquipmentList list, is contributing a load that is not being served by the unitary system.
### Details
Some additional details for this issue (if relevant):
- Platform: Windows 10
- Version of EnergyPlus: 9.0.1
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here)
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
|
1.0
|
ERV load not met by UnitarySystem - Issue overview
--------------
We've done a number of tests with ERVs and found a situation where the ERV load is not being met by the unitary system.
| File | ERV first | ERV last | No ERV |
|-|-|-|-|
| 1.idf | 104.24 | 104.23 | 73.68 |
| 2.idf | 110.54 | 104.23 | 73.68 |
[1.idf](https://github.com/NREL/EnergyPlus/files/2687245/1.idf.txt)
[2.idf](https://github.com/NREL/EnergyPlus/files/2687246/2.idf.txt)
1.idf has a single UnitarySystem with heating and cooling coils. 2.idf has two UnitarySystems, one with just a heating coil and one with just a cooling coil. The two files give identical results if the ERV is placed last in the ZoneHVAC:EquipmentList object or if there's no ERV, but they diverge when the ERV is first in the ZoneHVAC:EquipmentList object (which is where it should be placed so that its load is met by the unitary system).
Additionally, note that the energy is unchanged in 1.idf between the ERV being first and last in the ZoneHVAC:EquipmentList list, whereas 2.idf correctly shows a difference.
Thus it appears that the ERV in 1.idf, when placed first in the ZoneHVAC:EquipmentList list, is contributing a load that is not being served by the unitary system.
### Details
Some additional details for this issue (if relevant):
- Platform: Windows 10
- Version of EnergyPlus: 9.0.1
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here)
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
|
defect
|
erv load not met by unitarysystem issue overview we ve done a number of tests with ervs and found a situation where the erv load is not being met by the unitary system erv first erv last no erv idf idf idf has a single unitarysystem with heating and cooling coils idf has two unitarysystems one with just a heating coil and one with just a cooling coil the two files give identical results if the erv is placed last in the zonehvac equipmentlist object or if there s no erv but they diverge when the erv is first in the zonehvac equipmentlist object which is where it should be placed so that its load is met by the unitary system additionally note the the energy is unchanged in idf between the erv being first and last in the zonehvac equipmentlist list whereas idf correctly shows a difference thus it appears that the erv in idf when placed first in the zonehvac equipmentlist list is contributing a load that is not being served by the unitary system details some additional details for this issue if relevant platform windows version of energyplus checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added list location of defect file here ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect
| 1
|
234,640
| 25,880,490,896
|
IssuesEvent
|
2022-12-14 10:55:16
|
rsoreq/WebGoat
|
https://api.github.com/repos/rsoreq/WebGoat
|
closed
|
CVE-2022-0536 (Medium) detected in follow-redirects-1.5.10.tgz - autoclosed
|
security vulnerability
|
## CVE-2022-0536 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>follow-redirects-1.5.10.tgz</b></p></summary>
<p>HTTP and HTTPS modules that follow redirects.</p>
<p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.5.10.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.5.10.tgz</a></p>
<p>Path to dependency file: /docs/package.json</p>
<p>Path to vulnerable library: /docs/node_modules/follow-redirects/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.3.tgz (Root Library)
- localtunnel-1.9.1.tgz
- axios-0.17.1.tgz
- :x: **follow-redirects-1.5.10.tgz** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 1.14.8.
<p>Publish Date: 2022-02-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0536>CVE-2022-0536</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536</a></p>
<p>Release Date: 2022-02-09</p>
<p>Fix Resolution (follow-redirects): 1.14.8</p>
<p>Direct dependency fix Resolution (browser-sync): 2.26.4</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
True
|
CVE-2022-0536 (Medium) detected in follow-redirects-1.5.10.tgz - autoclosed - ## CVE-2022-0536 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>follow-redirects-1.5.10.tgz</b></p></summary>
<p>HTTP and HTTPS modules that follow redirects.</p>
<p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.5.10.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.5.10.tgz</a></p>
<p>Path to dependency file: /docs/package.json</p>
<p>Path to vulnerable library: /docs/node_modules/follow-redirects/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.3.tgz (Root Library)
- localtunnel-1.9.1.tgz
- axios-0.17.1.tgz
- :x: **follow-redirects-1.5.10.tgz** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 1.14.8.
<p>Publish Date: 2022-02-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0536>CVE-2022-0536</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536</a></p>
<p>Release Date: 2022-02-09</p>
<p>Fix Resolution (follow-redirects): 1.14.8</p>
<p>Direct dependency fix Resolution (browser-sync): 2.26.4</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
non_defect
|
cve medium detected in follow redirects tgz autoclosed cve medium severity vulnerability vulnerable library follow redirects tgz http and https modules that follow redirects library home page a href path to dependency file docs package json path to vulnerable library docs node modules follow redirects package json dependency hierarchy browser sync tgz root library localtunnel tgz axios tgz x follow redirects tgz vulnerable library found in base branch develop vulnerability details exposure of sensitive information to an unauthorized actor in npm follow redirects prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution follow redirects direct dependency fix resolution browser sync check this box to open an automated fix pr
| 0
|
432,314
| 12,490,843,948
|
IssuesEvent
|
2020-06-01 01:53:26
|
edgedb/edgedb-cli
|
https://api.github.com/repos/edgedb/edgedb-cli
|
closed
|
\psql should use our postgres/bin, not the system one
|
high-priority to do
|
```
yury> \psql
Error executing command: Error running "psql" "-h" "/Users/yury/.edgedb" "-U" "edgedb" "-p" "55043" "-d" "edgedb"
```
^ I don't have system postgres, so there's no `psql` command in my `$PATH`.
Our Python repl code does this to locate the `psql` we built as part of our vendored postgres:
```
pg_config = buildmeta.get_pg_config_path()
psql = pg_config.parent / 'psql'
```
Please find a way to replicate this functionality.
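For reference, the Python repl logic above can be sketched as a small standalone helper. This is a minimal sketch, not the eventual CLI implementation: the function name `find_bundled_psql` and the example install path are hypothetical, assuming only that `psql` sits next to `pg_config` in the bundled Postgres `bin` directory.

```python
from pathlib import Path


def find_bundled_psql(pg_config_path: str) -> Path:
    """Locate the psql binary shipped alongside pg_config in the
    vendored Postgres bin directory (hypothetical helper mirroring
    the Python repl snippet above)."""
    pg_config = Path(pg_config_path)
    # psql lives in the same bin/ directory as pg_config
    return pg_config.parent / "psql"


# Example with a hypothetical EdgeDB install prefix:
print(find_bundled_psql("/usr/lib/edgedb/postgres/bin/pg_config"))
```

The same two-step lookup (resolve `pg_config`, then take a sibling path) should translate directly to the Rust CLI.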
|
1.0
|
\psql should use our postgres/bin, not the system one - ```
yury> \psql
Error executing command: Error running "psql" "-h" "/Users/yury/.edgedb" "-U" "edgedb" "-p" "55043" "-d" "edgedb"
```
^ I don't have system postgres, so there's no `psql` command in my `$PATH`.
Our Python repl code does this to locate the `psql` we built as part of our vendored postgres:
```
pg_config = buildmeta.get_pg_config_path()
psql = pg_config.parent / 'psql'
```
Please find a way to replicate this functionality.
|
non_defect
|
psql should use our postgres bin not the system one yury psql error executing command error running psql h users yury edgedb u edgedb p d edgedb i don t have system postgres so there s no psql command in my path our python repl code does this to locate the psql we built as part of our vendored postgres pg config buildmeta get pg config path psql pg config parent psql please find a way to replicate this functionality
| 0
|