Dataset schema (column name, dtype, and value range or distinct-class count):

| Column | Dtype | Values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 7–112 |
| repo_url | string | length 36–141 |
| action | string | 3 classes |
| title | string | length 1–744 |
| labels | string | length 4–574 |
| body | string | length 9–211k |
| index | string | 10 classes |
| text_combine | string | length 96–211k |
| label | string | 2 classes |
| text | string | length 96–188k |
| binary_label | int64 | 0 – 1 |
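The relationship between the `label` and `binary_label` columns can be sketched in pandas. This is a minimal illustration, not the dataset itself: the two rows below are toy values mirroring the schema, and in every sample record shown here `binary_label` is 1 exactly when `label` is `"process"`.

```python
import pandas as pd

# Two toy rows mirroring the schema above (values are illustrative, not from the dataset).
df = pd.DataFrame(
    {
        "type": ["IssuesEvent", "IssuesEvent"],
        "action": ["closed", "opened"],
        "title": ["Example process issue", "Example non-process issue"],
        "label": ["process", "non_process"],
        "binary_label": [1, 0],
    }
)

# binary_label is 1 exactly when label == "process" in the records shown here.
derived = (df["label"] == "process").astype(int)
assert (df["binary_label"] == derived).all()
```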
### Row 827,551

- id: 31,778,243,725
- type: IssuesEvent
- created_at: 2023-09-12 15:37:48
- repo: Budibase/budibase
- repo_url: https://api.github.com/repos/Budibase/budibase
- action: closed
- title: [BUDI-7343] Issue with latest version number
- labels: bug avalanche linear Low priority
- body:
**Hosting**
<!-- Delete as appropriate -->
- Self
- Method: Digital Ocean
- Budibase Version: 2.8.22
**Describe the bug**
When updating to version 2.8.22 I noticed the new version was shown as .8.22 (missing the 2)
**Screenshots**
Before updating:
<img width="459" alt="Screenshot 2023-07-25 at 10 11 10 AM" src="https://github.com/Budibase/budibase/assets/3512179/44c16172-e171-4e47-8b49-23d84fcbdf08">
After updating:
<img width="891" alt="Screenshot 2023-07-25 at 10 13 42 AM" src="https://github.com/Budibase/budibase/assets/3512179/af4e9b71-7822-48d0-bd06-c2837fda9337">
<sub>[BUDI-7343](https://linear.app/budibase/issue/BUDI-7343/issue-with-latest-version-number)</sub>
- index: 1.0
- text_combine:
[BUDI-7343] Issue with latest version number - **Hosting**
<!-- Delete as appropriate -->
- Self
- Method: Digital Ocean
- Budibase Version: 2.8.22
**Describe the bug**
When updating to version 2.8.22 I noticed the new version was shown as .8.22 (missing the 2)
**Screenshots**
Before updating:
<img width="459" alt="Screenshot 2023-07-25 at 10 11 10 AM" src="https://github.com/Budibase/budibase/assets/3512179/44c16172-e171-4e47-8b49-23d84fcbdf08">
After updating:
<img width="891" alt="Screenshot 2023-07-25 at 10 13 42 AM" src="https://github.com/Budibase/budibase/assets/3512179/af4e9b71-7822-48d0-bd06-c2837fda9337">
<sub>[BUDI-7343](https://linear.app/budibase/issue/BUDI-7343/issue-with-latest-version-number)</sub>
- label: non_process
- text:
issue with latest version number hosting self method digital ocean budibase version describe the bug when updating to version i noticed the new version was shown as missing the screenshots before updating img width alt screenshot at am src after updating img width alt screenshot at am src
- binary_label: 0
### Row 8,305

- id: 11,463,634,059
- type: IssuesEvent
- created_at: 2020-02-07 16:22:35
- repo: geneontology/go-ontology
- repo_url: https://api.github.com/repos/geneontology/go-ontology
- action: closed
- title: Possible unnecessary parents in symbiont branch
- labels: multi-species process
- body:
Should these be "regulation of the ACTUAL host process" (like programmed cell death) for example ? I thought not? Otherwise they become annotated to the host processes.
<img width="1035" alt="Screenshot 2020-01-29 at 16 59 52" src="https://user-images.githubusercontent.com/7359272/73379023-6939cc80-42b9-11ea-9128-c2bef1b72cba.png">
- index: 1.0
- text_combine:
Possible unnecessary parents in symbiont branch -
Should these be "regulation of the ACTUAL host process" (like programmed cell death) for example ? I thought not? Otherwise they become annotated to the host processes.
<img width="1035" alt="Screenshot 2020-01-29 at 16 59 52" src="https://user-images.githubusercontent.com/7359272/73379023-6939cc80-42b9-11ea-9128-c2bef1b72cba.png">
- label: process
- text:
possible unnecessary parents in symbiont branch should these be regulation of the actual host process like programmed cell death for example i thought not otherwise they become annotated to the host processes img width alt screenshot at src
- binary_label: 1
### Row 459,413

- id: 13,192,716,470
- type: IssuesEvent
- created_at: 2020-08-13 14:11:20
- repo: arfc/arfc.github.io
- repo_url: https://api.github.com/repos/arfc/arfc.github.io
- action: opened
- title: Add yourself to the website
- labels: Comp:Core Difficulty:1-Beginner Priority:2-Normal Status:3-Selected Type:Docs Type:Feature
- body:
## I'm submitting a ...
- [x] feature request
## Expected Behavior
All ARFC grad students should appear on the current people page: http://arfc.github.io/people/
## Actual Behavior
Amanda Bachmann doesn't appear yet.
## Steps to Reproduce the Problem
Go to http://arfc.npre.illinois.edu/people/ and observe.
## How can this issue be closed?
- [ ] @abachma2 should follow the instructions here: http://arfc.npre.illinois.edu/manual/guides/website
- [ ] Put your data in the _people.yml file in the section for graduate student data. You can use the "You" section as a template.
- [ ] Make a Pull request to this repository.
- [ ] Once you make a Pull Request to this website, @katyhuff will review it and merge it into the website!
- [ ] This issue can be closed when @amberhhunter appears on the current people page: http://arfc.github.io/people/
- index: 1.0
- text_combine:
Add yourself to the website - ## I'm submitting a ...
- [x] feature request
## Expected Behavior
All ARFC grad students should appear on the current people page: http://arfc.github.io/people/
## Actual Behavior
Amanda Bachmann doesn't appear yet.
## Steps to Reproduce the Problem
Go to http://arfc.npre.illinois.edu/people/ and observe.
## How can this issue be closed?
- [ ] @abachma2 should follow the instructions here: http://arfc.npre.illinois.edu/manual/guides/website
- [ ] Put your data in the _people.yml file in the section for graduate student data. You can use the "You" section as a template.
- [ ] Make a Pull request to this repository.
- [ ] Once you make a Pull Request to this website, @katyhuff will review it and merge it into the website!
- [ ] This issue can be closed when @amberhhunter appears on the current people page: http://arfc.github.io/people/
- label: non_process
- text:
add yourself to the website i m submitting a feature request expected behavior all arfc grad students should appear on the current people page actual behavior amanda bachmann doesn t appear yet steps to reproduce the problem go to and observe how can this issue be closed should follow the instructions here put your data in the people yml file in the section for graduate student data you can use the you section as a template make a pull request to this repository once you make a pull request to this website katyhuff will review it and merge it into the website this issue can be closed when amberhhunter appears on the current people page
- binary_label: 0
### Row 97,993

- id: 8,673,897,019
- type: IssuesEvent
- created_at: 2018-11-30 04:54:00
- repo: humera987/FXLabs-Test-Automation
- repo_url: https://api.github.com/repos/humera987/FXLabs-Test-Automation
- action: closed
- title: FXLabs Testing : ApiV1EventsOrgeventsGetPathParamPagesizeMysqlSqlInjectionTimebound
- labels: FXLabs Testing
- body:
Project : FXLabs Testing
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=YmU1MDFjMDMtMDNkYy00NjA5LThiZDktZWMzMWFjM2JkNGE1; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 04:46:34 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/events/orgevents?pageSize=
Request :
Response :
{
"timestamp" : "2018-11-30T04:46:34.736+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/events/orgevents"
}
Logs :
Assertion [@ResponseTime < 7000 OR @ResponseTime > 10000] resolved-to [443 < 7000 OR 443 > 10000] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot ---
- index: 1.0
- text_combine:
FXLabs Testing : ApiV1EventsOrgeventsGetPathParamPagesizeMysqlSqlInjectionTimebound - Project : FXLabs Testing
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=YmU1MDFjMDMtMDNkYy00NjA5LThiZDktZWMzMWFjM2JkNGE1; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 04:46:34 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/events/orgevents?pageSize=
Request :
Response :
{
"timestamp" : "2018-11-30T04:46:34.736+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/events/orgevents"
}
Logs :
Assertion [@ResponseTime < 7000 OR @ResponseTime > 10000] resolved-to [443 < 7000 OR 443 > 10000] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot ---
- label: non_process
- text:
fxlabs testing project fxlabs testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api events orgevents logs assertion resolved to result assertion resolved to result fx bot
- binary_label: 0
### Row 24,258

- id: 11,017,080,492
- type: IssuesEvent
- created_at: 2019-12-05 07:28:26
- repo: kyma-project/kyma
- repo_url: https://api.github.com/repos/kyma-project/kyma
- action: closed
- title: Update + Upgrade dex
- labels: area/security good first issue
- body:
<!-- Thank you for your contribution. Before you submit the issue:
1. Search open and closed issues for duplicates.
2. Read the contributing guidelines.
-->
**Description**
We have quite old dex charts in kyma, plus we have "old" dex release (2.9 current 2.16).
It's time to update that ;-) But we have to keep our changes (look a the configmap) compatible
AC:
- Updated dex charts based on the stable helm repository
- Upgraded dex image
<!-- Provide a clear and concise description of the feature. -->
**Reasons**
Keep up to date with dex bugfixes and features, use more up to date charts.
<!-- Explain why we should add this feature. Provide use cases to illustrate its benefits. -->
**Attachments**
<!-- Attach any files, links, code samples, or screenshots that will convince us to your idea. -->
- index: True
- text_combine:
Update + Upgrade dex - <!-- Thank you for your contribution. Before you submit the issue:
1. Search open and closed issues for duplicates.
2. Read the contributing guidelines.
-->
**Description**
We have quite old dex charts in kyma, plus we have "old" dex release (2.9 current 2.16).
It's time to update that ;-) But we have to keep our changes (look a the configmap) compatible
AC:
- Updated dex charts based on the stable helm repository
- Upgraded dex image
<!-- Provide a clear and concise description of the feature. -->
**Reasons**
Keep up to date with dex bugfixes and features, use more up to date charts.
<!-- Explain why we should add this feature. Provide use cases to illustrate its benefits. -->
**Attachments**
<!-- Attach any files, links, code samples, or screenshots that will convince us to your idea. -->
- label: non_process
- text:
update upgrade dex thank you for your contribution before you submit the issue search open and closed issues for duplicates read the contributing guidelines description we have quite old dex charts in kyma plus we have old dex release current it s time to update that but we have to keep our changes look a the configmap compatible ac updated dex charts based on the stable helm repository upgraded dex image reasons keep up to date with dex bugfixes and features use more up to date charts attachments
- binary_label: 0
### Row 172,677

- id: 21,054,721,522
- type: IssuesEvent
- created_at: 2022-04-01 01:09:08
- repo: vipinsun/cactus
- repo_url: https://api.github.com/repos/vipinsun/cactus
- action: opened
- title: CVE-2022-22965 (High) detected in spring-beans-5.2.0.M2.jar
- labels: security vulnerability
- body:
## CVE-2022-22965 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-beans-5.2.0.M2.jar</b></p></summary>
<p>Spring Beans</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /packages/cactus-plugin-ledger-connector-corda/src/main-server/kotlin/gen/kotlin-spring/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-beans/5.2.0.M2/spring-beans-5.2.0.M2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-beans/5.2.0.M2/c4aa2bb803602ebc26a7ee47628f6af106e1bf55/spring-beans-5.2.0.M2.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.0.M3.jar (Root Library)
- spring-web-5.2.0.M2.jar
- :x: **spring-beans-5.2.0.M2.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Framework before 5.2.20 and 5.3.x before 5.3.18 are vulnerable due to a vulnerability in Spring-beans which allows attackers under certain circumstances to achieve remote code execution, this vulnerability is also known as ״Spring4Shell״ or ״SpringShell״. The current POC related to the attack is done by creating a specially crafted request which manipulates ClassLoader to successfully achieve RCE (Remote Code Execution). Please note that the ease of exploitation may diverge by the code implementation.Currently, the exploit requires JDK 9 or higher, Apache Tomcat as the Servlet container, the application Packaged as WAR, and dependency on spring-webmvc or spring-webflux. Spring Framework 5.3.18 and 5.2.20 have already been released. WhiteSource's research team is carefully observing developments and researching the case. We will keep updating this page and our WhiteSource resources with updates.
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22965>CVE-2022-22965</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement">https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-beans:5.2.20.RELEASE,5.3.18</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
- index: True
- text_combine:
CVE-2022-22965 (High) detected in spring-beans-5.2.0.M2.jar - ## CVE-2022-22965 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-beans-5.2.0.M2.jar</b></p></summary>
<p>Spring Beans</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /packages/cactus-plugin-ledger-connector-corda/src/main-server/kotlin/gen/kotlin-spring/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-beans/5.2.0.M2/spring-beans-5.2.0.M2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-beans/5.2.0.M2/c4aa2bb803602ebc26a7ee47628f6af106e1bf55/spring-beans-5.2.0.M2.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.0.M3.jar (Root Library)
- spring-web-5.2.0.M2.jar
- :x: **spring-beans-5.2.0.M2.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Framework before 5.2.20 and 5.3.x before 5.3.18 are vulnerable due to a vulnerability in Spring-beans which allows attackers under certain circumstances to achieve remote code execution, this vulnerability is also known as ״Spring4Shell״ or ״SpringShell״. The current POC related to the attack is done by creating a specially crafted request which manipulates ClassLoader to successfully achieve RCE (Remote Code Execution). Please note that the ease of exploitation may diverge by the code implementation.Currently, the exploit requires JDK 9 or higher, Apache Tomcat as the Servlet container, the application Packaged as WAR, and dependency on spring-webmvc or spring-webflux. Spring Framework 5.3.18 and 5.2.20 have already been released. WhiteSource's research team is carefully observing developments and researching the case. We will keep updating this page and our WhiteSource resources with updates.
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22965>CVE-2022-22965</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement">https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-beans:5.2.20.RELEASE,5.3.18</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
- label: non_process
- text:
cve high detected in spring beans jar cve high severity vulnerability vulnerable library spring beans jar spring beans library home page a href path to dependency file packages cactus plugin ledger connector corda src main server kotlin gen kotlin spring pom xml path to vulnerable library home wss scanner repository org springframework spring beans spring beans jar home wss scanner gradle caches modules files org springframework spring beans spring beans jar dependency hierarchy spring boot starter web jar root library spring web jar x spring beans jar vulnerable library found in base branch master vulnerability details spring framework before and x before are vulnerable due to a vulnerability in spring beans which allows attackers under certain circumstances to achieve remote code execution this vulnerability is also known as ״ ״ or ״springshell״ the current poc related to the attack is done by creating a specially crafted request which manipulates classloader to successfully achieve rce remote code execution please note that the ease of exploitation may diverge by the code implementation currently the exploit requires jdk or higher apache tomcat as the servlet container the application packaged as war and dependency on spring webmvc or spring webflux spring framework and have already been released whitesource s research team is carefully observing developments and researching the case we will keep updating this page and our whitesource resources with updates publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring beans release step up your open source security game with whitesource
- binary_label: 0
### Row 12,372

- id: 14,896,360,336
- type: IssuesEvent
- created_at: 2021-01-21 10:18:06
- repo: prisma/prisma
- repo_url: https://api.github.com/repos/prisma/prisma
- action: closed
- title: Narrow type for `rejectOnNotFound`
- labels: kind/improvement process/candidate team/client
- body:
As of now, the return type of `findFirst` is still `User | null`, even when `rejectOnNotFound` is used.
We should narrow the type down to only `User`.
- index: 1.0
- text_combine:
Narrow type for `rejectOnNotFound` - As of now, the return type of `findFirst` is still `User | null`, even when `rejectOnNotFound` is used.
We should narrow the type down to only `User`.
- label: process
- text:
narrow type for rejectonnotfound as of now the return type of findfirst is still user null even when rejectonnotfound is used we should narrow the type down to only user
- binary_label: 1
### Row 26,119

- id: 2,684,180,309
- type: IssuesEvent
- created_at: 2015-03-28 18:43:37
- repo: ConEmu/old-issues
- repo_url: https://api.github.com/repos/ConEmu/old-issues
- action: opened
- title: ConsoleDetachKey does not work correctly on the 1st console
- labels: 2–5 stars bug imported Priority-Medium
- body:
_From [petr.ne...@mercurysolutions.eu](https://code.google.com/u/109079169662918468214/) on September 20, 2012 02:38:35_
Required information! OS version: Win7 SP1 x64 ConEmu version: ConEmuPack.120916.7z
Far version (if you are using Far Manager): Far30b2798.x64.20120912.7z *Bug description* I configured ConsoleDetachKey as CtrlAltX and when I run some command and press CtrlAltX the new console is opened and the command is displayed on the 1st console but is not producing any output. *Steps to reproduction* 1. Start ConEmu 2. Start e.g. ping www.google.com -t command
3. Press the console detach key (e.g. CtrlAltX)
4. Select back the 1st console window (the command is probably still running but not producing any output) - this is the issue
5 . Go back to the 2nd console window
6. Start e.g. ping www.google.com -t command again
7. Press the console detach key again (e.g. CtrlAltX)
8. Select the 2nd console window, the ping is still producing output - this is ok
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=713_
- index: 1.0
- text_combine:
ConsoleDetachKey does not work correctly on the 1st console - _From [petr.ne...@mercurysolutions.eu](https://code.google.com/u/109079169662918468214/) on September 20, 2012 02:38:35_
Required information! OS version: Win7 SP1 x64 ConEmu version: ConEmuPack.120916.7z
Far version (if you are using Far Manager): Far30b2798.x64.20120912.7z *Bug description* I configured ConsoleDetachKey as CtrlAltX and when I run some command and press CtrlAltX the new console is opened and the command is displayed on the 1st console but is not producing any output. *Steps to reproduction* 1. Start ConEmu 2. Start e.g. ping www.google.com -t command
3. Press the console detach key (e.g. CtrlAltX)
4. Select back the 1st console window (the command is probably still running but not producing any output) - this is the issue
5 . Go back to the 2nd console window
6. Start e.g. ping www.google.com -t command again
7. Press the console detach key again (e.g. CtrlAltX)
8. Select the 2nd console window, the ping is still producing output - this is ok
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=713_
- label: non_process
- text:
consoledetachkey does not work correctly on the console from on september required information os version conemu version conemupack far version if you are using far manager bug description i configured consoledetachkey as ctrlaltx and when i run some command and press ctrlaltx the new console is opened and the command is displayed on the console but is not producing any output steps to reproduction start conemu start e g ping t command press the console detach key e g ctrlaltx select back the console window the command is probably still running but not producing any output this is the issue go back to the console window start e g ping t command again press the console detach key again e g ctrlaltx select the console window the ping is still producing output this is ok original issue
- binary_label: 0
### Row 13,649

- id: 16,358,744,011
- type: IssuesEvent
- created_at: 2021-05-14 05:37:21
- repo: Vanuatu-National-Statistics-Office/vnso-RAP-tradeStats-materials
- repo_url: https://api.github.com/repos/Vanuatu-National-Statistics-Office/vnso-RAP-tradeStats-materials
- action: closed
- title: Updated code to use CSV
- labels: coding data processing help wanted
- body:
- Blank column in raw data that emerges, seems to be column X. How can I search Dataframe and remove empty column?
- Uploading in CSV loose leading zeros. Have changed format of columns in csv (ex 0000), however how can I do this in R?
- I've tried a few things but converting dates doesn't seem to be working. Closest I got was to have days with 4 figures (0008 for 8th)
- BEC still an issue have downloaded UN file and then mapped to our coding structure and definitions, there are duplicates though which mean when merge observations jumps up. I will go back and get rid of duplicates and re-try
- Code for missing observations not working
- Checking the confidence intervals isn't working properly
Code available branch [automated-highlights-report-new](https://github.com/Vanuatu-National-Statistics-Office/vnso-RAP-tradeStats-materials/tree/automated-highlights-report-new).
- index: 1.0
- text_combine:
Updated code to use CSV - - Blank column in raw data that emerges, seems to be column X. How can I search Dataframe and remove empty column?
- Uploading in CSV loose leading zeros. Have changed format of columns in csv (ex 0000), however how can I do this in R?
- I've tried a few things but converting dates doesn't seem to be working. Closest I got was to have days with 4 figures (0008 for 8th)
- BEC still an issue have downloaded UN file and then mapped to our coding structure and definitions, there are duplicates though which mean when merge observations jumps up. I will go back and get rid of duplicates and re-try
- Code for missing observations not working
- Checking the confidence intervals isn't working properly
Code available branch [automated-highlights-report-new](https://github.com/Vanuatu-National-Statistics-Office/vnso-RAP-tradeStats-materials/tree/automated-highlights-report-new).
- label: process
- text:
updated code to use csv blank column in raw data that emerges seems to be column x how can i search dataframe and remove empty column uploading in csv loose leading zeros have changed format of columns in csv ex however how can i do this in r i ve tried a few things but converting dates doesn t seem to be working closest i got was to have days with figures for bec still an issue have downloaded un file and then mapped to our coding structure and definitions there are duplicates though which mean when merge observations jumps up i will go back and get rid of duplicates and re try code for missing observations not working checking the confidence intervals isn t working properly code available branch
- binary_label: 1
### Row 15,671

- id: 19,847,336,485
- type: IssuesEvent
- created_at: 2022-01-21 08:21:05
- repo: ooi-data/CE09OSSM-SBD12-05-WAVSSA000-telemetered-wavss_a_dcl_motion
- repo_url: https://api.github.com/repos/ooi-data/CE09OSSM-SBD12-05-WAVSSA000-telemetered-wavss_a_dcl_motion
- action: opened
- title: 🛑 Processing failed: ValueError
- labels: process
- body:
## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T08:21:04.782320.
## Details
Flow name: `CE09OSSM-SBD12-05-WAVSSA000-telemetered-wavss_a_dcl_motion`
Task name: `processing_task`
Error type: `ValueError`
Error message: conflicting sizes for dimension 'wavss_a_buoymotion_time_dim_0': length 4146 on 'wavss_a_buoymotion_time_dim_0' and length 1382 on {'time': 'date_string', 'wavss_move': 'east_offset_array', 'wavss_a_buoymotion_time_dim_0': 'wavss_a_buoymotion_time'}
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 77, in finalize_data_stream
temp_ds = xr.open_dataset(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/api.py", line 495, in open_dataset
backend_ds = backend.open_dataset(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 838, in open_dataset
ds = store_entrypoint.open_dataset(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/store.py", line 39, in open_dataset
ds = Dataset(vars, attrs=attrs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/dataset.py", line 751, in __init__
variables, coord_names, dims, indexes, _ = merge_data_and_coords(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/merge.py", line 488, in merge_data_and_coords
return merge_core(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/merge.py", line 645, in merge_core
dims = calculate_dimensions(variables)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/dataset.py", line 205, in calculate_dimensions
raise ValueError(
ValueError: conflicting sizes for dimension 'wavss_a_buoymotion_time_dim_0': length 4146 on 'wavss_a_buoymotion_time_dim_0' and length 1382 on {'time': 'date_string', 'wavss_move': 'east_offset_array', 'wavss_a_buoymotion_time_dim_0': 'wavss_a_buoymotion_time'}
```
</details>
- index: 1.0
- text_combine:
🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T08:21:04.782320.
## Details
Flow name: `CE09OSSM-SBD12-05-WAVSSA000-telemetered-wavss_a_dcl_motion`
Task name: `processing_task`
Error type: `ValueError`
Error message: conflicting sizes for dimension 'wavss_a_buoymotion_time_dim_0': length 4146 on 'wavss_a_buoymotion_time_dim_0' and length 1382 on {'time': 'date_string', 'wavss_move': 'east_offset_array', 'wavss_a_buoymotion_time_dim_0': 'wavss_a_buoymotion_time'}
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 77, in finalize_data_stream
temp_ds = xr.open_dataset(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/api.py", line 495, in open_dataset
backend_ds = backend.open_dataset(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 838, in open_dataset
ds = store_entrypoint.open_dataset(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/store.py", line 39, in open_dataset
ds = Dataset(vars, attrs=attrs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/dataset.py", line 751, in __init__
variables, coord_names, dims, indexes, _ = merge_data_and_coords(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/merge.py", line 488, in merge_data_and_coords
return merge_core(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/merge.py", line 645, in merge_core
dims = calculate_dimensions(variables)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/dataset.py", line 205, in calculate_dimensions
raise ValueError(
ValueError: conflicting sizes for dimension 'wavss_a_buoymotion_time_dim_0': length 4146 on 'wavss_a_buoymotion_time_dim_0' and length 1382 on {'time': 'date_string', 'wavss_move': 'east_offset_array', 'wavss_a_buoymotion_time_dim_0': 'wavss_a_buoymotion_time'}
```
</details>
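The error in this record comes from xarray's dimension-size check: every variable that shares a dimension name must agree on that dimension's length. A minimal stdlib sketch of that check (a hypothetical helper mirroring the failing logic, not xarray's actual code):

```python
def calculate_dimensions(variables):
    """Given {var_name: {dim_name: length}}, return {dim_name: length},
    raising ValueError when two variables disagree on a dimension's
    length -- the same condition that fails in the traceback above."""
    dims = {}
    for var_name, var_dims in variables.items():
        for dim, length in var_dims.items():
            if dim in dims and dims[dim] != length:
                raise ValueError(
                    f"conflicting sizes for dimension {dim!r}: "
                    f"length {length} on {var_name!r} and length {dims[dim]}"
                )
            dims[dim] = length
    return dims

# The dataset in the traceback mixes a 4146-long and a 1382-long variable
# on the same dimension name:
variables = {
    "wavss_a_buoymotion_time": {"wavss_a_buoymotion_time_dim_0": 4146},
    "east_offset_array": {"wavss_a_buoymotion_time_dim_0": 1382},
}
try:
    calculate_dimensions(variables)
except ValueError as e:
    print(e)
```

Fixing the source data so both variables are written with a consistent length (or with distinct dimension names) avoids the error.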
|
process
|
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name telemetered wavss a dcl motion task name processing task error type valueerror error message conflicting sizes for dimension wavss a buoymotion time dim length on wavss a buoymotion time dim and length on time date string wavss move east offset array wavss a buoymotion time dim wavss a buoymotion time traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream temp ds xr open dataset file srv conda envs notebook lib site packages xarray backends api py line in open dataset backend ds backend open dataset file srv conda envs notebook lib site packages xarray backends zarr py line in open dataset ds store entrypoint open dataset file srv conda envs notebook lib site packages xarray backends store py line in open dataset ds dataset vars attrs attrs file srv conda envs notebook lib site packages xarray core dataset py line in init variables coord names dims indexes merge data and coords file srv conda envs notebook lib site packages xarray core merge py line in merge data and coords return merge core file srv conda envs notebook lib site packages xarray core merge py line in merge core dims calculate dimensions variables file srv conda envs notebook lib site packages xarray core dataset py line in calculate dimensions raise valueerror valueerror conflicting sizes for dimension wavss a buoymotion time dim length on wavss a buoymotion time dim and length on time date string wavss move east offset array wavss a buoymotion time dim wavss a buoymotion time
| 1
|
39,703
| 10,373,563,842
|
IssuesEvent
|
2019-09-09 07:36:07
|
widelands/widelands-issue-migration2
|
https://api.github.com/repos/widelands/widelands-issue-migration2
|
opened
|
CMake cleanup necessary
|
Fix Released Wishlist buildsystem cmake
|
CMake needs a couple of (sanity) checks for current guesses, a few status messages regarding current settings, and a general cleanup of file internal structure and filesystem structure.
|
1.0
|
CMake cleanup necessary - CMake needs a couple of (sanity) checks for current guesses, a few status messages regarding current settings, and a general cleanup of file internal structure and filesystem structure.
|
non_process
|
cmake cleanup necessary cmake needs a couple of sanity checks for current guesses a few status messages regarding current settings and a general cleanup of file internal structure and filesystem structure
| 0
|
4,739
| 7,202,777,469
|
IssuesEvent
|
2018-02-06 06:10:19
|
bwsw/cloudstack-ui
|
https://api.github.com/repos/bwsw/cloudstack-ui
|
reopened
|
Add additional fields to SO chooser component in VM creation
|
Epic requirement
|
This issue is a continuation of #502
**Acceptance criteria**
1. Characteristics should be separated in "main" (основные) - was done in #502 and additional (дополнительные)
2. A table should contain main characteristics of each service offering. Additional characteristics should open only when needed
- Main characteristics:
-- CPU Cores (Ядра CPU)
-- CPU (MHz) (CPU (МГц))
-- Memory (Память (МБ))
-- Network Rate (Mb/s) (скорость сети)
**- Additional characteristics: all the rest from original CS interface**
3. **A button "Show additional fields" ("Дополнительные характеристики")** should be placed above cancel and save(change)
4. If the button is clicked - additional columns should appear on the right side, user can see it by scrolling
5. Document this feature in Custom features local document and in Config Guide
**Pay attention to units of measure!** (not KB, but MB etc)
**Connected feature ID:**
- _vm_creation_so_
- _vm_creation_so_custom_
- _vm_SO_display_
- _vm_SO_change_
|
1.0
|
Add additional fields to SO chooser component in VM creation - This issue is a continuation of #502
**Acceptance criteria**
1. Characteristics should be separated in "main" (основные) - was done in #502 and additional (дополнительные)
2. A table should contain main characteristics of each service offering. Additional characteristics should open only when needed
- Main characteristics:
-- CPU Cores (Ядра CPU)
-- CPU (MHz) (CPU (МГц))
-- Memory (Память (МБ))
-- Network Rate (Mb/s) (скорость сети)
**- Additional characteristics: all the rest from original CS interface**
3. **A button "Show additional fields" ("Дополнительные характеристики")** should be placed above cancel and save(change)
4. If the button is clicked - additional columns should appear on the right side, user can see it by scrolling
5. Document this feature in Custom features local document and in Config Guide
**Pay attention to units of measure!** (not KB, but MB etc)
**Connected feature ID:**
- _vm_creation_so_
- _vm_creation_so_custom_
- _vm_SO_display_
- _vm_SO_change_
|
non_process
|
add additional fields to so chooser component in vm creation this issue is a continuation of acceptance criteria characteristics should be separated in main основные was done in and additional дополнительные a table should contain main characteristics of each service offering additional characteristics should open only when needed main characteristics cpu cores ядра cpu cpu mhz cpu мгц memory память мб network rate mb s скорость сети additional characteristics all the rest from original cs interface a button show additional fields дополнительные характеристики should be placed above cancel and save change if the button is clicked additional columns should appear on the right side user can see it by scrolling document this feature in custom features local document and in config guide pay attention to units of measure not kb but mb etc connected feature id vm creation so vm creation so custom vm so display vm so change
| 0
|
188,555
| 15,164,636,911
|
IssuesEvent
|
2021-02-12 14:01:06
|
kinvolk/lokomotive
|
https://api.github.com/repos/kinvolk/lokomotive
|
opened
|
Create a document for recycling rook-ceph storage pool
|
area/storage kind/documentation
|
This document should show how to bring up new worker pool, let the data synchronise, delete the OSDs, delete the MONS, delete the MGRs and once everything is on the new worker pools then delete the older nodes.
|
1.0
|
Create a document for recycling rook-ceph storage pool - This document should show how to bring up new worker pool, let the data synchronise, delete the OSDs, delete the MONS, delete the MGRs and once everything is on the new worker pools then delete the older nodes.
|
non_process
|
create a document for recycling rook ceph storage pool this document should show how to bring up new worker pool let the data synchronise delete the osds delete the mons delete the mgrs and once everything is on the new worker pools then delete the older nodes
| 0
|
21,223
| 28,310,429,648
|
IssuesEvent
|
2023-04-10 14:57:05
|
TUM-Dev/NavigaTUM
|
https://api.github.com/repos/TUM-Dev/NavigaTUM
|
opened
|
[Entry] [5532.02.211]: WC inconsistency
|
entry webform delete-after-processing
|
Signed as a Damen-WC. The WC itself can be designated as gender neutral, but as long as the sign remains Damen, it should be entered as such.
|
1.0
|
[Entry] [5532.02.211]: WC inconsistency - Signed as a Damen-WC. The WC itself can be designated as gender neutral, but as long as the sign remains Damen, it should be entered as such.
|
process
|
wc inconsistency signed as a damen wc the wc itself can be designated as gender neutral but as long as the sign remains damen it should be entered as such
| 1
|
18,290
| 24,393,260,261
|
IssuesEvent
|
2022-10-04 16:54:37
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
Defining nested Tail-based sampling policies in OpenTelemetry Collector
|
waiting for author processor/tailsampling
|
When performing tail-based sampling with various services, there is normally a need to sample traces based on a combination of policies like latency/string attribute exclusion/error status in spans etc. These conditions can be defined using `and` / `or` conditions.
For example: Let's say we need to build tail-based sampling policies for services A and B based on below conditions:
**Policy 1** - For service A, sampling is done either on the string attribute `should_sample` as `yes` or on the `latency>3000 ms`.
```
(
(Service == A)
AND
((latency > 3000ms) OR (string_attribute 'should_sample' == 'yes'))
)
```
**Policy 2 -** For service B , sampling is done either on the string attribute `should_sample` as `no` or (`latency>4000ms `and containing an `error` status code).
```
(
(Service = B)
AND
(
((latency > 4000) AND (status_code = ERROR))
OR
(string_attribute 'should_sample' == 'no')
)
)
```
I am unable to implement these policies using `and` policies.
Is it currently possible to implement this using the existing policies in the OpenTelemetry Collector config? Any suggestions/alternatives to try?
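The two combined conditions in this record can be written as plain predicates to make the intended boolean structure explicit (a hedged Python sketch of the logic only — not the collector's actual `and`/composite policy configuration):

```python
def policy_1(span):
    # Service A: sample on should_sample == "yes" OR latency > 3000 ms
    return span["service"] == "A" and (
        span["latency_ms"] > 3000 or span.get("should_sample") == "yes"
    )

def policy_2(span):
    # Service B: sample on should_sample == "no"
    # OR (latency > 4000 ms AND status is ERROR)
    return span["service"] == "B" and (
        (span["latency_ms"] > 4000 and span["status"] == "ERROR")
        or span.get("should_sample") == "no"
    )

span = {"service": "A", "latency_ms": 3500, "status": "OK"}
print(policy_1(span))  # True: latency exceeds 3000 ms
```

The open question in the issue is whether this nesting — a service match ANDed with an OR of sub-conditions — can be expressed with the collector's built-in policy types.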
|
1.0
|
Defining nested Tail-based sampling policies in OpenTelemetry Collector - When performing tail-based sampling with various services, there is normally a need to sample traces based on a combination of policies like latency/string attribute exclusion/error status in spans etc. These conditions can be defined using `and` / `or` conditions.
For example: Let's say we need to build tail-based sampling policies for services A and B based on below conditions:
**Policy 1** - For service A, sampling is done either on the string attribute `should_sample` as `yes` or on the `latency>3000 ms`.
```
(
(Service == A)
AND
((latency > 3000ms) OR (string_attribute 'should_sample' == 'yes'))
)
```
**Policy 2 -** For service B , sampling is done either on the string attribute `should_sample` as `no` or (`latency>4000ms `and containing an `error` status code).
```
(
(Service = B)
AND
(
((latency > 4000) AND (status_code = ERROR))
OR
(string_attribute 'should_sample' == 'no')
)
)
```
I am unable to implement these policies using `and` policies.
Is it currently possible to implement this using the existing policies in the OpenTelemetry Collector config? Any suggestions/alternatives to try?
|
process
|
defining nested tail based sampling policies in opentelemetry collector when performing tail based sampling with various services there is a normally a need to sample traces based on a combination of policies like latency string attribute exclusion error status in spans etc these conditions can be defined using and or conditions for example let s say we need to build tail based sampling policies for services a and b based on below conditions policy for service a sampling is done either on the string attribute should sample as yes or on the latency ms service a and latency or string attribute should sample yes policy for service b sampling is done either on the string attribute should sample as no or latency and containing an error status code service b and latency and status code error or string attribute should sample no i am unable to implement these policies using and policies is it currently possible to implement this using the existing policies in the opentelemetry collector config any suggestions alternatives to try
| 1
|
6,366
| 9,417,643,353
|
IssuesEvent
|
2019-04-10 17:14:05
|
pelias/whosonfirst
|
https://api.github.com/repos/pelias/whosonfirst
|
closed
|
Gracefully handle git-lfs errors
|
processed
|
While the [installation docs](http://pelias.io/install.html) now suggest using the downloader script, some users will continue to clone whosonfirst data using git (either because they have a good reason or because the docs used to tell them to do it that way). One confusing aspect of the git clone approach is that without [git-lfs](https://git-lfs.github.com/) installed, many files in a clone from github are incomplete, and only contain the tiny bit of metadata needed for git-lfs to fetch them. This makes them essentially corrupted in the eyes of our importers..
The importer [already does this](https://github.com/pelias/whosonfirst/blob/master/src/components/loadJSON.js#L17-L19) for the `geojson` files, but not the meta files, which get interpreted as CSVs with zero records.
All files read from whosonfirst data first have to be checked to see if they are a git-lfs file, and in that case a friendly, actionable warning should be emitted, and the importer should stop.
Note: this is related to, but _not_ a duplicate of https://github.com/pelias/wof-pip-service/issues/68
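A git-lfs pointer stub is a small text file whose first line is `version https://git-lfs.github.com/spec/v1`, so the check this issue asks for can be sketched as (hypothetical helper name, assumed file layout per the git-lfs pointer spec):

```python
def is_git_lfs_pointer(path):
    """Return True when the file looks like a git-lfs pointer stub
    rather than the real content (e.g. a geojson or meta file)."""
    try:
        with open(path, "rb") as f:
            first_line = f.readline(200)
    except OSError:
        return False
    return first_line.startswith(b"version https://git-lfs.github.com/spec/v1")
```

When this returns True, the importer could emit an actionable message (for example, "install git-lfs and run `git lfs pull`") and stop, instead of treating the stub as corrupt data.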
|
1.0
|
Gracefully handle git-lfs errors - While the [installation docs](http://pelias.io/install.html) now suggest using the downloader script, some users will continue to clone whosonfirst data using git (either because they have a good reason or because the docs used to tell them to do it that way). One confusing aspect of the git clone approach is that without [git-lfs](https://git-lfs.github.com/) installed, many files in a clone from github are incomplete, and only contain the tiny bit of metadata needed for git-lfs to fetch them. This makes them essentially corrupted in the eyes of our importers..
The importer [already does this](https://github.com/pelias/whosonfirst/blob/master/src/components/loadJSON.js#L17-L19) for the `geojson` files, but not the meta files, which get interpreted as CSVs with zero records.
All files read from whosonfirst data first have to be checked to see if they are a git-lfs file, and in that case a friendly, actionable warning should be emitted, and the importer should stop.
Note: this is related to, but _not_ a duplicate of https://github.com/pelias/wof-pip-service/issues/68
|
process
|
gracefully handle git lfs errors while the now suggest using the downloader script some users will continue to clone whosonfirst data using git either because they have a good reason or because the docs used to tell them to do it that way one confusing aspect of the git clone approach is that without installed many files in a clone from github are incomplete and only contain the tiny bit of metadata needed for git lfs to fetch them this makes them essentially corrupted in the eyes of our importers the importer for the geojson files but not the meta files which get interpreted as csvs with zero records all files read from whosonfirst data first have to be checked to see if they are a git lfs file and in that case a friendly actionable warning should be emitted and the importer should stop note this is related to but not a duplicate of
| 1
|
174,739
| 13,508,784,030
|
IssuesEvent
|
2020-09-14 08:15:32
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: tpchvec/perf failed
|
C-test-failure O-roachtest O-robot branch-release-20.1 release-blocker
|
[(roachtest).tpchvec/perf failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2267783&tab=buildLog) on [release-20.1@310ba51c4c75d351c5fdb7ec98f57be21b45b751](https://github.com/cockroachdb/cockroach/commits/310ba51c4c75d351c5fdb7ec98f57be21b45b751):
```
The test failed on branch=release-20.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/tpchvec/perf/run_1
tpchvec.go:375,tpchvec.go:680,tpchvec.go:692,test_runner.go:754: unexpectedly didn't find a line with "Direct link: " prefix in EXPLAIN ANALYZE (DEBUG) output
```
<details><summary>More</summary><p>
Artifacts: [/tpchvec/perf](https://teamcity.cockroachdb.com/viewLog.html?buildId=2267783&tab=artifacts#/tpchvec/perf)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Atpchvec%2Fperf.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: tpchvec/perf failed - [(roachtest).tpchvec/perf failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2267783&tab=buildLog) on [release-20.1@310ba51c4c75d351c5fdb7ec98f57be21b45b751](https://github.com/cockroachdb/cockroach/commits/310ba51c4c75d351c5fdb7ec98f57be21b45b751):
```
The test failed on branch=release-20.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/tpchvec/perf/run_1
tpchvec.go:375,tpchvec.go:680,tpchvec.go:692,test_runner.go:754: unexpectedly didn't find a line with "Direct link: " prefix in EXPLAIN ANALYZE (DEBUG) output
```
<details><summary>More</summary><p>
Artifacts: [/tpchvec/perf](https://teamcity.cockroachdb.com/viewLog.html?buildId=2267783&tab=artifacts#/tpchvec/perf)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Atpchvec%2Fperf.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
non_process
|
roachtest tpchvec perf failed on the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts tpchvec perf run tpchvec go tpchvec go tpchvec go test runner go unexpectedly didn t find a line with direct link prefix in explain analyze debug output more artifacts powered by
| 0
|
5,273
| 8,059,855,230
|
IssuesEvent
|
2018-08-03 00:12:49
|
GoogleCloudPlatform/cloud-debug-nodejs
|
https://api.github.com/repos/GoogleCloudPlatform/cloud-debug-nodejs
|
closed
|
Fix flaky tests
|
priority: p1 release blocking type: process
|
Determine the root cause of test flakiness. The flakiness of the tests is currently the main limiting factor causing PRs not to land.
|
1.0
|
Fix flaky tests - Determine the root cause of test flakiness. The flakiness of the tests is currently the main limiting factor causing PRs not to land.
|
process
|
fix flaky tests determine the root cause of test flakiness the flakiness of the tests is currently the main limiting factor causing prs not to land
| 1
|
152,482
| 5,847,829,941
|
IssuesEvent
|
2017-05-10 19:26:12
|
uclaradio/uclaradio
|
https://api.github.com/repos/uclaradio/uclaradio
|
closed
|
Pledge Drive Countdown
|
feature High Priority in progress
|
Add a nifty lil' countdown to Pledge Drive in our top banner (where SoTM would be). Pledge Drive promotion is hard launching May 1st, so it needs to be done by then.
|
1.0
|
Pledge Drive Countdown - Add a nifty lil' countdown to Pledge Drive in our top banner (where SoTM would be). Pledge Drive promotion is hard launching May 1st, so it needs to be done by then.
|
non_process
|
pledge drive countdown add a nifty lil countdown to pledge drive in our top banner where sotm would be pledge drive promotion is hard launching may so it needs to be done by then
| 0
|
151,587
| 23,845,126,052
|
IssuesEvent
|
2022-09-06 13:30:17
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
[Design] My VA Audit UX Improvements: Updates to Static Healthcare Links
|
design authenticated-experience my-va-audit
|
## Background
The conditional logic that drives the static links under the Healthcare Link section in My VA needs to be reevaluated. We decided to always show the static healthcare links.
- "schedule and view your appointments", and "view your prescriptions" should always be shown as long as the user has healthcare through the VA.
- Consolidate "Get your VA lab results" and "view your lab results" links
- Remove "send a secure message" link
## Tasks
- [x] Update conditional logic and design
- [x] Review with content team
- [x] Review changes with team
## Acceptance Criteria
- [x] Healthcare static link section design is updated
## Resources
- [current test cases](https://github.com/department-of-veterans-affairs/va.gov-team-sensitive/blob/master/Administrative/vagov-users/staging-test-accounts-myvaaudit.md)
|
1.0
|
[Design] My VA Audit UX Improvements: Updates to Static Healthcare Links - ## Background
The conditional logic that drives the static links under the Healthcare Link section in My VA needs to be reevaluated. We decided to always show the static healthcare links.
- "schedule and view your appointments", and "view your prescriptions" should always be shown as long as the user has healthcare through the VA.
- Consolidate "Get your VA lab results" and "view your lab results" links
- Remove "send a secure message" link
## Tasks
- [x] Update conditional logic and design
- [x] Review with content team
- [x] Review changes with team
## Acceptance Criteria
- [x] Healthcare static link section design is updated
## Resources
- [current test cases](https://github.com/department-of-veterans-affairs/va.gov-team-sensitive/blob/master/Administrative/vagov-users/staging-test-accounts-myvaaudit.md)
|
non_process
|
my va audit ux improvements updates to static healthcare links background the conditional logic that drives the static links under the healthcare link section in my va needs to be reevaluated we decided to always show the static healthcare links schedule and view your appointments and view your prescriptions should always be shown as long as the user has healthcare through the va consolidate get your va lab results and view your lab results links remove send a secure message link tasks update conditional logic and design review with content team review changes with team acceptance criteria healthcare static link section design is updated resources
| 0
|
19,707
| 26,053,445,665
|
IssuesEvent
|
2022-12-22 21:26:48
|
MPMG-DCC-UFMG/C01
|
https://api.github.com/repos/MPMG-DCC-UFMG/C01
|
opened
|
Interface de passos com Vue.js - Limpeza do código
|
[0] Desenvolvimento [2] Média Prioridade [1] Aprimoramento [3] Processamento Dinâmico
|
## Comportamento Esperado
Deve ser feita uma "limpeza" no código da interface utilizando o Vue, para garantir que nenhum código anterior desnecessário permaneça. Além disso, alguns aprimoramentos não diretamente decorrentes da tradução podem ser feitos. Por fim, queremos que boas práticas sejam seguidas, para que o novo código seja de fácil manutenção futura.
## Comportamento Atual
Algumas questões precisam ser tratadas:
- Arquivos `steps.js` e `step_block.js` ainda estão no repositório, para permitir estudar a versão anterior da interface durante a refatoração. Idealmente queremos remover esses arquivos ao fim do processo.
- Lista de passos é carregada através do arquivo `steps_signature.json`, e ainda sim alguns passos são adicionados manualmente no Javascript. Idealmente esses passos devem vir todos da mesma fonte sem necessidade de intervenção adicional. Além disso, a lista de passos é armazenada globalmente como uma propriedade da variável `window`, o que não é uma boa ideia de forma geral.
- A função `param_to_placeholder` trata os placeholders de alguns campos específicos. Assim como no item anterior, gostaríamos de ter uma única fonte para esses dados, e evitar intervenções adicionais desnecessárias. Tratando o item anterior e esse, poderíamos considerar a possibilidade de remover o arquivo `utils.js`.
- Componente Step tem muitos eventos (para tratar as alterações possíveis em passos e parâmetros), não sei se tem alguma prática comum de Vue que melhora isso ou se podemos manter dessa forma.
- Alguns elementos na interface não estão bem encaixados visualmente (em especial os parâmetros de passos).
## Sistema
Branch `issue-882`.
|
1.0
|
Interface de passos com Vue.js - Limpeza do código - ## Comportamento Esperado
Deve ser feita uma "limpeza" no código da interface utilizando o Vue, para garantir que nenhum código anterior desnecessário permaneça. Além disso, alguns aprimoramentos não diretamente decorrentes da tradução podem ser feitos. Por fim, queremos que boas práticas sejam seguidas, para que o novo código seja de fácil manutenção futura.
## Comportamento Atual
Algumas questões precisam ser tratadas:
- Arquivos `steps.js` e `step_block.js` ainda estão no repositório, para permitir estudar a versão anterior da interface durante a refatoração. Idealmente queremos remover esses arquivos ao fim do processo.
- Lista de passos é carregada através do arquivo `steps_signature.json`, e ainda sim alguns passos são adicionados manualmente no Javascript. Idealmente esses passos devem vir todos da mesma fonte sem necessidade de intervenção adicional. Além disso, a lista de passos é armazenada globalmente como uma propriedade da variável `window`, o que não é uma boa ideia de forma geral.
- A função `param_to_placeholder` trata os placeholders de alguns campos específicos. Assim como no item anterior, gostaríamos de ter uma única fonte para esses dados, e evitar intervenções adicionais desnecessárias. Tratando o item anterior e esse, poderíamos considerar a possibilidade de remover o arquivo `utils.js`.
- Componente Step tem muitos eventos (para tratar as alterações possíveis em passos e parâmetros), não sei se tem alguma prática comum de Vue que melhora isso ou se podemos manter dessa forma.
- Alguns elementos na interface não estão bem encaixados visualmente (em especial os parâmetros de passos).
## Sistema
Branch `issue-882`.
|
process
|
interface de passos com vue js limpeza do código comportamento esperado deve ser feita uma limpeza no código da interface utilizando o vue para garantir que nenhum código anterior desnecessário permaneça além disso alguns aprimoramentos não diretamente decorrentes da tradução podem ser feitos por fim queremos que boas práticas sejam seguidas para que o novo código seja de fácil manutenção futura comportamento atual algumas questões precisam ser tratadas arquivos steps js e step block js ainda estão no repositório para permitir estudar a versão anterior da interface durante a refatoração idealmente queremos remover esses arquivos ao fim do processo lista de passos é carregada através do arquivo steps signature json e ainda sim alguns passos são adicionados manualmente no javascript idealmente esses passos devem vir todos da mesma fonte sem necessidade de intervenção adicional além disso a lista de passos é armazenada globalmente como uma propriedade da variável window o que não é uma boa ideia de forma geral a função param to placeholder trata os placeholders de alguns campos específicos assim como no item anterior gostaríamos de ter uma única fonte para esses dados e evitar intervenções adicionais desnecessárias tratando o item anterior e esse poderíamos considerar a possibilidade de remover o arquivo utils js componente step tem muitos eventos para tratar as alterações possíveis em passos e parâmetros não sei se tem alguma prática comum de vue que melhora isso ou se podemos manter dessa forma alguns elementos na interface não estão bem encaixados visualmente em especial os parâmetros de passos sistema branch issue
| 1
|
574
| 3,037,369,863
|
IssuesEvent
|
2015-08-06 16:41:37
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
На главном портале заменить хардкод test.region.igov.org.ua на тот сервер, который прописан в подуслуге (aServiceData)
|
In process of testing test
|
найти этот участок можно простым поиском в проекте фразы "test.region.igov.org.ua"
|
1.0
|
На главном портале заменить хардкод test.region.igov.org.ua на тот сервер, который прописан в подуслуге (aServiceData) - найти этот участок можно простым поиском в проекте фразы "test.region.igov.org.ua"
|
process
|
на главном портале заменить хардкод test region igov org ua на тот сервер который прописан в подуслуге aservicedata найти этот участок можно простым поиском в проекте фразы test region igov org ua
| 1
|
362,509
| 25,379,861,820
|
IssuesEvent
|
2022-11-21 16:41:50
|
jfrog/terraform-provider-artifactory
|
https://api.github.com/repos/jfrog/terraform-provider-artifactory
|
closed
|
resource.repo.repositories is not optional
|
documentation
|
**Describe the bug**
When creating artifactory_permission_target, it is not possible to make this permission applicable to Any local/remote/distribution repository, like it is possible through the UI.
Artifactory Version: 7.46.11
Terraform Version: 1.3.4
Artifactory Provider Version: 6.19.2
Code snippet:
````
resource "artifactory_permission_target" "deploy_permission" {
name = "deploy-permission"
repo {
includes_pattern = ["**"]
actions {
groups {
name = "name"
permissions = ["read", "write", "annotate", "managedXrayMeta"]
}
}
}
build {
includes_pattern = ["**"]
actions {
groups {
name = "name"
permissions = ["read", "write", "annotate", "managedXrayMeta"]
}
}
}
}
````
Outcome:
```
╷
│ Error: Missing required argument
│
│ on permissions.tf line 4, in resource "artifactory_permission_target" "deploy_permission":
│ 4: repo {
│
│ The argument "repositories" is required, but no definition was found.
╵
╷
│ Error: Missing required argument
│
│ on permissions.tf line 15, in resource "artifactory_permission_target" "deploy_permission":
│ 15: build {
│
│ The argument "repositories" is required, but no definition was found.
╵
```
**Requirements for an issue**
- [X] A description of the bug
- [X] A fully functioning terraform snippet that can be copy&pasted (no outside files or ENV vars unless that's part of the issue). **If this is not supplied, this issue will likely be closed without any effort expended.**
- [X] Your version of artifactory (you can `curl` it at `$host/artifactory/api/system/version`
- [X] Your version of terraform
- [X] Your version of terraform provider
**Expected behavior**
According to the documentation, one is supposed to be able to omit `repositories` field.
https://github.com/jfrog/terraform-provider-artifactory/blob/master/docs/resources/permission_target.md?plain=1#L67
|
1.0
|
resource.repo.repositories is not optional - **Describe the bug**
When creating artifactory_permission_target, it is not possible to make this permission applicable to Any local/remote/distribution repository, like it is possible through the UI.
Artifactory Version: 7.46.11
Terraform Version: 1.3.4
Artifactory Provider Version: 6.19.2
Code snippet:
````
resource "artifactory_permission_target" "deploy_permission" {
name = "deploy-permission"
repo {
includes_pattern = ["**"]
actions {
groups {
name = "name"
permissions = ["read", "write", "annotate", "managedXrayMeta"]
}
}
}
build {
includes_pattern = ["**"]
actions {
groups {
name = "name"
permissions = ["read", "write", "annotate", "managedXrayMeta"]
}
}
}
}
````
Outcome:
```
╷
│ Error: Missing required argument
│
│ on permissions.tf line 4, in resource "artifactory_permission_target" "deploy_permission":
│ 4: repo {
│
│ The argument "repositories" is required, but no definition was found.
╵
╷
│ Error: Missing required argument
│
│ on permissions.tf line 15, in resource "artifactory_permission_target" "deploy_permission":
│ 15: build {
│
│ The argument "repositories" is required, but no definition was found.
╵
```
**Requirements for an issue**
- [X] A description of the bug
- [X] A fully functioning terraform snippet that can be copy&pasted (no outside files or ENV vars unless that's part of the issue). **If this is not supplied, this issue will likely be closed without any effort expended.**
- [X] Your version of artifactory (you can `curl` it at `$host/artifactory/api/system/version`
- [X] Your version of terraform
- [X] Your version of terraform provider
**Expected behavior**
According to the documentation, one is supposed to be able to omit `repositories` field.
https://github.com/jfrog/terraform-provider-artifactory/blob/master/docs/resources/permission_target.md?plain=1#L67
|
non_process
|
resource repo repositories is not optional describe the bug when creating artifactory permission target it is not possible to make this permission applicable to any local remote distribution repository like it is possible through the ui artifactory version terraform version artifactory provider version code snippet resource artifactory permission target deploy permission name deploy permission repo includes pattern actions groups name name permissions build includes pattern actions groups name name permissions outcome ╷ │ error missing required argument │ │ on permissions tf line in resource artifactory permission target deploy permission │ repo │ │ the argument repositories is required but no definition was found ╵ ╷ │ error missing required argument │ │ on permissions tf line in resource artifactory permission target deploy permission │ build │ │ the argument repositories is required but no definition was found ╵ requirements for and issue a description of the bug a fully functioning terraform snippet that can be copy pasted no outside files or env vars unless that s part of the issue if this is not supplied this issue will likely be closed without any effort expended your version of artifactory you can curl it at host artifactory api system version your version of terraform your version of terraform provider expected behavior according to the documentation one is supposed to be able to omit repositories field
| 0
|
2,110
| 2,603,976,475
|
IssuesEvent
|
2015-02-24 19:01:37
|
chrsmith/nishazi6
|
https://api.github.com/repos/chrsmith/nishazi6
|
opened
|
沈阳龟头长痘痘怎么回事
|
auto-migrated Priority-Medium Type-Defect
|
```
沈阳龟头长痘痘怎么回事〓沈陽軍區政治部醫院性病〓TEL:02
4-31023308〓成立于1946年,68年專注于性傳播疾病的研究和治療�
��位于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝�
��的歷史悠久、設備精良、技術權威、專家云集,是預防、保
健、醫療、科研康復為一體的綜合性醫院。是國家首批公立��
�等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學�
��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍
空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集��
�二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:21
|
1.0
|
沈阳龟头长痘痘怎么回事 - ```
沈阳龟头长痘痘怎么回事〓沈陽軍區政治部醫院性病〓TEL:02
4-31023308〓成立于1946年,68年專注于性傳播疾病的研究和治療�
��位于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝�
��的歷史悠久、設備精良、技術權威、專家云集,是預防、保
健、醫療、科研康復為一體的綜合性醫院。是國家首批公立��
�等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學�
��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍
空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集��
�二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:21
|
non_process
|
沈阳龟头长痘痘怎么回事 沈阳龟头长痘痘怎么回事〓沈陽軍區政治部醫院性病〓tel: 〓 , � �� 。是一所與新中國同建立共輝� ��的歷史悠久、設備精良、技術權威、專家云集,是預防、保 健、醫療、科研康復為一體的綜合性醫院。是國家首批公立�� �等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學� ��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍 空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集�� �二等功。 original issue reported on code google com by gmail com on jun at
| 0
|
16,867
| 22,147,767,655
|
IssuesEvent
|
2022-06-03 13:46:45
|
pycaret/pycaret
|
https://api.github.com/repos/pycaret/pycaret
|
closed
|
[BUG]: Feature to balance the dataset is not working
|
bug preprocessing priority_high
|
### pycaret version checks
- [X] I have checked that this issue has not already been reported [here](https://github.com/pycaret/pycaret/issues).
- [X] I have confirmed this bug exists on the [latest version](https://github.com/pycaret/pycaret/releases) of pycaret.
- [ ] I have confirmed this bug exists on the develop branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@develop).
### Issue Description
Hi guys, I have a question about imbalanced databases. I'm trying to set up balancing tools like:
rus = RandomUnderSampler(random_state=1)
s = setup(
dataframe,
'TARGET',
fold=4,
fix_imbalance= True,
fix_imbalance_method= rus,
feature_selection= True,
use_gpu=True
)
Is there something that I'm doing wrong? When I use the setup like this, I'm having problems with overfitting.
I sent this question on slack and I was directed to open an issue here.
Thanks a lot in advance!!!
### Reproducible Example
```python
import pandas as pd
from imblearn.under_sampling import RandomUnderSampler
from pycaret.classification import *
rus = RandomUnderSampler(random_state=1)
s = setup(
dataframe,
'TARGET',
fold=4,
fix_imbalance= True,
fix_imbalance_method= rus,
feature_selection= True,
use_gpu=True
)
```
### Expected Behavior
After the code executes, the dataset is expected to have the same number of rows for the 2 classes, produced by the undersampling method.
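The balancing that `fix_imbalance` is supposed to apply here is plain random undersampling. As a sanity check, the expected row-count outcome can be sketched with the standard library alone (the toy rows below are made up for illustration):

```python
import random
from collections import Counter

random.seed(1)
# Hypothetical imbalanced dataset: 90 rows of class 0, 10 rows of class 1.
rows = [("a", 0)] * 90 + [("b", 1)] * 10

counts = Counter(label for _, label in rows)
n_min = min(counts.values())  # size of the minority class

# Randomly undersample every class down to the minority-class size.
balanced = []
for cls in counts:
    members = [r for r in rows if r[1] == cls]
    balanced.extend(random.sample(members, n_min))

print(Counter(label for _, label in balanced))
```

When `fix_imbalance_method=RandomUnderSampler(...)` is honored, both classes should end up with the minority-class row count, as in this sketch.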
### Actual Results
```python-traceback
In the actual results, the same dataset that was passed in is returned at the end of the pycaret setup process.
```
### Installed Versions
<details>
2.3.10
</details>
|
1.0
|
[BUG]: Feature to balance the dataset is not working - ### pycaret version checks
- [X] I have checked that this issue has not already been reported [here](https://github.com/pycaret/pycaret/issues).
- [X] I have confirmed this bug exists on the [latest version](https://github.com/pycaret/pycaret/releases) of pycaret.
- [ ] I have confirmed this bug exists on the develop branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@develop).
### Issue Description
Hi guys, I have a question about imbalanced databases. I'm trying to set up balancing tools like:
rus = RandomUnderSampler(random_state=1)
s = setup(
dataframe,
'TARGET',
fold=4,
fix_imbalance= True,
fix_imbalance_method= rus,
feature_selection= True,
use_gpu=True
)
Is there something that I'm doing wrong? When I use the setup like this, I'm having problems with overfitting.
I sent this question on slack and I was directed to open an issue here.
Thanks a lot in advance!!!
### Reproducible Example
```python
import pandas as pd
from imblearn.under_sampling import RandomUnderSampler
from pycaret.classification import *
rus = RandomUnderSampler(random_state=1)
s = setup(
dataframe,
'TARGET',
fold=4,
fix_imbalance= True,
fix_imbalance_method= rus,
feature_selection= True,
use_gpu=True
)
```
### Expected Behavior
After the code executes, the dataset is expected to have the same number of rows for the 2 classes, produced by the undersampling method.
### Actual Results
```python-traceback
In the actual results, the same dataset that was passed in is returned at the end of the pycaret setup process.
```
### Installed Versions
<details>
2.3.10
</details>
|
process
|
feature to balance the dataset is not working pycaret version checks i have checked that this issue has not already been reported i have confirmed this bug exists on the of pycaret i have confirmed this bug exists on the develop branch of pycaret pip install u git issue description hi guys i have a question about imbalanced databases i m trying to setup balancing tools like rus randomundersampler random state s setup dataframe target fold fix imbalance true fix imbalance method rus feature selection true use gpu true is there something that i m doing wrong when i use the setup like this i m having problems with overfitting i sent this question on slack and i was directed to open a issue here thanks a lot in advance reproducible example python import pandas as pd from imblearn under sampling import randomundersampler from pycaret classification import rus randomundersampler random state s setup dataframe target fold fix imbalance true fix imbalance method rus feature selection true use gpu true expected behavior after the code execution is expected that the dataset have the same number of rows for the classes and that it was made by undersampling method actual results python traceback in actual results the same dataset used as data entering is returning in the end of pycaret setup proccess installed versions
| 1
|
82,447
| 15,934,126,333
|
IssuesEvent
|
2021-04-14 08:18:55
|
YSMull/blog
|
https://api.github.com/repos/YSMull/blog
|
opened
|
有效括号
|
/leetcode/20/ leetcode
|
Original link: https://ysmull.cn/leetcode/20/

```rust
impl Solution {
    pub fn is_valid(s: String) -> bool {
        let is_match = |p, q| {
            p == '(' && q == ')' || p == '[' && q == ']' || p == '{' && q == '}'
        };
        let mut stk = vec![];
        for c in s.chars() {
            match stk.last() {
                Some(last) => {
                    if is_match(*last, c) {
                        stk.pop();
                    } else {
                        stk.push(c);
                    }
                }
                None => stk.push(c),
            }
        }
        return stk.is_empty();
    }
}
```
|
2.0
|
有效括号 (Valid Parentheses) - Original link: https://ysmull.cn/leetcode/20/

```rust
impl Solution {
    pub fn is_valid(s: String) -> bool {
        let is_match = |p, q| {
            p == '(' && q == ')' || p == '[' && q == ']' || p == '{' && q == '}'
        };
        let mut stk = vec![];
        for c in s.chars() {
            match stk.last() {
                Some(last) => {
                    if is_match(*last, c) {
                        stk.pop();
                    } else {
                        stk.push(c);
                    }
                }
                None => stk.push(c),
            }
        }
        return stk.is_empty();
    }
}
```
|
non_process
|
有效括号 原文链接 impl solution pub fn is valid s string gt bool let is match p q p amp amp q p p amp amp q let mut stk vec for c in s chars match stk last some last gt if is match last c stk pop else stk push c none gt stk push c return stk is empty
| 0
|
9,202
| 12,236,594,568
|
IssuesEvent
|
2020-05-04 16:36:02
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Prisma CLI is throwing segmentation fault with node 14
|
bug/2-confirmed kind/bug process/candidate
|
## Bug description
Prisma CLI is throwing Segmentation fault when I try running it with node 14.

I further investigated this and looks like our download script is failing:
```
╰─ npm install -g @prisma/cli ─╯
> @prisma/cli@2.0.0-beta.4 preinstall /Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli
> node preinstall/index.js
preinstall { installedGlobally: null } +0ms
/Users/harshit/.nvm/versions/node/v14.1.0/bin/prisma -> /Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/build/index.js
/Users/harshit/.nvm/versions/node/v14.1.0/bin/prisma2 -> /Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/build/index.js
> @prisma/cli@2.0.0-beta.4 install /Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli
> node download-build/index.js
prisma:download TypeError [ERR_INVALID_ARG_TYPE]: The "data" argument must be of type string or an instance of Buffer, TypedArray, or DataView. Received type number (1588600217460)
prisma:download at Object.writeFileSync (fs.js:1380:5)
prisma:download at createLockFile (/Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/download-build/index.js:1:56155)
prisma:download at main (/Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/download-build/index.js:1:55861)
prisma:download at Object.197 (/Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/download-build/index.js:1:56314)
prisma:download at __webpack_require__ (/Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/download-build/index.js:1:154)
prisma:download at startup (/Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/download-build/index.js:1:291)
prisma:download at module.exports.3 (/Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/download-build/index.js:1:346)
prisma:download at Object.<anonymous> (/Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/download-build/index.js:1:356)
prisma:download at Module._compile (internal/modules/cjs/loader.js:1176:30)
prisma:download at Object.Module._extensions..js (internal/modules/cjs/loader.js:1196:10) +0ms
+ @prisma/cli@2.0.0-beta.4
updated 1 package in 0.517s`
```
I narrowed down this error to https://github.com/prisma/prisma/blob/54e8f9e4bb6ad3d9afbc64ca1e47b239de74b463/src/packages/cli/scripts/download.js#L44 call. The API indeed got a change in version 14:

So now Node doesn't automatically convert the number that `Date.now()` returns into a string.
<!-- A clear and concise description of what the bug is. -->
## How to reproduce
Steps to reproduce the behavior:
1. Install node 14
2. Run `npm install -g @prisma/cli` to install Prisma CLI globally
3. Run `prisma -v`
## Expected behavior
Prisma CLI should not return segmentation fault
<!-- A clear and concise description of what you expected to happen. -->
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: MacOS 10.15.4
- Database: Postgresql
- Prisma version:
- Node.js version: 14.1.0
|
1.0
|
Prisma CLI is throwing segmentation fault with node 14 - ## Bug description
Prisma CLI is throwing Segmentation fault when I try running it with node 14.

I further investigated this and looks like our download script is failing:
```
╰─ npm install -g @prisma/cli ─╯
> @prisma/cli@2.0.0-beta.4 preinstall /Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli
> node preinstall/index.js
preinstall { installedGlobally: null } +0ms
/Users/harshit/.nvm/versions/node/v14.1.0/bin/prisma -> /Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/build/index.js
/Users/harshit/.nvm/versions/node/v14.1.0/bin/prisma2 -> /Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/build/index.js
> @prisma/cli@2.0.0-beta.4 install /Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli
> node download-build/index.js
prisma:download TypeError [ERR_INVALID_ARG_TYPE]: The "data" argument must be of type string or an instance of Buffer, TypedArray, or DataView. Received type number (1588600217460)
prisma:download at Object.writeFileSync (fs.js:1380:5)
prisma:download at createLockFile (/Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/download-build/index.js:1:56155)
prisma:download at main (/Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/download-build/index.js:1:55861)
prisma:download at Object.197 (/Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/download-build/index.js:1:56314)
prisma:download at __webpack_require__ (/Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/download-build/index.js:1:154)
prisma:download at startup (/Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/download-build/index.js:1:291)
prisma:download at module.exports.3 (/Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/download-build/index.js:1:346)
prisma:download at Object.<anonymous> (/Users/harshit/.nvm/versions/node/v14.1.0/lib/node_modules/@prisma/cli/download-build/index.js:1:356)
prisma:download at Module._compile (internal/modules/cjs/loader.js:1176:30)
prisma:download at Object.Module._extensions..js (internal/modules/cjs/loader.js:1196:10) +0ms
+ @prisma/cli@2.0.0-beta.4
updated 1 package in 0.517s`
```
I narrowed down this error to https://github.com/prisma/prisma/blob/54e8f9e4bb6ad3d9afbc64ca1e47b239de74b463/src/packages/cli/scripts/download.js#L44 call. The API indeed got a change in version 14:

So now Node doesn't automatically convert the number that `Date.now()` returns into a string.
<!-- A clear and concise description of what the bug is. -->
## How to reproduce
Steps to reproduce the behavior:
1. Install node 14
2. Run `npm install -g @prisma/cli` to install Prisma CLI globally
3. Run `prisma -v`
## Expected behavior
Prisma CLI should not return segmentation fault
<!-- A clear and concise description of what you expected to happen. -->
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: MacOS 10.15.4
- Database: Postgresql
- Prisma version:
- Node.js version: 14.1.0
|
process
|
prisma cli is throwing segmentation fault with node bug description prisma cli is throwing segmentation fault when i try running it with node i further investigated this and looks like our download script is failing ╰─ npm install g prisma cli ─╯ prisma cli beta preinstall users harshit nvm versions node lib node modules prisma cli node preinstall index js preinstall installedglobally null users harshit nvm versions node bin prisma users harshit nvm versions node lib node modules prisma cli build index js users harshit nvm versions node bin users harshit nvm versions node lib node modules prisma cli build index js prisma cli beta install users harshit nvm versions node lib node modules prisma cli node download build index js prisma download typeerror the data argument must be of type string or an instance of buffer typedarray or dataview received type number prisma download at object writefilesync fs js prisma download at createlockfile users harshit nvm versions node lib node modules prisma cli download build index js prisma download at main users harshit nvm versions node lib node modules prisma cli download build index js prisma download at object users harshit nvm versions node lib node modules prisma cli download build index js prisma download at webpack require users harshit nvm versions node lib node modules prisma cli download build index js prisma download at startup users harshit nvm versions node lib node modules prisma cli download build index js prisma download at module exports users harshit nvm versions node lib node modules prisma cli download build index js prisma download at object users harshit nvm versions node lib node modules prisma cli download build index js prisma download at module compile internal modules cjs loader js prisma download at object module extensions js internal modules cjs loader js prisma cli beta updated package in i narrowed down this error to call the api indeed got a change in version so now node doesn t automatically convert number into string which date now returns how to reproduce steps to reproduce the behavior install node run npm install g prisma cli to install prisma cli globally run prisma v expected behavior prisma cli should not return segmentation fault environment setup os macos database postgresql prisma version node js version
| 1
|
20,107
| 26,644,455,520
|
IssuesEvent
|
2023-01-25 08:52:19
|
Narikakun-Network/status-page
|
https://api.github.com/repos/Narikakun-Network/status-page
|
closed
|
🛑 Earthquake Process Server is down
|
status earthquake-process-server
|
In [`3ffcb58`](https://github.com/Narikakun-Network/status-page/commit/3ffcb58d9b6cae16a43e1ef1d97f474dcc162091), Earthquake Process Server ($EARTHQUAKE_SERVER) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
1.0
|
🛑 Earthquake Process Server is down - In [`3ffcb58`](https://github.com/Narikakun-Network/status-page/commit/3ffcb58d9b6cae16a43e1ef1d97f474dcc162091
), Earthquake Process Server ($EARTHQUAKE_SERVER) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
process
|
🛑 earthquake process server is down in earthquake process server earthquake server was down http code response time ms
| 1
|
16,254
| 20,813,516,130
|
IssuesEvent
|
2022-03-18 07:27:36
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Split with lines - unexpected behaviour
|
Processing Bug
|
**Describe the bug**
Split with Lines: when lines from input layer intersect lines from split layer at the exact same point, the operation is not performed.
**How to Reproduce**
1. Take [sample_data.zip](https://github.com/qgis/QGIS/files/4450964/sample_data.zip)
1. Add input_features.shp and split_feature.shp. input_features contains 3 lines, two of which intersect each other at the exact same point where they intersect the split_feature (FID= 0 and 2)
2. Click on Processing Toolbox > Vector Overlay > Split with lines
3. See unexpected behaviour --> input_features FID= 0 and 2 are not split by split_feature.
**QGIS and OS versions**
OS: Arch Linux x86_64
Kernel: 5.6.2-arch1-2
DE: GNOME (Wayland)
QGIS version | 3.12.0-București | QGIS code branch | Release QGIS code branch.3
Compiled against Qt | 5.14.1 | Running against Qt | 5.14.2
Compiled against GDAL/OGR | 3.0.4 | Running against GDAL/OGR | 3.0.4
Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1
Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1
PostgreSQL Client Version | 12.2 | SpatiaLite Version | 4.3.0a
QWT Version | 6.1.4 | QScintilla2 Version | 2.11.4
Compiled against PROJ | 6.3.1 | Running against PROJ | Rel. 6.3.1, February 10th, 2020
OS Version | Arch Linux
Active python plugins | QNEAT3; networks; SentinelHub; timeseriesviewerplugin; db_manager; MetaSearch; processing
|
1.0
|
Split with lines - unexpected behaviour - **Describe the bug**
Split with Lines: when lines from input layer intersect lines from split layer at the exact same point, the operation is not performed.
**How to Reproduce**
1. Take [sample_data.zip](https://github.com/qgis/QGIS/files/4450964/sample_data.zip)
1. Add input_features.shp and split_feature.shp. input_features contains 3 lines, two of which intersect each other at the exact same point where they intersect the split_feature (FID= 0 and 2)
2. Click on Processing Toolbox > Vector Overlay > Split with lines
3. See unexpected behaviour --> input_features FID= 0 and 2 are not split by split_feature.
**QGIS and OS versions**
OS: Arch Linux x86_64
Kernel: 5.6.2-arch1-2
DE: GNOME (Wayland)
QGIS version | 3.12.0-București | QGIS code branch | Release QGIS code branch.3
Compiled against Qt | 5.14.1 | Running against Qt | 5.14.2
Compiled against GDAL/OGR | 3.0.4 | Running against GDAL/OGR | 3.0.4
Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1
Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1
PostgreSQL Client Version | 12.2 | SpatiaLite Version | 4.3.0a
QWT Version | 6.1.4 | QScintilla2 Version | 2.11.4
Compiled against PROJ | 6.3.1 | Running against PROJ | Rel. 6.3.1, February 10th, 2020
OS Version | Arch Linux
Active python plugins | QNEAT3; networks; SentinelHub; timeseriesviewerplugin; db_manager; MetaSearch; processing
|
process
|
split with lines unexpected behaviour describe the bug split with lines when lines from input layer intersect lines from split layer at the exact same point the operation is not performed how to reproduce take add input features shp and split feature shp input features contains lines two of them intersects each other at the exact same point they intersec the split feature fid and click on processing toolbox vector overlay split with lines see unexpected behaviour input features fid and are not splitted by split feature qgis and os versions os arch linux kernel de gnome wayland qgis version bucurești qgis code branch release qgis code branch compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel february os version arch linux active python plugins networks sentinelhub timeseriesviewerplugin db manager metasearch processing
| 1
|
662,146
| 22,102,916,999
|
IssuesEvent
|
2022-06-01 14:56:03
|
Apicurio/apicurio-registry
|
https://api.github.com/repos/Apicurio/apicurio-registry
|
closed
|
Attach built JARs and Libraries to the release
|
Task component/registry priority/normal
|
From discussion here:
* https://github.com/Apicurio/apicurio-registry/discussions/2519
@riprasad what do you think about updating our release workflow to attach the various application variants (registry and tenant manager both) to the release as Assets?
|
1.0
|
Attach built JARs and Libraries to the release - From discussion here:
* https://github.com/Apicurio/apicurio-registry/discussions/2519
@riprasad what do you think about updating our release workflow to attach the various application variants (registry and tenant manager both) to the release as Assets?
|
non_process
|
attach built jars and libraries to the release from discussion here riprasad what do you think about updating our release workflow to attaching the various application variants registry and tenant manager both to the release as assets
| 0
|
60,313
| 6,685,747,625
|
IssuesEvent
|
2017-10-07 00:25:05
|
NLog/NLog
|
https://api.github.com/repos/NLog/NLog
|
closed
|
Unstable test SimpleConcurrentTest
|
file-target unstable test
|
This test seems to be unstable
NLog.UnitTests.Targets.ConcurrentFileTargetTests.SimpleConcurrentTest(numProcesses: 2, numLogs: 500, mode: "none|mutex|archive") [FAIL]
- https://ci.appveyor.com/project/nlog/nlog/build/4.4.5589
|
1.0
|
Unstable test SimpleConcurrentTest - This test seems to be unstable
NLog.UnitTests.Targets.ConcurrentFileTargetTests.SimpleConcurrentTest(numProcesses: 2, numLogs: 500, mode: "none|mutex|archive") [FAIL]
- https://ci.appveyor.com/project/nlog/nlog/build/4.4.5589
|
non_process
|
unstable test simpleconcurrenttest this test seems to be unstable nlog unittests targets concurrentfiletargettests simpleconcurrenttest numprocesses numlogs mode none mutex archive
| 0
|
846
| 3,315,268,906
|
IssuesEvent
|
2015-11-06 11:02:29
|
superroma/testcafe-hammerhead
|
https://api.github.com/repos/superroma/testcafe-hammerhead
|
opened
|
Create mini version of the Parse5
|
AREA: client SYSTEM: resource processing TYPE: enhancement
|
We want to use parse5 on the client side. This is necessary to fix bug #240.
|
1.0
|
Create mini version of the Parse5 - We want to use parse5 on the client side. This is necessary to fix bug #240.
|
process
|
create mini version of the we want to use the on the client side this is necessary to fix bug
| 1
|
47,377
| 24,976,112,774
|
IssuesEvent
|
2022-11-02 07:57:57
|
pingcap/tidb
|
https://api.github.com/repos/pingcap/tidb
|
closed
|
slow SQL query about CLUSTER_SLOW_QUERY
|
type/question type/performance sig/planner
|
## Performance Questions
high performance tidb lesson 03
more information: https://github.com/yufan022/High-Performance-TiDB-Homework/tree/master/lesson03
tidb slow query
- What version of TiDB are you using?
v4.0.0
<!-- You can try `tidb-server -V` or run `select tidb_version();` on TiDB to get this information -->
- What's the observed and your expected performance respectively?
When executing the TPC-C benchmark, I found some slow SQL queries; they are about querying `INFORMATION_SCHEMA`.`CLUSTER_SLOW_QUERY`.
- Have you compared TiDB with other databases? If yes, what's their difference?
no
- For a specific slow SQL query, please provide the following information:
- Whether you analyzed the tables involved in the query and how long it is after you ran the last `ANALYZE`.
I did not run `ANALYZE`.
- Whether this SQL query always or occasionally runs slowly.
Always slow; it has nothing to do with the TPC-C benchmark.
- The `EXPLAIN ANALYZE` result of this query if your TiDB version is higher than 2.1, or you can just provide the `EXPLAIN` result.

- The plain text of the SQL query and table schema so we can test it locally. It would be better if you can provide the dumped statistics information.
<!-- you can use `show create table ${involved_table}\G` to get the table schema.-->
<!-- use `curl -G "http://${tidb-server-ip}:${tidb-server-status-port}/stats/dump/${db_name}/${table_name}" > ${table_name}_stats.json` to get the dumped statistics of one involved table.-->
```
SELECT
*,
(unix_timestamp(Time) + 0E0) AS timestamp
FROM
`INFORMATION_SCHEMA`.`CLUSTER_SLOW_QUERY`
WHERE
(
time BETWEEN from_unixtime(1598774400)
AND from_unixtime(1598776200)
)
AND (DB IN ("tpcc"))
AND (Plan_digest IN (""))
AND (Digest = "678df037c4a37e0059fdcfd318aa40727dd327f5e316c68c28ca19fa59a0a22d")
ORDER BY
Time DESC
LIMIT
100;
```
- The `EXPLAIN` result of the compared database. For MySQL, `EXPLAIN format=json`'s result will be more helpful.
- Other information that is useful from your perspective.
It has nothing to do with the TPC-C benchmark. Even when I'm not running the benchmark, it still happens.
- For a general performance question, e.g. the benchmark result you got by yourself is not expected, please provide the following information:
`./gotpc tpcc -H '172.16.0.79' -P 4000 -D tpcc --warehouses 100 run --time 1m --threads 512`
| rule | CPU | memory | disk |
| --- | --- | --- | --- |
| workload | 8C | 32G | ESSD PL0200G |
| TiDB | 8C | 32G | ESSD PL0 100G |
| PD | 2C | 4G | ESSD PL0 100G |
| TiKV | 8C | 32G | ESSD PL0 200G |
| Monitor | 2C | 4G | ESSD PL0 500G |


|
True
|
slow SQL query about CLUSTER_SLOW_QUERY - ## Performance Questions
high performance tidb lesson 03
more information: https://github.com/yufan022/High-Performance-TiDB-Homework/tree/master/lesson03
tidb slow query
- What version of TiDB are you using?
v4.0.0
<!-- You can try `tidb-server -V` or run `select tidb_version();` on TiDB to get this information -->
- What's the observed and your expected performance respectively?
When executing the TPC-C benchmark, I found some slow SQL queries; they are about querying `INFORMATION_SCHEMA`.`CLUSTER_SLOW_QUERY`.
- Have you compared TiDB with other databases? If yes, what's their difference?
no
- For a specific slow SQL query, please provide the following information:
- Whether you analyzed the tables involved in the query and how long it is after you ran the last `ANALYZE`.
I did not run `ANALYZE`.
- Whether this SQL query always or occasionally runs slowly.
Always slow; it has nothing to do with the TPC-C benchmark.
- The `EXPLAIN ANALYZE` result of this query if your TiDB version is higher than 2.1, or you can just provide the `EXPLAIN` result.

- The plain text of the SQL query and table schema so we can test it locally. It would be better if you can provide the dumped statistics information.
<!-- you can use `show create table ${involved_table}\G` to get the table schema.-->
<!-- use `curl -G "http://${tidb-server-ip}:${tidb-server-status-port}/stats/dump/${db_name}/${table_name}" > ${table_name}_stats.json` to get the dumped statistics of one involved table.-->
```
SELECT
*,
(unix_timestamp(Time) + 0E0) AS timestamp
FROM
`INFORMATION_SCHEMA`.`CLUSTER_SLOW_QUERY`
WHERE
(
time BETWEEN from_unixtime(1598774400)
AND from_unixtime(1598776200)
)
AND (DB IN ("tpcc"))
AND (Plan_digest IN (""))
AND (Digest = "678df037c4a37e0059fdcfd318aa40727dd327f5e316c68c28ca19fa59a0a22d")
ORDER BY
Time DESC
LIMIT
100;
```
- The `EXPLAIN` result of the compared database. For MySQL, `EXPLAIN format=json`'s result will be more helpful.
- Other information that is useful from your perspective.
It has nothing to do with the TPC-C benchmark. Even when I'm not running the benchmark, it still happens.
- For a general performance question, e.g. the benchmark result you got by yourself is not expected, please provide the following information:
`./gotpc tpcc -H '172.16.0.79' -P 4000 -D tpcc --warehouses 100 run --time 1m --threads 512`
| rule | CPU | memory | disk |
| --- | --- | --- | --- |
| workload | 8C | 32G | ESSD PL0 200G |
| TiDB | 8C | 32G | ESSD PL0 100G |
| PD | 2C | 4G | ESSD PL0 100G |
| TiKV | 8C | 32G | ESSD PL0 200G |
| Monitor | 2C | 4G | ESSD PL0 500G |


|
non_process
|
slow sql query about cluster slow query performance questions high performance tidb lesson more infomation tidb slow query what version of tidb are you using what s the observed and your expected performance respectively when execute tpc c benchmark found some slow sql query it s about querying information schema cluster slow query have you compared tidb with other databases if yes what s their difference no for a specific slow sql query please provide the following information whether you analyzed the tables involved in the query and how long it is after you ran the last analyze not use analyze whether this sql query always or occasionally runs slowly always slow it has nothing to do with ittpc c benchmark the explain analyze result of this query if your tidb version is higher than or you can just provide the explain result the plain text of the sql query and table schema so we can test it locally it would be better if you can provide the dumped statistics information table name stats json to get the dumped statistics of one involved table select unix timestamp time as timestamp from information schema cluster slow query where time between from unixtime and from unixtime and db in tpcc and plan digest in and digest order by time desc limit the explain result of the compared database for mysql explain format json s result will be more helpful other information that is useful from your perspective it has nothing to do with ittpc c benchmark when i m not running benchmark it still happen for a general performance question e g the benchmark result you got by yourself is not expected please provide the following information gotpc tpcc h p d tpcc warehouses run time threads rule cpu memory disk workload essd tidb essd pd essd tikv essd monitor essd
| 0
|
6,183
| 9,100,687,753
|
IssuesEvent
|
2019-02-20 09:13:59
|
plazi/arcadia-project
|
https://api.github.com/repos/plazi/arcadia-project
|
opened
|
QC: writing guidelines for QC, including fixing basic errors
|
Article processing
|
Define what QC errors should be fixed by the local authority (Pensoft) and which need a specialist's attention.
Write guidelines.
|
1.0
|
QC: writing guidelines for QC, including fixing basic errors - Define what QC errors should be fixed by the local authority (Pensoft) and which need a specialist's attention.
Write guidelines.
|
process
|
qc writing guidlines for qc including fixing basic errors define what qc errors should be fixed by local authoritiy pensoft and which need specialists attention write guidelines
| 1
|
22,034
| 30,550,122,722
|
IssuesEvent
|
2023-07-20 07:57:06
|
arcus-azure/arcus.messaging
|
https://api.github.com/repos/arcus-azure/arcus.messaging
|
closed
|
Limit scope of message handlers by only running registered handlers linked to message pump
|
area:message-processing
|
**Is your feature request related to a problem? Please describe.**
Currently, message pumps are using message handlers registered from other pumps when processing messages due to the fact that all the instances are registered in the same application services container.
**Describe the solution you'd like**
We have recently added a `JobId` property to the message handler collection type that registers the message handlers. This ID can be used within the message handler registration so that when a handler is retrieved by the wrong message pump, it responds with a 'cannot process this message' result.
**Additional context**
Introduction of `JobId` in message handler collection: #333
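The `JobId` scoping described above can be sketched language-agnostically. The following is a minimal Python model of the idea, not the Arcus API (the class and method names are illustrative): every registration carries the job ID of the pump that made it, and a pump only receives handlers whose job ID matches its own.

```python
class HandlerRegistration:
    """Illustrative stand-in for one registered message handler."""
    def __init__(self, job_id, handler):
        self.job_id = job_id      # ID of the message pump that registered it
        self.handler = handler

class HandlerCollection:
    """All registrations share one container, as in the issue description."""
    def __init__(self):
        self._registrations = []

    def register(self, job_id, handler):
        self._registrations.append(HandlerRegistration(job_id, handler))

    def handlers_for(self, job_id):
        # Scoping rule: a pump only sees handlers linked to its own JobId.
        return [r.handler for r in self._registrations if r.job_id == job_id]

collection = HandlerCollection()
collection.register("pump-A", lambda msg: f"A:{msg}")
collection.register("pump-B", lambda msg: f"B:{msg}")
print(len(collection.handlers_for("pump-A")))  # 1
```

With this filter in place, pump A can no longer accidentally invoke handlers registered by pump B even though both live in the same container.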
|
1.0
|
Limit scope of message handlers by only running registered handlers linked to message pump - **Is your feature request related to a problem? Please describe.**
Currently, message pumps are using message handlers registered from other pumps when processing messages due to the fact that all the instances are registered in the same application services container.
**Describe the solution you'd like**
We have recently added a `JobId` property to the message handler collection type that registers the message handlers. This ID can be used within the message handler registration so that when a handler is retrieved by the wrong message pump, it responds with a 'cannot process this message' result.
**Additional context**
Introduction of `JobId` in message handler collection: #333
|
process
|
limit scope of message handlers by only running registered handlers linked to message pump is your feature request related to a problem please describe currently message pumps are using message handlers registered from other pumps when processing messages due to the fact that all the instances are registered in the same application services container describe the solution you d like we have recently added a jobid property to the message handler collection type that registers the message handlers this id can be used to within the message handler registration so that when it is retrieved from a wrong message pump it will respond with a can not process this message result additional context introduction of jobid in message handler collection
| 1
|
95,278
| 3,941,479,795
|
IssuesEvent
|
2016-04-27 07:56:01
|
BugBusterSWE/documentation
|
https://api.github.com/repos/BugBusterSWE/documentation
|
opened
|
Add description for UC-U3.1 and other missing ones
|
Analist priority:medium
|
Document where the problem occurs:
Activity #302
Requirements Analysis
Problem description:
Add a description for use case UC-U3.1 and for the others missing a description
Link task: [https://bugbusters.teamwork.com/tasks/6461064](https://bugbusters.teamwork.com/tasks/6461064)
|
1.0
|
Add description for UC-U3.1 and other missing ones - Document where the problem occurs:
Activity #302
Requirements Analysis
Problem description:
Add a description for use case UC-U3.1 and for the others missing a description
Link task: [https://bugbusters.teamwork.com/tasks/6461064](https://bugbusters.teamwork.com/tasks/6461064)
|
non_process
|
aggiungere descrizione uc e altri mancanti documento in cui si trova il problema activity analisi dei requisiti descrizione del problema inserire descrizione del caso d uso uc e degli altri mancanti di descrizione link task
| 0
|
240,190
| 26,254,332,441
|
IssuesEvent
|
2023-01-05 22:33:26
|
jtimberlake/rei-cedar
|
https://api.github.com/repos/jtimberlake/rei-cedar
|
reopened
|
CVE-2021-37713 (High) detected in tar-6.1.0.tgz
|
security vulnerability
|
## CVE-2021-37713 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-6.1.0.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.1.0.tgz">https://registry.npmjs.org/tar/-/tar-6.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-6.0.0.tgz (Root Library)
- node-gyp-7.1.2.tgz
- :x: **tar-6.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jtimberlake/rei-cedar/commit/9c0c2cadda2965ff0d2cb956635474ae9161ddfe">9c0c2cadda2965ff0d2cb956635474ae9161ddfe</a></p>
<p>Found in base branch: <b>next</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted. This is, in part, accomplished by sanitizing absolute paths of entries within the archive, skipping archive entries that contain `..` path portions, and resolving the sanitized paths against the extraction target directory. This logic was insufficient on Windows systems when extracting tar files that contained a path that was not an absolute path, but specified a drive letter different from the extraction target, such as `C:some\path`. If the drive letter does not match the extraction target, for example `D:\extraction\dir`, then the result of `path.resolve(extractionDirectory, entryPath)` would resolve against the current working directory on the `C:` drive, rather than the extraction target directory. Additionally, a `..` portion of the path could occur immediately after the drive letter, such as `C:../foo`, and was not properly sanitized by the logic that checked for `..` within the normalized and split portions of the path. This only affects users of `node-tar` on Windows systems. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. There is no reasonable way to work around this issue without performing the same path normalization procedures that node-tar now does. Users are encouraged to upgrade to the latest patched versions of node-tar, rather than attempt to sanitize paths themselves.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-37713>CVE-2021-37713</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh">https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 6.1.9</p>
<p>Direct dependency fix Resolution (node-sass): 6.0.1</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
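The drive-relative path behaviour the advisory describes can be observed from any OS with Python's `ntpath` module, which implements Windows path semantics. This is only an illustration of why `C:some\path` is neither absolute nor caught by a naive `..` check — it is not the node-tar fix itself:

```python
import ntpath  # Windows path semantics, usable from any OS

# Drive-relative entries from the advisory: a drive letter but no root slash.
for entry in (r"C:some\path", r"C:../foo"):
    drive, rest = ntpath.splitdrive(entry)
    # isabs() is False for both, so sanitization that only rejects
    # absolute paths lets them straight through.
    print(entry, drive, ntpath.isabs(entry))

# Joining against an extraction dir on a different drive discards that
# dir entirely -- the entry escapes the extraction target.
print(ntpath.join(r"D:\extract\dir", r"C:../foo"))  # C:../foo
```

This is why the patched node-tar normalizes and splits paths (including the portion right after the drive letter) before resolving them against the extraction directory.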
|
True
|
CVE-2021-37713 (High) detected in tar-6.1.0.tgz - ## CVE-2021-37713 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-6.1.0.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.1.0.tgz">https://registry.npmjs.org/tar/-/tar-6.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-6.0.0.tgz (Root Library)
- node-gyp-7.1.2.tgz
- :x: **tar-6.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jtimberlake/rei-cedar/commit/9c0c2cadda2965ff0d2cb956635474ae9161ddfe">9c0c2cadda2965ff0d2cb956635474ae9161ddfe</a></p>
<p>Found in base branch: <b>next</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted. This is, in part, accomplished by sanitizing absolute paths of entries within the archive, skipping archive entries that contain `..` path portions, and resolving the sanitized paths against the extraction target directory. This logic was insufficient on Windows systems when extracting tar files that contained a path that was not an absolute path, but specified a drive letter different from the extraction target, such as `C:some\path`. If the drive letter does not match the extraction target, for example `D:\extraction\dir`, then the result of `path.resolve(extractionDirectory, entryPath)` would resolve against the current working directory on the `C:` drive, rather than the extraction target directory. Additionally, a `..` portion of the path could occur immediately after the drive letter, such as `C:../foo`, and was not properly sanitized by the logic that checked for `..` within the normalized and split portions of the path. This only affects users of `node-tar` on Windows systems. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. There is no reasonable way to work around this issue without performing the same path normalization procedures that node-tar now does. Users are encouraged to upgrade to the latest patched versions of node-tar, rather than attempt to sanitize paths themselves.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-37713>CVE-2021-37713</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh">https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 6.1.9</p>
<p>Direct dependency fix Resolution (node-sass): 6.0.1</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
non_process
|
cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file package json path to vulnerable library node modules tar package json dependency hierarchy node sass tgz root library node gyp tgz x tar tgz vulnerable library found in head commit a href found in base branch next vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted this is in part accomplished by sanitizing absolute paths of entries within the archive skipping archive entries that contain path portions and resolving the sanitized paths against the extraction target directory this logic was insufficient on windows systems when extracting tar files that contained a path that was not an absolute path but specified a drive letter different from the extraction target such as c some path if the drive letter does not match the extraction target for example d extraction dir then the result of path resolve extractiondirectory entrypath would resolve against the current working directory on the c drive rather than the extraction target directory additionally a portion of the path could occur immediately after the drive letter such as c foo and was not properly sanitized by the logic that checked for within the normalized and split portions of the path this only affects users of node tar on windows systems these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend you update to a more recent version of node tar there is no reasonable way to work around this issue without performing the same path normalization procedures that node tar now does users are encouraged to upgrade to the 
latest patched versions of node tar rather than attempt to sanitize paths themselves publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar direct dependency fix resolution node sass rescue worker helmet automatic remediation is available for this issue
| 0
|
532,427
| 15,556,375,080
|
IssuesEvent
|
2021-03-16 07:42:57
|
ankidroid/Anki-Android
|
https://api.github.com/repos/ankidroid/Anki-Android
|
closed
|
"Manage note type" is uselessly slow
|
Good First Issue! Keep Open Performance Priority-Low
|
**Is your feature request related to a problem? Please describe.**
"Manage note type" is extremely slow. That should not be the case given that it only show a list of note type.
**Describe the solution you'd like**
That it loads immediately, even with less data (e.g. loading without the number of notes of each type is still useful).
(I'd expect to give it as a task for someone to demonstrate how background processing works.)
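The "show the list first, fill in counts later" idea can be sketched as follows. This is a minimal Python illustration of the suggested behaviour, not AnkiDroid code (AnkiDroid itself is Java/Kotlin, and the note-type names and counts here are made up):

```python
import threading

def load_note_types():
    # Fast: just the names, so the list can render immediately.
    return ["Basic", "Cloze", "Basic (and reversed)"]

def count_notes(note_type):
    # Slow in the real app; trivial placeholder data here.
    return {"Basic": 100, "Cloze": 40, "Basic (and reversed)": 12}[note_type]

counts = {}

def fill_counts(types, done):
    for t in types:
        counts[t] = count_notes(t)   # runs off the UI thread
    done.set()

types = load_note_types()            # list shown right away, counts pending
done = threading.Event()
threading.Thread(target=fill_counts, args=(types, done)).start()
done.wait()                          # a real UI would refresh when notified
print(counts["Basic"])  # 100
```

The key design point is that the expensive per-type counting never blocks the initial render; the UI updates each row once its count arrives.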
|
1.0
|
"Manage note type" is uselessly slow - **Is your feature request related to a problem? Please describe.**
"Manage note type" is extremely slow. That should not be the case given that it only show a list of note type.
**Describe the solution you'd like**
That it loads immediately, even with less data (e.g. loading without the number of notes of each type is still useful).
(I'd expect to give it as a task for someone to demonstrate how background processing works.)
|
non_process
|
manage note type is uselessly slow is your feature request related to a problem please describe manage note type is extremely slow that should not be the case given that it only show a list of note type describe the solution you d like that it loads immediately even if with less data e g loading without the number of note of each type is still useful i d expect to give it as a task for someone to show how background process work
| 0
|
15,136
| 18,890,187,485
|
IssuesEvent
|
2021-11-15 12:21:50
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
prisma validate error - thread '<unnamed>' panicked at 'called `Option::unwrap()` on a `None` value'
|
bug/2-confirmed kind/regression process/candidate topic: cli-validate team/migrations topic: rust panic 3.4.0
|
### Bug description
I have a self-referencing table model in my schema that looks like this:
```prisma
model Folder {
id String @db.Char(30) @default(cuid()) @id
name String @db.MediumText
childFolders Folder[] @relation("ChildToParentFolder")
parentFolderId String? @db.Char(30)
parentFolder Folder? @relation("ChildToParentFolder", fields: [parentFolderId], references: [id])
}
```
But when I run `DEBUG="*" npx prisma validate` I see
```
prisma:loadEnv project root found at [...path omited...]\package.json +0ms
prisma:tryLoadEnv Environment variables loaded from [...path omited...]\.env +0ms
[dotenv][DEBUG] did not match key and value when parsing line 1: # Environment variables declared in this file are automatically made available to Prisma.
[dotenv][DEBUG] did not match key and value when parsing line 2: # See the documentation for more detail: https://pris.ly/d/prisma-schema#using-environment-variables[dotenv][DEBUG] did not match key and value when parsing line 3:
[dotenv][DEBUG] did not match key and value when parsing line 4: # Prisma supports the native connection string format for PostgreSQL, MySQL, SQLite, SQL Server and
MongoDB (Preview).
[dotenv][DEBUG] did not match key and value when parsing line 5: # See the documentation for all the connection string options: https://pris.ly/d/connection-strings
[dotenv][DEBUG] did not match key and value when parsing line 6:
[dotenv][DEBUG] did not match key and value when parsing line 12:
Environment variables loaded from .env
prisma:engines binaries to download libquery-engine, migration-engine, introspection-engine, prisma-fmt +0ms
Prisma schema loaded from prisma\schema.prisma
prisma:getDMMF Using CLI Query Engine (Node-API) at: [...path omited...]\node_modules\@prisma\engines\query_engine-windows.dll.node +0ms
thread '<unnamed>' panicked at 'called `Option::unwrap()` on a `None` value', libs\datamodel\connectors\dml\src\model.rs:290:47
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Error: SyntaxError: Unexpected token c in JSON at position 0
at JSON.parse (<anonymous>)
at getDmmfNodeAPI ([...path omited...]\node_modules\prisma\build\index.js:36492:28)
at async getDMMF ([...path omited...]\node_modules\prisma\build\index.js:36471:17)
at async Object.parse ([...path omited...]\node_modules\prisma\build\index.js:104072:5)
at async main ([...path omited...]\node_modules\prisma\build\index.js:105393:18)
```
I tried `RUST_BACKTRACE=1` to get more info but that spit out the same thing except with the suggestion to try `RUST_BACKTRACE=full`, so I tried that and get this:
```
$ RUST_BACKTRACE=full npx prisma validate
Environment variables loaded from .env
Prisma schema loaded from prisma\schema.prisma
thread '<unnamed>' panicked at 'called `Option::unwrap()` on a `None` value', libs\datamodel\connectors\dml\src\model.rs:290:47
stack backtrace:
0: 0x7ffb3d4f042e - napi_register_module_v1
1: 0x7ffb3d50f5ba - rust_eh_personality
2: 0x7ffb3d4e8d78 - napi_register_module_v1
3: 0x7ffb3d4f3146 - napi_register_module_v1
4: 0x7ffb3d4f2c34 - napi_register_module_v1
5: 0x7ffb3d4f37a5 - napi_register_module_v1
6: 0x7ffb3d4f335f - napi_register_module_v1
7: 0x7ffb3d4f0d77 - napi_register_module_v1
8: 0x7ffb3d4f32e9 - napi_register_module_v1
9: 0x7ffb3d6a00d0 - rust_eh_personality
10: 0x7ffb3d6a001c - rust_eh_personality
11: 0x7ffb3d019a5f - napi_register_module_v1
12: 0x7ffb3ce5a436 - napi_register_module_v1
13: 0x7ffb3cebef74 - napi_register_module_v1
14: 0x7ffb3ce5826f - napi_register_module_v1
15: 0x7ffb3c22644d - <unknown>
16: 0x7ffb3c23d010 - <unknown>
17: 0x7ffb3c25514b - <unknown>
18: 0x7ff74de2aac7 - node::Stop
19: 0x7ff74e67f11f - v8::internal::Builtins::builtin_handle
20: 0x7ff74e67e6b4 - v8::internal::Builtins::builtin_handle
21: 0x7ff74e67e9a8 - v8::internal::Builtins::builtin_handle
22: 0x7ff74e67e7f3 - v8::internal::Builtins::builtin_handle
23: 0x7ff74e75d67d - v8::internal::SetupIsolateDelegate::SetupHeap
24: 0x7ff74e6f3762 - v8::internal::SetupIsolateDelegate::SetupHeap
25: 0x7ff74e721810 - v8::internal::SetupIsolateDelegate::SetupHeap
26: 0x7ff74e79fc6e - v8::internal::SetupIsolateDelegate::SetupHeap
27: 0x7ff74e713a10 - v8::internal::SetupIsolateDelegate::SetupHeap
28: 0x7ff74e6f130c - v8::internal::SetupIsolateDelegate::SetupHeap
29: 0x7ff74e5c0b10 - v8::internal::Execution::CallWasm
30: 0x7ff74e5c0c1b - v8::internal::Execution::CallWasm
31: 0x7ff74e5c165a - v8::internal::Execution::TryCall
32: 0x7ff74e5a1735 - v8::internal::MicrotaskQueue::RunMicrotasks
33: 0x7ff74e5a1490 - v8::internal::MicrotaskQueue::PerformCheckpoint
34: 0x7ff74de80904 - node::CallbackScope::~CallbackScope
35: 0x7ff74de80d3b - node::CallbackScope::~CallbackScope
36: 0x7ff74de789c4 - v8::internal::compiler::Operator::EffectOutputCount
37: 0x7ff74dde5eb5 - SSL_get_quiet_shutdown
38: 0x7ff74ddd933e - v8::base::CPU::has_sse
39: 0x7ff74deb4e47 - uv_timer_stop
40: 0x7ff74deb141b - uv_async_send
41: 0x7ff74deb0bac - uv_loop_init
42: 0x7ff74deb0d4a - uv_run
43: 0x7ff74dda8a45 - v8::internal::AsmJsScanner::GetIdentifierString
44: 0x7ff74de21227 - node::Start
45: 0x7ff74dc7685c - RC4_options
46: 0x7ff74ec31c08 - v8::internal::compiler::RepresentationChanger::Uint32OverflowOperatorFor
47: 0x7ffb97cc7034 - BaseThreadInitThunk
48: 0x7ffb97e02651 - RtlUserThreadStart
Error: Unexpected token c in JSON at position 0
```
### How to reproduce
1. Make self-referencing model in Prisma2 schema as shown above
2. Run `npx prisma validate`
3. become saddened by error
### Expected behavior
Ideally, validation would check the file and report a proper schema error instead of panicking.
### Environment & setup
- OS: Windows (above from within Git Bash terminal in VS Code)
- Database: MySQL
- Node.js version: v14.18.0
### Prisma Version
```
prisma : 3.4.0
@prisma/client : 3.4.0
Current platform : windows
Query Engine (Node-API) : libquery-engine 1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85 (at node_modules\@prisma\engines\query_engine-windows.dll.node)
Migration Engine : migration-engine-cli 1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85 (at node_modules\@prisma\engines\migration-engine-windows.exe)
Introspection Engine : introspection-core 1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85 (at node_modules\@prisma\engines\introspection-engine-windows.exe)
Format Binary : prisma-fmt 1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85 (at node_modules\@prisma\engines\prisma-fmt-windows.exe)
Default Engines Hash : 1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85
Studio : 0.438.0
```
|
1.0
|
prisma validate error - thread '<unnamed>' panicked at 'called `Option::unwrap()` on a `None` value' - ### Bug description
I have a self-referencing table model in my schema that looks like this:
```prisma
model Folder {
id String @db.Char(30) @default(cuid()) @id
name String @db.MediumText
childFolders Folder[] @relation("ChildToParentFolder")
parentFolderId String? @db.Char(30)
parentFolder Folder? @relation("ChildToParentFolder", fields: [parentFolderId], references: [id])
}
```
But when I run `DEBUG="*" npx prisma validate` I see
```
prisma:loadEnv project root found at [...path omited...]\package.json +0ms
prisma:tryLoadEnv Environment variables loaded from [...path omited...]\.env +0ms
[dotenv][DEBUG] did not match key and value when parsing line 1: # Environment variables declared in this file are automatically made available to Prisma.
[dotenv][DEBUG] did not match key and value when parsing line 2: # See the documentation for more detail: https://pris.ly/d/prisma-schema#using-environment-variables[dotenv][DEBUG] did not match key and value when parsing line 3:
[dotenv][DEBUG] did not match key and value when parsing line 4: # Prisma supports the native connection string format for PostgreSQL, MySQL, SQLite, SQL Server and
MongoDB (Preview).
[dotenv][DEBUG] did not match key and value when parsing line 5: # See the documentation for all the connection string options: https://pris.ly/d/connection-strings
[dotenv][DEBUG] did not match key and value when parsing line 6:
[dotenv][DEBUG] did not match key and value when parsing line 12:
Environment variables loaded from .env
prisma:engines binaries to download libquery-engine, migration-engine, introspection-engine, prisma-fmt +0ms
Prisma schema loaded from prisma\schema.prisma
prisma:getDMMF Using CLI Query Engine (Node-API) at: [...path omited...]\node_modules\@prisma\engines\query_engine-windows.dll.node +0ms
thread '<unnamed>' panicked at 'called `Option::unwrap()` on a `None` value', libs\datamodel\connectors\dml\src\model.rs:290:47
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Error: SyntaxError: Unexpected token c in JSON at position 0
at JSON.parse (<anonymous>)
at getDmmfNodeAPI ([...path omited...]\node_modules\prisma\build\index.js:36492:28)
at async getDMMF ([...path omited...]\node_modules\prisma\build\index.js:36471:17)
at async Object.parse ([...path omited...]\node_modules\prisma\build\index.js:104072:5)
at async main ([...path omited...]\node_modules\prisma\build\index.js:105393:18)
```
I tried `RUST_BACKTRACE=1` to get more info but that spit out the same thing except with the suggestion to try `RUST_BACKTRACE=full`, so I tried that and get this:
```
$ RUST_BACKTRACE=full npx prisma validate
Environment variables loaded from .env
Prisma schema loaded from prisma\schema.prisma
thread '<unnamed>' panicked at 'called `Option::unwrap()` on a `None` value', libs\datamodel\connectors\dml\src\model.rs:290:47
stack backtrace:
0: 0x7ffb3d4f042e - napi_register_module_v1
1: 0x7ffb3d50f5ba - rust_eh_personality
2: 0x7ffb3d4e8d78 - napi_register_module_v1
3: 0x7ffb3d4f3146 - napi_register_module_v1
4: 0x7ffb3d4f2c34 - napi_register_module_v1
5: 0x7ffb3d4f37a5 - napi_register_module_v1
6: 0x7ffb3d4f335f - napi_register_module_v1
7: 0x7ffb3d4f0d77 - napi_register_module_v1
8: 0x7ffb3d4f32e9 - napi_register_module_v1
9: 0x7ffb3d6a00d0 - rust_eh_personality
10: 0x7ffb3d6a001c - rust_eh_personality
11: 0x7ffb3d019a5f - napi_register_module_v1
12: 0x7ffb3ce5a436 - napi_register_module_v1
13: 0x7ffb3cebef74 - napi_register_module_v1
14: 0x7ffb3ce5826f - napi_register_module_v1
15: 0x7ffb3c22644d - <unknown>
16: 0x7ffb3c23d010 - <unknown>
17: 0x7ffb3c25514b - <unknown>
18: 0x7ff74de2aac7 - node::Stop
19: 0x7ff74e67f11f - v8::internal::Builtins::builtin_handle
20: 0x7ff74e67e6b4 - v8::internal::Builtins::builtin_handle
21: 0x7ff74e67e9a8 - v8::internal::Builtins::builtin_handle
22: 0x7ff74e67e7f3 - v8::internal::Builtins::builtin_handle
23: 0x7ff74e75d67d - v8::internal::SetupIsolateDelegate::SetupHeap
24: 0x7ff74e6f3762 - v8::internal::SetupIsolateDelegate::SetupHeap
25: 0x7ff74e721810 - v8::internal::SetupIsolateDelegate::SetupHeap
26: 0x7ff74e79fc6e - v8::internal::SetupIsolateDelegate::SetupHeap
27: 0x7ff74e713a10 - v8::internal::SetupIsolateDelegate::SetupHeap
28: 0x7ff74e6f130c - v8::internal::SetupIsolateDelegate::SetupHeap
29: 0x7ff74e5c0b10 - v8::internal::Execution::CallWasm
30: 0x7ff74e5c0c1b - v8::internal::Execution::CallWasm
31: 0x7ff74e5c165a - v8::internal::Execution::TryCall
32: 0x7ff74e5a1735 - v8::internal::MicrotaskQueue::RunMicrotasks
33: 0x7ff74e5a1490 - v8::internal::MicrotaskQueue::PerformCheckpoint
34: 0x7ff74de80904 - node::CallbackScope::~CallbackScope
35: 0x7ff74de80d3b - node::CallbackScope::~CallbackScope
36: 0x7ff74de789c4 - v8::internal::compiler::Operator::EffectOutputCount
37: 0x7ff74dde5eb5 - SSL_get_quiet_shutdown
38: 0x7ff74ddd933e - v8::base::CPU::has_sse
39: 0x7ff74deb4e47 - uv_timer_stop
40: 0x7ff74deb141b - uv_async_send
41: 0x7ff74deb0bac - uv_loop_init
42: 0x7ff74deb0d4a - uv_run
43: 0x7ff74dda8a45 - v8::internal::AsmJsScanner::GetIdentifierString
44: 0x7ff74de21227 - node::Start
45: 0x7ff74dc7685c - RC4_options
46: 0x7ff74ec31c08 - v8::internal::compiler::RepresentationChanger::Uint32OverflowOperatorFor
47: 0x7ffb97cc7034 - BaseThreadInitThunk
48: 0x7ffb97e02651 - RtlUserThreadStart
Error: Unexpected token c in JSON at position 0
```
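A minimal, self-contained Rust sketch (not Prisma's actual code) of why the engine's message reads exactly as it does: "called `Option::unwrap()` on a `None` value" is the standard panic produced by `Option::unwrap` when the value is `None`. The variable name below is purely illustrative:

```rust
fn main() {
    // Silence the default panic hook so the demo's output stays clean.
    std::panic::set_hook(Box::new(|_| {}));

    // Hypothetical stand-in for whatever lookup fails at model.rs:290.
    let missing: Option<&str> = None;

    // Unwrapping a `None` panics with the message seen in the backtrace:
    // "called `Option::unwrap()` on a `None` value"
    let outcome = std::panic::catch_unwind(|| missing.unwrap());
    assert!(outcome.is_err());
    println!("unwrap on None panicked as expected");
}
```

The fix on the engine side would presumably be to handle the `None` case (e.g. `ok_or`/`expect` with a real validation error) instead of unwrapping; the `catch_unwind` here only demonstrates the panic.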
### How to reproduce
1. Make self-referencing model in Prisma2 schema as shown above
2. Run `npx prisma validate`
3. become saddened by error
### Expected behavior
Ideally, validation would check the file
### Environment & setup
- OS: Windows (above from within Git Bash terminal in VS Code)
- Database: MySQL
- Node.js version: v14.18.0
### Prisma Version
```
prisma : 3.4.0
@prisma/client : 3.4.0
Current platform : windows
Query Engine (Node-API) : libquery-engine 1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85 (at node_modules\@prisma\engines\query_engine-windows.dll.node)
Migration Engine : migration-engine-cli 1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85 (at node_modules\@prisma\engines\migration-engine-windows.exe)
Introspection Engine : introspection-core 1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85 (at node_modules\@prisma\engines\introspection-engine-windows.exe)
Format Binary : prisma-fmt 1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85 (at node_modules\@prisma\engines\prisma-fmt-windows.exe)
Default Engines Hash : 1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85
Studio : 0.438.0
```
|
process
|
prisma validate error thread panicked at called option unwrap on a none value bug description i have a self referencing table model in my schema that looks like this prisma model folder id string db char default cuid id name string db mediumtext childfolders folder relation childtoparentfolder parentfolderid string db char parentfolder folder relation childtoparentfolder fields references but when i run debug npx prisma validate i see prisma loadenv project root found at package json prisma tryloadenv environment variables loaded from env did not match key and value when parsing line environment variables declared in this file are automatically made available to prisma did not match key and value when parsing line see the documentation for more detail did not match key and value when parsing line did not match key and value when parsing line prisma supports the native connection string format for postgresql mysql sqlite sql server and mongodb preview did not match key and value when parsing line see the documentation for all the connection string options did not match key and value when parsing line did not match key and value when parsing line environment variables loaded from env prisma engines binaries to download libquery engine migration engine introspection engine prisma fmt prisma schema loaded from prisma schema prisma prisma getdmmf using cli query engine node api at node modules prisma engines query engine windows dll node thread panicked at called option unwrap on a none value libs datamodel connectors dml src model rs note run with rust backtrace environment variable to display a backtrace error syntaxerror unexpected token c in json at position at json parse at getdmmfnodeapi node modules prisma build index js at async getdmmf node modules prisma build index js at async object parse node modules prisma build index js at async main node modules prisma build index js i tried rust backtrace to get more info but that spit out the same thing except with the 
suggestion to try rust backtrace full so i tried that and get this rust backtrace full npx prisma validate environment variables loaded from env prisma schema loaded from prisma schema prisma thread panicked at called option unwrap on a none value libs datamodel connectors dml src model rs stack backtrace napi register module rust eh personality napi register module napi register module napi register module napi register module napi register module napi register module napi register module rust eh personality rust eh personality napi register module napi register module napi register module napi register module node stop internal builtins builtin handle internal builtins builtin handle internal builtins builtin handle internal builtins builtin handle internal setupisolatedelegate setupheap internal setupisolatedelegate setupheap internal setupisolatedelegate setupheap internal setupisolatedelegate setupheap internal setupisolatedelegate setupheap internal setupisolatedelegate setupheap internal execution callwasm internal execution callwasm internal execution trycall internal microtaskqueue runmicrotasks internal microtaskqueue performcheckpoint node callbackscope callbackscope node callbackscope callbackscope internal compiler operator effectoutputcount ssl get quiet shutdown base cpu has sse uv timer stop uv async send uv loop init uv run internal asmjsscanner getidentifierstring node start options internal compiler representationchanger basethreadinitthunk rtluserthreadstart error unexpected token c in json at position how to reproduce make self referencing model in schema as shown above run npx prisma validate become saddened by error expected behavior ideally validation would check the file environment setup os windows above from within git bash terminal in vs code database mysql node js version prisma version prisma prisma client current platform windows query engine node api libquery engine at node modules prisma engines query engine windows dll node 
migration engine migration engine cli at node modules prisma engines migration engine windows exe introspection engine introspection core at node modules prisma engines introspection engine windows exe format binary prisma fmt at node modules prisma engines prisma fmt windows exe default engines hash studio
| 1
|
11,639
| 14,494,970,717
|
IssuesEvent
|
2020-12-11 10:29:47
|
heim-rs/heim
|
https://api.github.com/repos/heim-rs/heim
|
opened
|
Replace bundled Duration::as_secs_f64 implementation with a real one
|
A-cpu A-process C-enhancement C-good-first-issue
|
This one code block at the `heim-process` crate: https://github.com/heim-rs/heim/blob/58d7110f803f94fb186a95a55000edea49ace7bd/heim-process/src/process/cpu_usage.rs#L27-L31 and its copy introduced in #295 (see `heim-cpu/src/usage.rs`) should be replaced with `Duration::as_secs_f64`.
MSRV was bumped to 1.45 and it is safe to use it now.
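The requested replacement can be sketched directly against `std` (no heim code involved; the duration value below is illustrative). A hand-rolled computation equivalent to the bundled block sits next to the standard-library method it should give way to:

```rust
use std::time::Duration;

fn main() {
    let delta = Duration::new(2, 500_000_000); // 2.5 s, illustrative value

    // Hand-rolled equivalent of the bundled block the issue wants removed:
    let manual = delta.as_secs() as f64 + f64::from(delta.subsec_nanos()) * 1e-9;

    // `Duration::as_secs_f64` has been stable since Rust 1.38, so it is
    // available once MSRV is 1.45:
    let builtin = delta.as_secs_f64();

    assert!((manual - builtin).abs() < 1e-12);
    println!("{builtin}"); // 2.5
}
```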
|
1.0
|
Replace bundled Duration::as_secs_f64 implementation with a real one - This one code block at the `heim-process` crate: https://github.com/heim-rs/heim/blob/58d7110f803f94fb186a95a55000edea49ace7bd/heim-process/src/process/cpu_usage.rs#L27-L31 and its copy introduced in #295 (see `heim-cpu/src/usage.rs`) should be replaced with `Duration::as_secs_f64`.
MSRV was bumped to 1.45 and it is safe to use it now.
|
process
|
replace bundled duration as secs implementation with a real one this one code block at heim process crate and its copy introduced in see heim cpu src usage rs should be replaced with duration as secs msrv was bumped to and it is safe to it now
| 1
|
17,597
| 23,424,464,687
|
IssuesEvent
|
2022-08-14 07:08:38
|
Battle-s/battle-school-backend
|
https://api.github.com/repos/Battle-s/battle-school-backend
|
closed
|
[FEAT] Design and implement sign-up and login logic
|
feature :computer: processing :hourglass_flowing_sand:
|
## Description
> Describe the issue here. It is also good to note the assignee.
## Checklist
> List the conditions required to close this issue as checkboxes.
- [x] service
- [x] controller
- [x] test with swagger
- [ ] deploy swagger
## References
> Add any reference material needed to resolve the issue.
## Related discussion
> If there was any discussion about the issue, briefly summarize it.
|
1.0
|
[FEAT] Design and implement sign-up and login logic - ## Description
> Describe the issue here. It is also good to note the assignee.
## Checklist
> List the conditions required to close this issue as checkboxes.
- [x] service
- [x] controller
- [x] test with swagger
- [ ] deploy swagger
## References
> Add any reference material needed to resolve the issue.
## Related discussion
> If there was any discussion about the issue, briefly summarize it.
|
process
|
회원 가입 및 로그인 로직 설계 및 작성 설명 이슈에 대한 설명을 작성합니다 담당자도 함께 작성하면 좋습니다 체크사항 이슈를 close하기 위해 필요한 조건들을 체크박스로 나열합니다 service controller swagger test해보기 swagger 배포 참고자료 이슈를 해결하기 위해 필요한 참고자료가 있다면 추가합니다 관련 논의 이슈에 대한 논의가 있었다면 논의 내용을 간략하게 추가합니다
| 1
|
7,902
| 4,102,394,750
|
IssuesEvent
|
2016-06-04 00:50:43
|
jeff1evesque/machine-learning
|
https://api.github.com/repos/jeff1evesque/machine-learning
|
closed
|
Move arguments in 'setup_tables.py' into 'settings.yaml'
|
build enhancement
|
We will move the arguments used for populating `tbl_model_type` into `settings.yaml`. Then, we will respectively reference the yaml attribute within `setup_tables.py`.
|
1.0
|
Move arguments in 'setup_tables.py' into 'settings.yaml' - We will move the arguments used for populating `tbl_model_type` into `settings.yaml`. Then, we will respectively reference the yaml attribute within `setup_tables.py`.
|
non_process
|
move arguments in setup tables py into settings yaml we will move the arguments used for populating tbl model type into settings yaml then we will respectively reference the yaml attribute within setup tables py
| 0
|
20,506
| 27,167,377,418
|
IssuesEvent
|
2023-02-17 16:21:37
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Section "Use a template parameter as part of a condition" is confusing
|
doc-bug Pri1 azure-devops-pipelines/svc azure-devops-pipelines-process/subsvc
|
This section is discussing the use of parameters and conditions that use parameters on tasks. This section first discusses their use directly in a pipeline yaml, and then later discusses their use with pipeline templates using the _extends_ and _template:_ syntax.
The way this is explained and laid out is a bit confusing because both the template and the outer pipeline yaml are using "true" as the default value. Additionally, it says that:
_As a result, if you set the parameter value in both the template and the pipeline YAML files, the value from the template will get used in your condition._
Is this saying that you can't pass in the parameter from the pipeline (outer) yaml into the template? Is there a work around? This seems like it would be a big deal and huge limitation with _extends template_ functionality.
Additionally, the example includes the condition inside an _and()_ with a _succeeded()_. What is the purpose of this? It isn't explained why this is here and needed vs just a _eq('${{ parameters.doThing }}', 'true')_
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 21e5cee4-eaae-3a96-db91-540ac759e83a
* Version Independent ID: 9bdc837c-ffe0-d999-f922-f3a5debc7f92
* Content: [Conditions - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml%2Cstages)
* Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/conditions.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Section "Use a template parameter as part of a condition" is confusing - This section is discussing the use of parameters and conditions that use parameters on tasks. This section first discusses their use directly in a pipeline yaml, and then later discusses their use with pipeline templates using the _extends_ and _template:_ syntax.
The way this is explained and laid out is a bit confusing because both the template and the outer pipeline yaml are using "true" as the default value. Additionally, it says that:
_As a result, if you set the parameter value in both the template and the pipeline YAML files, the value from the template will get used in your condition._
Is this saying that you can't pass in the parameter from the pipeline (outer) yaml into the template? Is there a work around? This seems like it would be a big deal and huge limitation with _extends template_ functionality.
Additionally, the example includes the condition inside an _and()_ with a _succeeded()_. What is the purpose of this? It isn't explained why this is here and needed vs just a _eq('${{ parameters.doThing }}', 'true')_
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 21e5cee4-eaae-3a96-db91-540ac759e83a
* Version Independent ID: 9bdc837c-ffe0-d999-f922-f3a5debc7f92
* Content: [Conditions - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml%2Cstages)
* Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/conditions.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
section use a template parameter as part of a condition is confusing this section is discussing the use of parameters and conditions that use parameters on tasks this section first discusses their use directly in a pipeline yaml and then later discusses their use with pipeline templates using the extends and template syntax the way this is explained and laid out is a bit confusing because both the template and the out pipeline yaml are using true as the default value additionally it says that as a result if you set the parameter value in both the template and the pipeline yaml files the value from the template will get used in your condition is this saying that you can t pass in the paramter from the pipeline outer yaml into the template is there a work around this seems like it would be a big deal and huge limitation with extends template functionality additionally the example includes the condition inside an and with a succeedded what is the purpose of this it isn t explained why this is here and needed vs just a eq parameters dothing true document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id eaae version independent id content content source service azure devops pipelines sub service azure devops pipelines process github login juliakm microsoft alias jukullam
| 1
|
22,000
| 30,501,433,116
|
IssuesEvent
|
2023-07-18 14:11:15
|
kulturpass-de/kulturpass-app
|
https://api.github.com/repos/kulturpass-de/kulturpass-app
|
closed
|
Identification still not possible with app version 1.0.1
|
In Process
|
The identification process is still not possible on Android after updating the app to version 1.0.1. The app now gets two screens further (Which information will be read?, ...), but the ID card is recognized (vibration when held against the phone) yet, with the same error as before, still not added (AA2_TIMEOUT).
|
1.0
|
Identification still not possible with app version 1.0.1 - The identification process is still not possible on Android after updating the app to version 1.0.1. The app now gets two screens further (Which information will be read?, ...), but the ID card is recognized (vibration when held against the phone) yet, with the same error as before, still not added (AA2_TIMEOUT).
|
process
|
identifizierung auch bei appversion nicht möglich der identifizierungsprozess ist nach dem appupdate auf version auf android immernoch nicht möglich die app kommt zwar nun zwar bildschirme weiter welche infos werden gelesen aber der ausweis wird mit gleichem fehler zwar erkannt vibration beim dranhalten aber halt nicht hinzufügt timeout
| 1
|
45,404
| 7,181,126,735
|
IssuesEvent
|
2018-02-01 02:58:51
|
dart-lang/angular
|
https://api.github.com/repos/dart-lang/angular
|
closed
|
Remove or rewrite `doc/compiler_flags.md`
|
documentation
|
References to `pub build` and `transformers`. So quaint...
|
1.0
|
Remove or rewrite `doc/compiler_flags.md` - References to `pub build` and `transformers`. So quaint...
|
non_process
|
remove or rewrite doc compiler flags md references to pub build and transformers so quaint
| 0
|
6,808
| 9,955,250,815
|
IssuesEvent
|
2019-07-05 10:29:46
|
Jeffail/benthos
|
https://api.github.com/repos/Jeffail/benthos
|
closed
|
RFC: Avro support
|
help wanted processors
|
Hey all, I'm not currently using Avro at all but I figured it would be useful to start adding support. I've implemented an experimental processor that does some very basic conversions to/from JSON based on a schema and encoding.
It would be very useful if users of Avro could let me know if this solves any real problems you might have and what else you feel is needed.
|
1.0
|
RFC: Avro support - Hey all, I'm not currently using Avro at all but I figured it would be useful to start adding support. I've implemented an experimental processor that does some very basic conversions to/from JSON based on a schema and encoding.
It would be very useful if users of Avro could let me know if this solves any real problems you might have and what else you feel is needed.
|
process
|
rfc avro support hey all i m not currently using avro at all but i figured it would be useful to start adding support i ve implemented an experimental processor that does some very basic conversions to from json based on a schema and encoding it would be very useful if users of avro could let me know if this solves any real problems you might have and what else you feel is needed
| 1
|
166,168
| 12,893,562,330
|
IssuesEvent
|
2020-07-13 21:55:22
|
opendistro-for-elasticsearch/index-management
|
https://api.github.com/repos/opendistro-for-elasticsearch/index-management
|
closed
|
Add codecov config to control coverage requirements
|
enhancement testing
|
Our PRs right now sometimes fail the codecov status check because it drops by a trivial amount (usually from randomization in the tests). Should fix by adding our own codecov config to set thresholds and coverage minimums.
|
1.0
|
Add codecov config to control coverage requirements - Our PRs right now sometimes fail the codecov status check because it drops by a trivial amount (usually from randomization in the tests). Should fix by adding our own codecov config to set thresholds and coverage minimums.
|
non_process
|
add codecov config to control coverage requirements our prs right now sometimes fail the codecov status check because it drops by a trivial amount usually from randomization in the tests should fix by adding our own codecov config to set thresholds and coverage minimums
| 0
|
17,065
| 22,501,861,378
|
IssuesEvent
|
2022-06-23 12:33:02
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
opened
|
Remove deprecated Go commands
|
kind/toil team/process-automation
|
**Description**
With the update to Go 1.17 the `go get` command is deprecated. https://go.dev/doc/go-get-install-deprecation#:~:text=Starting%20in%20Go%201.17%2C%20installing,the%20%2Dd%20flag%20were%20enabled.
**Details**
Starting with 1.17 go get ... is deprecated (but still seems to work).
It should be replaced with go install ... Go install can either take a version identifier or nothing.
My understanding is that when it takes a version identifier, it ignores the current go.mod file and installs it "globally". When it takes no version number, it picks up version number from go.mod file and installs it locally.
We use go get to install the following:
* `go-bindata` Used to read in the version. Also, there is a comment to use go:embed when updating the version number. Not sure what this is about, either
* `gocompat` afaik this is a tool to re-generate the .gocompat file after changes to the interfaces
* `protoc-gen-go` used to generate go sources from proto file
All are build time commands.
`go-bindata` can most likely be replaced by https://pkg.go.dev/embed
|
1.0
|
Remove deprecated Go commands - **Description**
With the update to Go 1.17 the `go get` command is deprecated. https://go.dev/doc/go-get-install-deprecation#:~:text=Starting%20in%20Go%201.17%2C%20installing,the%20%2Dd%20flag%20were%20enabled.
**Details**
Starting with 1.17 go get ... is deprecated (but still seems to work).
It should be replaced with go install ... Go install can either take a version identifier or nothing.
My understanding is that when it takes a version identifier, it ignores the current go.mod file and installs it "globally". When it takes no version number, it picks up version number from go.mod file and installs it locally.
We use go get to install the following:
* `go-bindata` Used to read in the version. Also, there is a comment to use go:embed when updating the version number. Not sure what this is about, either
* `gocompat` afaik this is a tool to re-generate the .gocompat file after changes to the interfaces
* `protoc-gen-go` used to generate go sources from proto file
All are build time commands.
`go-bindata` can most likely be replaced by https://pkg.go.dev/embed
|
process
|
remove deprecated go commands description with the update to go the go get command is deprecated details starting with go get is deprecated but still seems to work it should be replaced with go install go install can either take a version identifier or nothing my understanding is that when it takes a version identifier it ignores the current go mod file and installs it globally when it takes no version number it picks up version number from go mod file and installs it locally we use go get to install the following go bindata used to read in the version also there is a comment to use go embed when updating the version number not sure what this is about either gocompat afaik this is a tool to re generate the gocompat file after changes to the interfaces protoc gen go used to generate go sources from proto file all are build time commands go bindata can most likely be replaced by
| 1
|
21,094
| 28,045,100,661
|
IssuesEvent
|
2023-03-28 21:59:10
|
Azure/azure-sdk-tools
|
https://api.github.com/repos/Azure/azure-sdk-tools
|
closed
|
Actions timing: Changes made by the user, while an action was processing and revert those changes
|
bug Central-EngSys GitHub Event Processor
|
Actions, take time to setup and run but during this time a user can make changes to an Issue or Pull Request. For example, a user can create an Issue with no labels and, after the issue has been created, immediately add labels. Meanwhile, the issues opened event has fired, the payload's issue is the issue in the state that it was when it was created, which means it had no labels. Part of the Initial Issue Triage processing adds at least one label. When an Issue is updated with an IssueUpdate, this isn't a merge, it's a replace as the labels in the IssueUpdate are what's set on the Issue. This results in removing the labels that were added by the user while the issues opened event was still processing. Right now, this could happen with just about any action.
This is the [example](https://github.com/Azure/azure-sdk-for-net/issues/35159#event-8856171762) that @jsquire pointed out to me.
[I was able to reproduce this in the test repository](https://github.com/azure-sdk/github-event-processor-test/issues/23#event-8856721763).
The largest part of an action's processing time is checking out the repository. Second to this would be the AZ CLI commands, which only run on issues opened. Even in the test repository, which has massively shorter sync times, this can definitely happen.
/CC @weshaggard @benbp @kurtzeborn
While it is possible to do this with FabricBot today, the window is much narrower, so much so that someone would have to be lightning quick in order to make this happen.
|
1.0
|
Actions timing: Changes made by the user, while an action was processing and revert those changes - Actions, take time to setup and run but during this time a user can make changes to an Issue or Pull Request. For example, a user can create an Issue with no labels and, after the issue has been created, immediately add labels. Meanwhile, the issues opened event has fired, the payload's issue is the issue in the state that it was when it was created, which means it had no labels. Part of the Initial Issue Triage processing adds at least one label. When an Issue is updated with an IssueUpdate, this isn't a merge, it's a replace as the labels in the IssueUpdate are what's set on the Issue. This results in removing the labels that were added by the user while the issues opened event was still processing. Right now, this could happen with just about any action.
This is the [example](https://github.com/Azure/azure-sdk-for-net/issues/35159#event-8856171762) that @jsquire pointed out to me.
[I was able to reproduce this in the test repository](https://github.com/azure-sdk/github-event-processor-test/issues/23#event-8856721763).
The largest part of an action's processing time is checking out the repository. Second to this would be the AZ CLI commands, which only run on issues opened. Even in the test repository, which has massively shorter sync times, this can definitely happen.
/CC @weshaggard @benbp @kurtzeborn
While it is possible to do this with FabricBot today, the window is much narrower, so much so that someone would have to be lightning quick in order to make this happen.
|
process
|
actions timing changes made by the user while an action was processing and revert those changes actions take time to setup and run but during this time a user can make changes to an issue or pull request for example a user can create an issue with no labels and after the issue has been created immediately add labels meanwhile the issues opened event has fired the payload s issue is the issue in the state that it was when it was created which means it had no labels part of the initial issue triage processing adds at least one label when an issue is updated with an issueupdate this isn t a merge it s a replace as the labels in the issueupdate are what s set on the issue this results in removing the labels that were added by the user while the issues opened event was still processing right now this could happen with just about any action this is the that jsquire pointed out to me a largest part action s processing time is checking out the repository second to this would be the az cli commands which only run on issues opened even in the test repository which has massively shorter sync times this can definitely happen cc weshaggard benbp kurtzeborn while it is possible to do this with fabricbot today the window is much narrower so much so that someone would have to be lightning quick in order to make this happen
| 1
|
8,195
| 11,394,170,157
|
IssuesEvent
|
2020-01-30 08:46:09
|
assimp/assimp
|
https://api.github.com/repos/assimp/assimp
|
closed
|
EmbedTexturesProcess leaks memory
|
Bug Postprocessing
|
The `EmbedTexturesProcess` class allocates new memory for `pScene->mTextures`, but doesn't deallocate the old memory:
https://github.com/assimp/assimp/blob/7b17cb8ddb57eeafb99b5a0de83ea2fc86c7fe67/code/PostProcessing/EmbedTexturesProcess.cpp#L129
|
1.0
|
EmbedTexturesProcess leaks memory - The `EmbedTexturesProcess` class allocates new memory for `pScene->mTextures`, but doesn't deallocate the old memory:
https://github.com/assimp/assimp/blob/7b17cb8ddb57eeafb99b5a0de83ea2fc86c7fe67/code/PostProcessing/EmbedTexturesProcess.cpp#L129
|
process
|
embedtexturesprocess leaks memory the embedtexturesprocess class allocates new memory for pscene mtextures but doesn t deallocate the old memory
| 1
|
39,209
| 19,730,339,381
|
IssuesEvent
|
2022-01-14 01:13:08
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
closed
|
NNAPI on android 11 fails with movenet fp16 and int8 tflite models
|
comp:lite type:performance TF 2.4
|
<em>Please make sure that this is an issue related to performance of TensorFlow.
As per our
[GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md),
we only address code/doc bugs, performance issues, feature requests and
build/installation issues on GitHub. tag:performance_template</em>
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): aarch64 Android 11
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:
**Describe the current behavior**
I have issues while working with movenet tflite models on android 11 NNAPI 1.3
the movenet models used are sourced from tfhub:
1. [https://tfhub.dev/google/lite-model/movenet/singlepose/lightning/tflite/float16/4](url)
2. [https://tfhub.dev/google/lite-model/movenet/singlepose/lightning/tflite/int8/4](url)
the above two singlepose movenet lightning tflite models are float16 and INT8 respectively, and I was trying to perform benchmarking of the same using the prebuilt benchmark model for android_aarch64 sourced from the tflite website:
[https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/android_aarch64_benchmark_model](url)
I am easily able to benchmark the models on CPU and GPU, but when I try to run it on NNAPI, the benchmarking fails, which is interesting because even if the model is not supported by NNAPI delegate, it should fallback on CPU which is not happening. These models fail to execute on NNAPI CPU as well which is strange.
log for lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite:
> $ ./android_aarch64_benchmark_model --graph=lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite --use_nnapi=1 --nnapi_accelerator_name=nnapi-reference
STARTING!
Log parameter values verbosely: [0]
Graph: [lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite]
Use NNAPI: [1]
NNAPI accelerator name: [nnapi-reference]
NNAPI accelerators available: [nnapi-reference]
Loaded model lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for NNAPI.
ERROR: NN API returned error ANEURALNETWORKS_BAD_DATA at line 992 while adding operation.
ERROR: Node number 303 (TfLiteNnapiDelegate) failed to prepare.
ERROR: Restored original execution plan after delegate application failure.
Failed to apply NNAPI delegate.
Benchmarking failed.
Node 303 is a GatherNd node, but I am not sure why it fails at this node because there are two more GatherNd nodes which come before node 303.
log for lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite:
> $ ./android_aarch64_benchmark_model --graph=lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite --use_nnapi=1 --nnapi_accelerator_name=nnapi-reference
STARTING!
Log parameter values verbosely: [0]
Graph: [lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite]
Use NNAPI: [1]
NNAPI accelerator name: [nnapi-reference]
NNAPI accelerators available: [nnapi-reference]
Loaded model lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for NNAPI.
ERROR: NN API returned error ANEURALNETWORKS_BAD_DATA at line 992 while adding operation.
ERROR: Node number 162 (TfLiteNnapiDelegate) failed to prepare.
ERROR: Restored original execution plan after delegate application failure.
Failed to apply NNAPI delegate.
Benchmarking failed.
Node 162 for this model is a separable_conv2d/bias node.
The logcat files for both runs are attached in the logs section.
I get the same result if I remove '--nnapi_accelerator_name=nnapi-reference' or add the '--nnapi_allow_fp16=true' parameter: the same benchmarking failure as above.
These MoveNet models work well on CPU and GPU:
1. lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite CPU 4 threads:
> $ ./android_aarch64_benchmark_model --graph=lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite --num_threads=4
STARTING!
Log parameter values verbosely: [0]
Num threads: [4]
Graph: [lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite]
#threads used for CPU inference: [4]
Loaded model lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite
INFO: Initialized TensorFlow Lite runtime.
The input model file size (MB): 2.89484
Initialized session in 4.574ms.
2. lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite GPU:
> $ ./android_aarch64_benchmark_model --graph=lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite --use_gpu=1
STARTING!
Log parameter values verbosely: [0]
Graph: [lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite]
Use gpu: [1]
Loaded model lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for GPU.
ERROR: Following operations are not supported by GPU delegate:
ARG_MAX: Operation is not supported.
CAST: Operation is not supported.
CONCATENATION: OP is supported, but tensor type isn't matched!
FLOOR_DIV: Operation is not supported.
GATHER_ND: Operation is not supported.
MUL: OP is supported, but tensor type isn't matched!
PACK: OP is supported, but tensor type isn't matched!
RESHAPE: OP is supported, but tensor type isn't matched!
SUB: OP is supported, but tensor type isn't matched!
UNPACK: Operation is not supported.
100 operations will run on the GPU, and the remaining 57 operations will run on the CPU.
INFO: Initialized OpenCL-based API.
INFO: Created 1 GPU delegate kernels.
Explicitly applied GPU delegate, and the model graph will be partially executed by the delegate w/ 1 delegate kernels.
The input model file size (MB): 2.89484
Initialized session in 2957.12ms.
3. lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite CPU 4 threads:
> $ ./android_aarch64_benchmark_model --graph=lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite --num_threads=4
STARTING!
Log parameter values verbosely: [0]
Num threads: [4]
Graph: [lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite]
#threads used for CPU inference: [4]
Loaded model lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite
INFO: Initialized TensorFlow Lite runtime.
The input model file size (MB): 4.75851
Initialized session in 3.211ms.
4. lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite GPU:
> $ ./android_aarch64_benchmark_model --graph=lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite --use_gpu=1
STARTING!
Log parameter values verbosely: [0]
Graph: [lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite]
Use gpu: [1]
Loaded model lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for GPU.
ERROR: Following operations are not supported by GPU delegate:
ARG_MAX: Operation is not supported.
CAST: Operation is not supported.
CONCATENATION: OP is supported, but tensor type isn't matched!
DEQUANTIZE:
FLOOR_DIV: Operation is not supported.
GATHER_ND: Operation is not supported.
MUL: OP is supported, but tensor type isn't matched!
PACK: OP is supported, but tensor type isn't matched!
RESHAPE: OP is supported, but tensor type isn't matched!
SUB: OP is supported, but tensor type isn't matched!
UNPACK: Operation is not supported.
245 operations will run on the GPU, and the remaining 52 operations will run on the CPU.
INFO: Initialized OpenCL-based API.
INFO: Created 1 GPU delegate kernels.
Explicitly applied GPU delegate, and the model graph will be partially executed by the delegate w/ 1 delegate kernels.
The input model file size (MB): 4.75851
Initialized session in 2013.4ms.
To make sure that NNAPI works for other models, I used MobileNetV2 FP16 and INT8 models from TF Hub:
1. mobilenetv2-coco_fp16 : [https://tfhub.dev/sayakpaul/lite-model/mobilenetv2-coco/fp16/1](url)
2. mobilenetv2-coco_int8 : [https://tfhub.dev/sayakpaul/lite-model/mobilenetv2-coco/int8/1](url)
and I face no issues running them on the NNAPI CPU.
Output for mobilenetv2-coco/fp16 on NNAPI CPU:
> $ ./android_aarch64_benchmark_model --graph=lite-model_mobilenetv2-coco_fp16_1.tflite --use_nnapi=1 --nnapi_accelerator_name=nnapi-reference
STARTING!
Log parameter values verbosely: [0]
Graph: [lite-model_mobilenetv2-coco_fp16_1.tflite]
Use NNAPI: [1]
NNAPI accelerator name: [nnapi-reference]
NNAPI accelerators available: [nnapi-reference]
Loaded model lite-model_mobilenetv2-coco_fp16_1.tflite
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for NNAPI.
Explicitly applied NNAPI delegate, and the model graph will be partially executed by the delegate w/ 11 delegate kernels.
The input model file size (MB): 4.2551
Initialized session in 99.601ms.
So something in the MoveNet models fails when using NNAPI instead of falling back to the CPU. One reason I can think of, after analysing the logcat, is the tensor-type mismatch on the CONCATENATION op, but I am not sure.
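One way to probe that tensor-type hypothesis offline is to flag any CONCATENATION node whose inputs do not all share one dtype. The sketch below assumes the (op, input dtypes) pairs have already been extracted by some other means; it does not use the real TFLite introspection API:

```python
def concat_type_mismatches(nodes):
    """nodes: list of (op_name, [input_dtypes]) pairs.
    Return the CONCATENATION nodes whose inputs mix dtypes --
    candidates for the "tensor type isn't matched" rejection."""
    bad = []
    for name, dtypes in nodes:
        if name == "CONCATENATION" and len(set(dtypes)) > 1:
            bad.append((name, dtypes))
    return bad

nodes = [
    ("CONCATENATION", ["float32", "float32"]),  # consistent inputs: fine
    ("CONCATENATION", ["float32", "int32"]),    # mismatched inputs: flagged
]
print(concat_type_mismatches(nodes))  # → [('CONCATENATION', ['float32', 'int32'])]
```

A check like this on the MoveNet graphs would at least confirm whether the CONCATENATION inputs really mix types, though it would not explain why the delegate aborts instead of falling back.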
**Describe the expected behavior**
The MoveNet models, even if not entirely supported on NNAPI, should fall back to the CPU. If fallback is disabled but the graph is forced through the NNAPI CPU, it should still give results similar to running directly on the CPU, but that is not observed.
**Other info / logs** Include any logs or source code that would be helpful to
diagnose the problem. If including tracebacks, please include the full
traceback. Large logs and files should be attached.
log files:
[lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite_android_nnapi_logcat.txt](https://github.com/tensorflow/tensorflow/files/7825438/lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite_android_nnapi_logcat.txt)
[lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite_android_nnapi_logcat.txt](https://github.com/tensorflow/tensorflow/files/7825446/lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite_android_nnapi_logcat.txt)
|
True
|
NNAPI on android 11 fails with movenet fp16 and int8 tflite models - <em>Please make sure that this is an issue related to performance of TensorFlow.
As per our
[GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md),
we only address code/doc bugs, performance issues, feature requests and
build/installation issues on GitHub. tag:performance_template</em>
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): aarch64 Android 11
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:
**Describe the current behavior**
I have issues while working with movenet tflite models on android 11 NNAPI 1.3
the movenet models used are sourced from tfhub:
1. [https://tfhub.dev/google/lite-model/movenet/singlepose/lightning/tflite/float16/4](url)
2. [https://tfhub.dev/google/lite-model/movenet/singlepose/lightning/tflite/int8/4](url)
the above two singlepose movenet lighting tflite models are float16 and INT8 respectively, and I was trying to perform benchmarking of the same using the prebuilt benchmark model for android_aarch64 sourced from tflite website:
[https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/android_aarch64_benchmark_model](url)
I am easily able to benchmark the models on CPU and GPU, but when I try to run them on NNAPI, the benchmarking fails. This is interesting because even if a model is not supported by the NNAPI delegate, it should fall back to the CPU, which is not happening. These models fail to execute on the NNAPI CPU (nnapi-reference) path as well, which is strange.
log for lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite:
> $ ./android_aarch64_benchmark_model --graph=lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite --use_nnapi=1 --nnapi_accelerator_name=nnapi-reference
STARTING!
Log parameter values verbosely: [0]
Graph: [lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite]
Use NNAPI: [1]
NNAPI accelerator name: [nnapi-reference]
NNAPI accelerators available: [nnapi-reference]
Loaded model lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for NNAPI.
ERROR: NN API returned error ANEURALNETWORKS_BAD_DATA at line 992 while adding operation.
ERROR: Node number 303 (TfLiteNnapiDelegate) failed to prepare.
ERROR: Restored original execution plan after delegate application failure.
Failed to apply NNAPI delegate.
Benchmarking failed.
Node 303 is a GatherNd node, but I am not sure why it fails at this node, because two more GatherNd nodes come before node 303.
log for lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite:
> $ ./android_aarch64_benchmark_model --graph=lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite --use_nnapi=1 --nnapi_accelerator_name=nnapi-reference
STARTING!
Log parameter values verbosely: [0]
Graph: [lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite]
Use NNAPI: [1]
NNAPI accelerator name: [nnapi-reference]
NNAPI accelerators available: [nnapi-reference]
Loaded model lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for NNAPI.
ERROR: NN API returned error ANEURALNETWORKS_BAD_DATA at line 992 while adding operation.
ERROR: Node number 162 (TfLiteNnapiDelegate) failed to prepare.
ERROR: Restored original execution plan after delegate application failure.
Failed to apply NNAPI delegate.
Benchmarking failed.
Node 162 for this model is a separable_conv2d/bias node.
The logcat files for both runs are attached in the logs section.
I get the same result if I remove '--nnapi_accelerator_name=nnapi-reference' or add the '--nnapi_allow_fp16=true' parameter: the same benchmarking failure as above.
These MoveNet models work well on CPU and GPU:
1. lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite CPU 4 threads:
> $ ./android_aarch64_benchmark_model --graph=lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite --num_threads=4
STARTING!
Log parameter values verbosely: [0]
Num threads: [4]
Graph: [lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite]
#threads used for CPU inference: [4]
Loaded model lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite
INFO: Initialized TensorFlow Lite runtime.
The input model file size (MB): 2.89484
Initialized session in 4.574ms.
2. lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite GPU:
> $ ./android_aarch64_benchmark_model --graph=lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite --use_gpu=1
STARTING!
Log parameter values verbosely: [0]
Graph: [lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite]
Use gpu: [1]
Loaded model lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for GPU.
ERROR: Following operations are not supported by GPU delegate:
ARG_MAX: Operation is not supported.
CAST: Operation is not supported.
CONCATENATION: OP is supported, but tensor type isn't matched!
FLOOR_DIV: Operation is not supported.
GATHER_ND: Operation is not supported.
MUL: OP is supported, but tensor type isn't matched!
PACK: OP is supported, but tensor type isn't matched!
RESHAPE: OP is supported, but tensor type isn't matched!
SUB: OP is supported, but tensor type isn't matched!
UNPACK: Operation is not supported.
100 operations will run on the GPU, and the remaining 57 operations will run on the CPU.
INFO: Initialized OpenCL-based API.
INFO: Created 1 GPU delegate kernels.
Explicitly applied GPU delegate, and the model graph will be partially executed by the delegate w/ 1 delegate kernels.
The input model file size (MB): 2.89484
Initialized session in 2957.12ms.
3. lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite CPU 4 threads:
> $ ./android_aarch64_benchmark_model --graph=lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite --num_threads=4
STARTING!
Log parameter values verbosely: [0]
Num threads: [4]
Graph: [lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite]
#threads used for CPU inference: [4]
Loaded model lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite
INFO: Initialized TensorFlow Lite runtime.
The input model file size (MB): 4.75851
Initialized session in 3.211ms.
4. lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite GPU:
> $ ./android_aarch64_benchmark_model --graph=lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite --use_gpu=1
STARTING!
Log parameter values verbosely: [0]
Graph: [lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite]
Use gpu: [1]
Loaded model lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for GPU.
ERROR: Following operations are not supported by GPU delegate:
ARG_MAX: Operation is not supported.
CAST: Operation is not supported.
CONCATENATION: OP is supported, but tensor type isn't matched!
DEQUANTIZE:
FLOOR_DIV: Operation is not supported.
GATHER_ND: Operation is not supported.
MUL: OP is supported, but tensor type isn't matched!
PACK: OP is supported, but tensor type isn't matched!
RESHAPE: OP is supported, but tensor type isn't matched!
SUB: OP is supported, but tensor type isn't matched!
UNPACK: Operation is not supported.
245 operations will run on the GPU, and the remaining 52 operations will run on the CPU.
INFO: Initialized OpenCL-based API.
INFO: Created 1 GPU delegate kernels.
Explicitly applied GPU delegate, and the model graph will be partially executed by the delegate w/ 1 delegate kernels.
The input model file size (MB): 4.75851
Initialized session in 2013.4ms.
To make sure that NNAPI works for other models, I used MobileNetV2 FP16 and INT8 models from TF Hub:
1. mobilenetv2-coco_fp16 : [https://tfhub.dev/sayakpaul/lite-model/mobilenetv2-coco/fp16/1](url)
2. mobilenetv2-coco_int8 : [https://tfhub.dev/sayakpaul/lite-model/mobilenetv2-coco/int8/1](url)
and I face no issues running them on the NNAPI CPU.
Output for mobilenetv2-coco/fp16 on NNAPI CPU:
> $ ./android_aarch64_benchmark_model --graph=lite-model_mobilenetv2-coco_fp16_1.tflite --use_nnapi=1 --nnapi_accelerator_name=nnapi-reference
STARTING!
Log parameter values verbosely: [0]
Graph: [lite-model_mobilenetv2-coco_fp16_1.tflite]
Use NNAPI: [1]
NNAPI accelerator name: [nnapi-reference]
NNAPI accelerators available: [nnapi-reference]
Loaded model lite-model_mobilenetv2-coco_fp16_1.tflite
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for NNAPI.
Explicitly applied NNAPI delegate, and the model graph will be partially executed by the delegate w/ 11 delegate kernels.
The input model file size (MB): 4.2551
Initialized session in 99.601ms.
So something in the MoveNet models fails when using NNAPI instead of falling back to the CPU. One reason I can think of, after analysing the logcat, is the tensor-type mismatch on the CONCATENATION op, but I am not sure.
**Describe the expected behavior**
The MoveNet models, even if not entirely supported on NNAPI, should fall back to the CPU. If fallback is disabled but the graph is forced through the NNAPI CPU, it should still give results similar to running directly on the CPU, but that is not observed.
**Other info / logs** Include any logs or source code that would be helpful to
diagnose the problem. If including tracebacks, please include the full
traceback. Large logs and files should be attached.
log files:
[lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite_android_nnapi_logcat.txt](https://github.com/tensorflow/tensorflow/files/7825438/lite-model_movenet_singlepose_lightning_tflite_float16_4.tflite_android_nnapi_logcat.txt)
[lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite_android_nnapi_logcat.txt](https://github.com/tensorflow/tensorflow/files/7825446/lite-model_movenet_singlepose_lightning_tflite_int8_4.tflite_android_nnapi_logcat.txt)
|
non_process
|
nnapi on android fails with movenet and tflite models please make sure that this is an issue related to performance of tensorflow as per our we only address code doc bugs performance issues feature requests and build installation issues on github tag performance template system information have i written custom code as opposed to using a stock example script provided in tensorflow no os platform and distribution e g linux ubuntu android mobile device e g iphone pixel samsung galaxy if the issue happens on mobile device tensorflow installed from source or binary tensorflow version use command below python version bazel version if compiling from source gcc compiler version if compiling from source cuda cudnn version gpu model and memory describe the current behavior i have issues while working with movenet tflite models on android nnapi the movenet models used are sourced from tfhub url url the above two singlepose movenet lighting tflite models are and respectively and i was trying to perform benchmarking of the same using the prebuilt benchmark model for android sourced from tflite website url i am easily able to benchmark the models on cpu and gpu but when i try to run it on nnapi the benchmarking fails which is interesting because even if the model is not supported by nnapi delegate it should fallback on cpu which is not happening these models fail to execute on nnapi cpu as well which is strange log for lite model movenet singlepose lightning tflite tflite android benchmark model graph lite model movenet singlepose lightning tflite tflite use nnapi nnapi accelerator name nnapi reference starting log parameter values verbosely graph use nnapi nnapi accelerator name nnapi accelerators available loaded model lite model movenet singlepose lightning tflite tflite info initialized tensorflow lite runtime info created tensorflow lite delegate for nnapi error nn api returned error aneuralnetworks bad data at line while adding operation error node number 
tflitennapidelegate failed to prepare error restored original execution plan after delegate application failure failed to apply nnapi delegate benchmarking failed node is a gathernd node but i am not sure why it fails at this node because there are two more gathernd nodes which come before node log for lite model movenet singlepose lightning tflite tflite android benchmark model graph lite model movenet singlepose lightning tflite tflite use nnapi nnapi accelerator name nnapi reference starting log parameter values verbosely graph use nnapi nnapi accelerator name nnapi accelerators available loaded model lite model movenet singlepose lightning tflite tflite info initialized tensorflow lite runtime info created tensorflow lite delegate for nnapi error nn api returned error aneuralnetworks bad data at line while adding operation error node number tflitennapidelegate failed to prepare error restored original execution plan after delegate application failure failed to apply nnapi delegate benchmarking failed node for this model is a separable bias node the logcat files for both the operations are attached in logs section i get the same results if i remove the nnapi accelerator name nnapi reference or add nnapi allow true parameter i still get the same benchmarking failed issue as above these movenet models work well with cpu and gpu lite model movenet singlepose lightning tflite tflite cpu threads android benchmark model graph lite model movenet singlepose lightning tflite tflite num threads starting log parameter values verbosely num threads graph threads used for cpu inference loaded model lite model movenet singlepose lightning tflite tflite info initialized tensorflow lite runtime the input model file size mb initialized session in lite model movenet singlepose lightning tflite tflite gpu android benchmark model graph lite model movenet singlepose lightning tflite tflite use gpu starting log parameter values verbosely graph use gpu loaded model lite model movenet 
singlepose lightning tflite tflite info initialized tensorflow lite runtime info created tensorflow lite delegate for gpu error following operations are not supported by gpu delegate arg max operation is not supported cast operation is not supported concatenation op is supported but tensor type isn t matched floor div operation is not supported gather nd operation is not supported mul op is supported but tensor type isn t matched pack op is supported but tensor type isn t matched reshape op is supported but tensor type isn t matched sub op is supported but tensor type isn t matched unpack operation is not supported operations will run on the gpu and the remaining operations will run on the cpu info initialized opencl based api info created gpu delegate kernels explicitly applied gpu delegate and the model graph will be partially executed by the delegate w delegate kernels the input model file size mb initialized session in lite model movenet singlepose lightning tflite tflite cpu threads android benchmark model graph lite model movenet singlepose lightning tflite tflite num threads starting log parameter values verbosely num threads graph threads used for cpu inference loaded model lite model movenet singlepose lightning tflite tflite info initialized tensorflow lite runtime the input model file size mb initialized session in lite model movenet singlepose lightning tflite tflite gpu android benchmark model graph lite model movenet singlepose lightning tflite tflite use gpu starting log parameter values verbosely graph use gpu loaded model lite model movenet singlepose lightning tflite tflite info initialized tensorflow lite runtime info created tensorflow lite delegate for gpu error following operations are not supported by gpu delegate arg max operation is not supported cast operation is not supported concatenation op is supported but tensor type isn t matched dequantize floor div operation is not supported gather nd operation is not supported mul op is supported 
but tensor type isn t matched pack op is supported but tensor type isn t matched reshape op is supported but tensor type isn t matched sub op is supported but tensor type isn t matched unpack operation is not supported operations will run on the gpu and the remaining operations will run on the cpu info initialized opencl based api info created gpu delegate kernels explicitly applied gpu delegate and the model graph will be partially executed by the delegate w delegate kernels the input model file size mb initialized session in to make sure that nnapi is working for other models i used a and model from tfhub coco url coco url and i face no issues running nnapi cpu output for coco for nnapi cpu android benchmark model graph lite model coco tflite use nnapi nnapi accelerator name nnapi reference starting log parameter values verbosely graph use nnapi nnapi accelerator name nnapi accelerators available loaded model lite model coco tflite info initialized tensorflow lite runtime info created tensorflow lite delegate for nnapi explicitly applied nnapi delegate and the model graph will be partially executed by the delegate w delegate kernels the input model file size mb initialized session in so there is something wrong in movenet models which is failing when using nnapi instead of falling back onto cpu one reason i can think of after analysing the logfile is due to tensor type not matching for concatenation op but not sure describe the expected behavior the movenet models even if not entirely supported on nnapi should fallback on cpu if fallback is disabled but graph is forced through nnapi cpu it should still give similar results as it would have when running simply on cpu but that is not observed other info logs include any logs or source code that would be helpful to diagnose the problem if including tracebacks please include the full traceback large logs and files should be attached log files
| 0
|
10,558
| 13,350,074,229
|
IssuesEvent
|
2020-08-30 05:38:06
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
closed
|
Backend provides API to retrieve list of available log types
|
p1 story team:data processing
|
### Description
Backend provides API to retrieve list of available log types
### Acceptance Criteria
- Backend provides API to retrieve list of available log types
|
1.0
|
Backend provides API to retrieve list of available log types - ### Description
Backend provides API to retrieve list of available log types
### Acceptance Criteria
- Backend provides API to retrieve list of available log types
|
process
|
backend provides api to retrieve list of available log types description backend provides api to retrieve list of available log types acceptance criteria backend provides api to retrieve list of available log types
| 1
|
182,232
| 6,668,203,779
|
IssuesEvent
|
2017-10-03 15:04:32
|
mozilla/addons-frontend
|
https://api.github.com/repos/mozilla/addons-frontend
|
reopened
|
Collection view
|
component: collections needs: ux priority: mvp project: 2017 Q3 project: desktop pages state: pull request ready triaged
|
Refs: https://github.com/mozilla/addons-frontend/issues/2782
Mocks are in the Sketch file:
<img width="710" alt="amo_desktop_-_master_sketch" src="https://user-images.githubusercontent.com/1514/30435430-a78733b4-9961-11e7-9d55-6077548b6293.png">
|
1.0
|
Collection view - Refs: https://github.com/mozilla/addons-frontend/issues/2782
Mocks are in the Sketch file:
<img width="710" alt="amo_desktop_-_master_sketch" src="https://user-images.githubusercontent.com/1514/30435430-a78733b4-9961-11e7-9d55-6077548b6293.png">
|
non_process
|
collection view refs mocks are in the sketch file img width alt amo desktop master sketch src
| 0
|
13,769
| 16,527,480,727
|
IssuesEvent
|
2021-05-26 22:25:40
|
googleapis/google-cloud-go
|
https://api.github.com/repos/googleapis/google-cloud-go
|
closed
|
bigtable: introduce resource cleanup to integration tests
|
api: bigtable type: process
|
Today, some of the bigtable integration tests create instances. It would be a good add to, as part of TestMain, to clean these up if they are stale. This may require changing naming a bit and including some form of a timestamp so we can detect stale instances easily.
|
1.0
|
bigtable: introduce resource cleanup to integration tests - Today, some of the bigtable integration tests create instances. It would be a good add to, as part of TestMain, to clean these up if they are stale. This may require changing naming a bit and including some form of a timestamp so we can detect stale instances easily.
|
process
|
bigtable introduce resource cleanup to integration tests today some of the bigtable integration tests create instances it would be a good add to as part of testmain to clean these up if they are stale this may require changing naming a bit and including some form of a timestamp so we can detect stale instances easily
| 1
|
20,601
| 27,266,189,823
|
IssuesEvent
|
2023-02-22 18:13:52
|
cse442-at-ub/project_s23-team-infinity
|
https://api.github.com/repos/cse442-at-ub/project_s23-team-infinity
|
opened
|
Create backend documentation in order to collate and organize instructions for easier on-boarding and general guidance in the backend.
|
IO Task Processing Task
|
Tests
------
1. Research what systems will be running on the UB webserver.
2. Research how cheshire will be accessed and how programs will be placed on the server.
3. Research how the database will be created, accessed, updated, and maintained.
4. Create the documentation via Google Docs for easier sharing and more concurrent updates.
5. Proofread to find grammatical errors as well as any incorrect code snippets
6. Follow documentation as if from the outside and run code snippets to see if outcome is as expected.
7. Create task on ZenHub with collated and organized documentation linked.
[https://docs.google.com/document/d/1oRdNRbrfuvt2v9fK8POguemvKRa1VSf3GjAfRipbw5Y/edit](url)
|
1.0
|
Create backend documentation in order to collate and organize instructions for easier on-boarding and general guidance in the backend. - Tests
------
1. Research what systems will be running on the UB webserver.
2. Research how cheshire will be accessed and how programs will be placed on the server.
3. Research how the database will be created, accessed, updated, and maintained.
4. Create the documentation via Google Docs for easier sharing and more concurrent updates.
5. Proofread to find grammatical errors as well as any incorrect code snippets
6. Follow documentation as if from the outside and run code snippets to see if outcome is as expected.
7. Create task on ZenHub with collated and organized documentation linked.
[https://docs.google.com/document/d/1oRdNRbrfuvt2v9fK8POguemvKRa1VSf3GjAfRipbw5Y/edit](url)
|
process
|
create backend documentation in order to collate and organize instructions for easier on boarding and general guidance in the backend tests research what systems will be running on the ub webserver research how cheshire will be accessed and how programs will be placed on the server research how the database will be created accessed updated and maintained create the documentation via google docs for easier sharing and more concurrent updates proofread to find grammatical errors as well as any incorrect code snippets follow documentation as if from the outside and run code snippets to see if outcome is as expected create task on zenhub with collated and organized documentation linked url
| 1
|
15,377
| 19,562,414,135
|
IssuesEvent
|
2022-01-03 18:07:05
|
Kernem/FeRSS-Core
|
https://api.github.com/repos/Kernem/FeRSS-Core
|
closed
|
Create a useful struct for storing RSS Content
|
fetching post-processing
|
After #13 the RSS content is stored inside their respective channels, which are then again stored within a vector. This structure might prove inconvenient when attempting to do any sort of post-processing such as sort or filter, which is being requested by #4, #5, #6, #7, #8
|
1.0
|
Create a useful struct for storing RSS Content - After #13 the RSS content is stored inside their respective channels, which are then again stored within a vector. This structure might prove inconvenient when attempting to do any sort of post-processing such as sort or filter, which is being requested by #4, #5, #6, #7, #8
|
process
|
create a useful struct for storing rss content after the rss content is stored inside their respective channels which are then again stored within a vector this structure might prove inconvenient when attempting to do any sort of post processing such as sort or filter which is being requested by
| 1
|
41,477
| 6,912,253,547
|
IssuesEvent
|
2017-11-28 11:17:19
|
minishift/minishift
|
https://api.github.com/repos/minishift/minishift
|
closed
|
Minishift architecture diagram is not rendering due to permission issue
|
component/documentation kind/bug priority/major status/needs-investigation
|
Again getting permission issue for image rendering. It was fixed as part of https://github.com/minishift/minishift/pull/1700 but not sure what happened after latest sync.
Link - https://docs.openshift.org/latest/minishift/using/basic-usage.html

|
1.0
|
Minishift architecture diagram is not rendering due to permission issue - Again getting permission issue for image rendering. It was fixed as part of https://github.com/minishift/minishift/pull/1700 but not sure what happened after latest sync.
Link - https://docs.openshift.org/latest/minishift/using/basic-usage.html

|
non_process
|
minishift architecture diagram is not rendering due to permission issue again getting permission issue for image rendering it was fixed as part of but not sure what happened after latest sync link
| 0
|
159,123
| 13,756,273,914
|
IssuesEvent
|
2020-10-06 19:40:24
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
Review of new short VSP policy (documenting our desire that VFS not use password protection)
|
content-ia-team documentation-support
|
### What is your documentation request, question, comment, or issue?
I wrote up some new policy, about how we'd prefer VFS teams not use password protection for VA.gov features, and I'd like to communicate this stance to DEPO, VSP, and VFS teams. The other DEPO leads are reviewing the content to see what suggestions they have to adjust the message, but I thought I'd go ahead and get it queued up with the Content & IA team as well in case a simultaneous tone and IA check can be done.
https://github.com/department-of-veterans-affairs/va.gov-team/blob/pw-protection-usage/platform/working-with-vsp/policies-work-norms/usage-of-password-protection.md
Thank you!
|
1.0
|
Review of new short VSP policy (documenting our desire that VFS not use password protection) - ### What is your documentation request, question, comment, or issue?
I wrote up some new policy, about how we'd prefer VFS teams not use password protection for VA.gov features, and I'd like to communicate this stance to DEPO, VSP, and VFS teams. The other DEPO leads are reviewing the content to see what suggestions they have to adjust the message, but I thought I'd go ahead and get it queued up with the Content & IA team as well in case a simultaneous tone and IA check can be done.
https://github.com/department-of-veterans-affairs/va.gov-team/blob/pw-protection-usage/platform/working-with-vsp/policies-work-norms/usage-of-password-protection.md
Thank you!
|
non_process
|
review of new short vsp policy documenting our desire that vfs not use password protection what is your documentation request question comment or issue i wrote up some new policy about how we d prefer vfs teams not use password protection for va gov features and i d like to communicate this stance to depo vsp and vfs teams the other depo leads are reviewing the content to see what suggestions they have to adjust the message but i thought i d go ahead and get it queued up with the content ia team as well in case a simultaneous tone and ia check can be done thank you
| 0
|
334,964
| 10,147,409,082
|
IssuesEvent
|
2019-08-05 10:29:42
|
minishift/minishift
|
https://api.github.com/repos/minishift/minishift
|
closed
|
unexpected EOF while looking for matching when running ssh on 1.30.0
|
kind/bug os/windows priority/major status/stale
|
### General information
* Minishift version: minishift v1.30.0+186b034
* OS: Windows 10
* Hypervisor: VirtualBox
### Steps to reproduce
1. Install minishift on windows 10 machine
2. minishift start --vm-driver virtualbox
### Expected
Install success
### Actual
```
C:\Users\gmuselli>minishift start --vm-driver virtualbox
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.11.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.11.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 4 GB
vCPUs : 2
Disk size: 20 GB
-- Starting Minishift VM ................................. FAIL E0129 10:10:17.819625 24216 start.go:494] Error starting the VM: Error creating the VM. Error creating machine: Error running provisioning: ssh command error:
command : printf '%s' '-----BEGIN CERTIFICATE-----
MIIC0zCCAbugAwIBAgIQCUqwOKKObW9VpW7gUJZjujANBgkqhkiG9w0BAQsFADAT
MREwDwYDVQQKEwhnbXVzZWxsaTAeFw0xOTAxMjgxMzQzMDBaFw0yMjAxMTIxMzQz
MDBaMBMxETAPBgNVBAoTCGdtdXNlbGxpMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA+ApFgu1MgF5DtrdM4iuCHhw4RTvL8hP//6wjPhDElmKGJEr4GVO8
JHR8yyXio+79V4S93mNkaVkjJxINpJGG/ArsSx+EV73BT4v74RQWEUy5X9BTv2px
uxuzNQtJ3y7WoAQTFQmzCid7UlfXsOeOrvglj2aSKfiRr+vrjk/TiaSYgHrF9a1b
11kLUAAXqnR93U8strSfquDwjn75/xGE3fH2ktWS+Z/wmtlkhLahOlw5EcTZqI8m
NN/v+AU1wOJNpYwAwYwhlC2ok6QmrQMogaJtCgdgwkvbEo99BJUN0bzKSU8suBXD
EpMq3sT7/ZB0mPmFrhTH1pZhEQZ1pkcAXQIDAQABoyMwITAOBgNVHQ8BAf8EBAMC
AqwwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEA6oYPZJmpMyAX
5h4vfKbLmaTiuhaQybZZ3WL79PYmeBtl6xbAxC+EyE2twN3VQziLjHDy960pTDGj
zmqbbjqUnS/3zJZIhsCz9flN5RRQOesW51mB5S+TnTy406RazM+U4ocz2AGUgGYy
M+t+krpvCzyLxIYi7hmvVieEavhYxsMDrfMYbq6G9jZDMM3mkF7HFMnhZv01kP7g
adShK4DofmDr9Q737e/jiKfbwzHwgcV3S502FjkO7xz3CxdCUQb5YOwmLsc7uuyr
/wT0fULn4sNPKGwN0vPCv18jdtpyX9Y5B7EXMO3HAjXoOk+0RcmWBJ/E/zamI0Ui
DZ1pfR942g==
-----END CERTIFICATE-----
' | sudo tee /etc/docker/ca.pem
err : exit status 1
output : bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 1: syntax error: unexpected end of file
. Retrying.
Error starting the VM: Error creating the VM. Error creating machine: Error running provisioning: ssh command error:
command : printf '%!s(MISSING)' '-----BEGIN CERTIFICATE-----
MIIC0zCCAbugAwIBAgIQCUqwOKKObW9VpW7gUJZjujANBgkqhkiG9w0BAQsFADAT
MREwDwYDVQQKEwhnbXVzZWxsaTAeFw0xOTAxMjgxMzQzMDBaFw0yMjAxMTIxMzQz
MDBaMBMxETAPBgNVBAoTCGdtdXNlbGxpMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA+ApFgu1MgF5DtrdM4iuCHhw4RTvL8hP//6wjPhDElmKGJEr4GVO8
JHR8yyXio+79V4S93mNkaVkjJxINpJGG/ArsSx+EV73BT4v74RQWEUy5X9BTv2px
uxuzNQtJ3y7WoAQTFQmzCid7UlfXsOeOrvglj2aSKfiRr+vrjk/TiaSYgHrF9a1b
11kLUAAXqnR93U8strSfquDwjn75/xGE3fH2ktWS+Z/wmtlkhLahOlw5EcTZqI8m
NN/v+AU1wOJNpYwAwYwhlC2ok6QmrQMogaJtCgdgwkvbEo99BJUN0bzKSU8suBXD
EpMq3sT7/ZB0mPmFrhTH1pZhEQZ1pkcAXQIDAQABoyMwITAOBgNVHQ8BAf8EBAMC
AqwwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEA6oYPZJmpMyAX
5h4vfKbLmaTiuhaQybZZ3WL79PYmeBtl6xbAxC+EyE2twN3VQziLjHDy960pTDGj
zmqbbjqUnS/3zJZIhsCz9flN5RRQOesW51mB5S+TnTy406RazM+U4ocz2AGUgGYy
M+t+krpvCzyLxIYi7hmvVieEavhYxsMDrfMYbq6G9jZDMM3mkF7HFMnhZv01kP7g
adShK4DofmDr9Q737e/jiKfbwzHwgcV3S502FjkO7xz3CxdCUQb5YOwmLsc7uuyr
/wT0fULn4sNPKGwN0vPCv18jdtpyX9Y5B7EXMO3HAjXoOk+0RcmWBJ/E/zamI0Ui
DZ1pfR942g==
-----END CERTIFICATE-----
' | sudo tee /etc/docker/ca.pem
err : exit status 1
output : bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 1: syntax error: unexpected end of file
C:\Users\gmuselli>minishift version
minishift v1.30.0+186b034
```
### Logs
```
(minishift) Calling .GetSSHHostname
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@127.0.0.1 -o IdentitiesOnly=yes -i C:\Users\gmuselli\.minishift\machines\minishift\id_rsa -p 50357] C:\ProgramData\chocolatey\bin\ssh.exe <nil>}
About to run SSH command:
sudo systemctl -f stop docker
.SSH cmd err, output: <nil>:
(minishift) Calling .GetSSHHostname
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@127.0.0.1 -o IdentitiesOnly=yes -i C:\Users\gmuselli\.minishift\machines\minishift\id_rsa -p 50357] C:\ProgramData\chocolatey\bin\ssh.exe <nil>}
About to run SSH command:
if [ ! -z "$(ip link show docker0)" ]; then sudo ip link delete docker0; fi
SSH cmd err, output: <nil>:
Copying certs to the remote machine...
(minishift) Calling .GetSSHHostname
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@127.0.0.1 -o IdentitiesOnly=yes -i C:\Users\gmuselli\.minishift\machines\minishift\id_rsa -p 50357] C:\ProgramData\chocolatey\bin\ssh.exe <nil>}
About to run SSH command:
printf '%s' '-----BEGIN CERTIFICATE-----
MIIC0zCCAbugAwIBAgIQCUqwOKKObW9VpW7gUJZjujANBgkqhkiG9w0BAQsFADAT
MREwDwYDVQQKEwhnbXVzZWxsaTAeFw0xOTAxMjgxMzQzMDBaFw0yMjAxMTIxMzQz
MDBaMBMxETAPBgNVBAoTCGdtdXNlbGxpMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA+ApFgu1MgF5DtrdM4iuCHhw4RTvL8hP//6wjPhDElmKGJEr4GVO8
JHR8yyXio+79V4S93mNkaVkjJxINpJGG/ArsSx+EV73BT4v74RQWEUy5X9BTv2px
uxuzNQtJ3y7WoAQTFQmzCid7UlfXsOeOrvglj2aSKfiRr+vrjk/TiaSYgHrF9a1b
11kLUAAXqnR93U8strSfquDwjn75/xGE3fH2ktWS+Z/wmtlkhLahOlw5EcTZqI8m
NN/v+AU1wOJNpYwAwYwhlC2ok6QmrQMogaJtCgdgwkvbEo99BJUN0bzKSU8suBXD
EpMq3sT7/ZB0mPmFrhTH1pZhEQZ1pkcAXQIDAQABoyMwITAOBgNVHQ8BAf8EBAMC
AqwwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEA6oYPZJmpMyAX
5h4vfKbLmaTiuhaQybZZ3WL79PYmeBtl6xbAxC+EyE2twN3VQziLjHDy960pTDGj
zmqbbjqUnS/3zJZIhsCz9flN5RRQOesW51mB5S+TnTy406RazM+U4ocz2AGUgGYy
M+t+krpvCzyLxIYi7hmvVieEavhYxsMDrfMYbq6G9jZDMM3mkF7HFMnhZv01kP7g
adShK4DofmDr9Q737e/jiKfbwzHwgcV3S502FjkO7xz3CxdCUQb5YOwmLsc7uuyr
/wT0fULn4sNPKGwN0vPCv18jdtpyX9Y5B7EXMO3HAjXoOk+0RcmWBJ/E/zamI0Ui
DZ1pfR942g==
-----END CERTIFICATE-----
' | sudo tee /etc/docker/ca.pem
SSH cmd err, output: exit status 1: bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 1: syntax error: unexpected end of file
. FAIL E0129 10:27:15.374831 10196 start.go:494] Error starting the VM: Error creating the VM. Error creating machine: Error running provisioning: ssh command error:
command : printf '%s' '-----BEGIN CERTIFICATE-----
MIIC0zCCAbugAwIBAgIQCUqwOKKObW9VpW7gUJZjujANBgkqhkiG9w0BAQsFADAT
MREwDwYDVQQKEwhnbXVzZWxsaTAeFw0xOTAxMjgxMzQzMDBaFw0yMjAxMTIxMzQz
MDBaMBMxETAPBgNVBAoTCGdtdXNlbGxpMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA+ApFgu1MgF5DtrdM4iuCHhw4RTvL8hP//6wjPhDElmKGJEr4GVO8
JHR8yyXio+79V4S93mNkaVkjJxINpJGG/ArsSx+EV73BT4v74RQWEUy5X9BTv2px
uxuzNQtJ3y7WoAQTFQmzCid7UlfXsOeOrvglj2aSKfiRr+vrjk/TiaSYgHrF9a1b
11kLUAAXqnR93U8strSfquDwjn75/xGE3fH2ktWS+Z/wmtlkhLahOlw5EcTZqI8m
NN/v+AU1wOJNpYwAwYwhlC2ok6QmrQMogaJtCgdgwkvbEo99BJUN0bzKSU8suBXD
EpMq3sT7/ZB0mPmFrhTH1pZhEQZ1pkcAXQIDAQABoyMwITAOBgNVHQ8BAf8EBAMC
AqwwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEA6oYPZJmpMyAX
5h4vfKbLmaTiuhaQybZZ3WL79PYmeBtl6xbAxC+EyE2twN3VQziLjHDy960pTDGj
zmqbbjqUnS/3zJZIhsCz9flN5RRQOesW51mB5S+TnTy406RazM+U4ocz2AGUgGYy
M+t+krpvCzyLxIYi7hmvVieEavhYxsMDrfMYbq6G9jZDMM3mkF7HFMnhZv01kP7g
adShK4DofmDr9Q737e/jiKfbwzHwgcV3S502FjkO7xz3CxdCUQb5YOwmLsc7uuyr
/wT0fULn4sNPKGwN0vPCv18jdtpyX9Y5B7EXMO3HAjXoOk+0RcmWBJ/E/zamI0Ui
DZ1pfR942g==
-----END CERTIFICATE-----
' | sudo tee /etc/docker/ca.pem
err : exit status 1
output : bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 1: syntax error: unexpected end of file
. Retrying.
Error starting the VM: Error creating the VM. Error creating machine: Error running provisioning: ssh command error:
command : printf '%!s(MISSING)' '-----BEGIN CERTIFICATE-----
MIIC0zCCAbugAwIBAgIQCUqwOKKObW9VpW7gUJZjujANBgkqhkiG9w0BAQsFADAT
MREwDwYDVQQKEwhnbXVzZWxsaTAeFw0xOTAxMjgxMzQzMDBaFw0yMjAxMTIxMzQz
MDBaMBMxETAPBgNVBAoTCGdtdXNlbGxpMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA+ApFgu1MgF5DtrdM4iuCHhw4RTvL8hP//6wjPhDElmKGJEr4GVO8
JHR8yyXio+79V4S93mNkaVkjJxINpJGG/ArsSx+EV73BT4v74RQWEUy5X9BTv2px
uxuzNQtJ3y7WoAQTFQmzCid7UlfXsOeOrvglj2aSKfiRr+vrjk/TiaSYgHrF9a1b
11kLUAAXqnR93U8strSfquDwjn75/xGE3fH2ktWS+Z/wmtlkhLahOlw5EcTZqI8m
NN/v+AU1wOJNpYwAwYwhlC2ok6QmrQMogaJtCgdgwkvbEo99BJUN0bzKSU8suBXD
EpMq3sT7/ZB0mPmFrhTH1pZhEQZ1pkcAXQIDAQABoyMwITAOBgNVHQ8BAf8EBAMC
AqwwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEA6oYPZJmpMyAX
5h4vfKbLmaTiuhaQybZZ3WL79PYmeBtl6xbAxC+EyE2twN3VQziLjHDy960pTDGj
zmqbbjqUnS/3zJZIhsCz9flN5RRQOesW51mB5S+TnTy406RazM+U4ocz2AGUgGYy
M+t+krpvCzyLxIYi7hmvVieEavhYxsMDrfMYbq6G9jZDMM3mkF7HFMnhZv01kP7g
adShK4DofmDr9Q737e/jiKfbwzHwgcV3S502FjkO7xz3CxdCUQb5YOwmLsc7uuyr
/wT0fULn4sNPKGwN0vPCv18jdtpyX9Y5B7EXMO3HAjXoOk+0RcmWBJ/E/zamI0Ui
DZ1pfR942g==
-----END CERTIFICATE-----
' | sudo tee /etc/docker/ca.pem
err : exit status 1
output : bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 1: syntax error: unexpected end of file
```
For information I already got this error on previous versions
|
1.0
|
unexpected EOF while looking for matching when running ssh on 1.30.0 - ### General information
* Minishift version: minishift v1.30.0+186b034
* OS: Windows 10
* Hypervisor: VirtualBox
### Steps to reproduce
1. Install minishift on windows 10 machine
2. minishift start --vm-driver virtualbox
### Expected
Install success
### Actual
```
C:\Users\gmuselli>minishift start --vm-driver virtualbox
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.11.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.11.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 4 GB
vCPUs : 2
Disk size: 20 GB
-- Starting Minishift VM ................................. FAIL E0129 10:10:17.819625 24216 start.go:494] Error starting the VM: Error creating the VM. Error creating machine: Error running provisioning: ssh command error:
command : printf '%s' '-----BEGIN CERTIFICATE-----
MIIC0zCCAbugAwIBAgIQCUqwOKKObW9VpW7gUJZjujANBgkqhkiG9w0BAQsFADAT
MREwDwYDVQQKEwhnbXVzZWxsaTAeFw0xOTAxMjgxMzQzMDBaFw0yMjAxMTIxMzQz
MDBaMBMxETAPBgNVBAoTCGdtdXNlbGxpMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA+ApFgu1MgF5DtrdM4iuCHhw4RTvL8hP//6wjPhDElmKGJEr4GVO8
JHR8yyXio+79V4S93mNkaVkjJxINpJGG/ArsSx+EV73BT4v74RQWEUy5X9BTv2px
uxuzNQtJ3y7WoAQTFQmzCid7UlfXsOeOrvglj2aSKfiRr+vrjk/TiaSYgHrF9a1b
11kLUAAXqnR93U8strSfquDwjn75/xGE3fH2ktWS+Z/wmtlkhLahOlw5EcTZqI8m
NN/v+AU1wOJNpYwAwYwhlC2ok6QmrQMogaJtCgdgwkvbEo99BJUN0bzKSU8suBXD
EpMq3sT7/ZB0mPmFrhTH1pZhEQZ1pkcAXQIDAQABoyMwITAOBgNVHQ8BAf8EBAMC
AqwwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEA6oYPZJmpMyAX
5h4vfKbLmaTiuhaQybZZ3WL79PYmeBtl6xbAxC+EyE2twN3VQziLjHDy960pTDGj
zmqbbjqUnS/3zJZIhsCz9flN5RRQOesW51mB5S+TnTy406RazM+U4ocz2AGUgGYy
M+t+krpvCzyLxIYi7hmvVieEavhYxsMDrfMYbq6G9jZDMM3mkF7HFMnhZv01kP7g
adShK4DofmDr9Q737e/jiKfbwzHwgcV3S502FjkO7xz3CxdCUQb5YOwmLsc7uuyr
/wT0fULn4sNPKGwN0vPCv18jdtpyX9Y5B7EXMO3HAjXoOk+0RcmWBJ/E/zamI0Ui
DZ1pfR942g==
-----END CERTIFICATE-----
' | sudo tee /etc/docker/ca.pem
err : exit status 1
output : bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 1: syntax error: unexpected end of file
. Retrying.
Error starting the VM: Error creating the VM. Error creating machine: Error running provisioning: ssh command error:
command : printf '%!s(MISSING)' '-----BEGIN CERTIFICATE-----
MIIC0zCCAbugAwIBAgIQCUqwOKKObW9VpW7gUJZjujANBgkqhkiG9w0BAQsFADAT
MREwDwYDVQQKEwhnbXVzZWxsaTAeFw0xOTAxMjgxMzQzMDBaFw0yMjAxMTIxMzQz
MDBaMBMxETAPBgNVBAoTCGdtdXNlbGxpMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA+ApFgu1MgF5DtrdM4iuCHhw4RTvL8hP//6wjPhDElmKGJEr4GVO8
JHR8yyXio+79V4S93mNkaVkjJxINpJGG/ArsSx+EV73BT4v74RQWEUy5X9BTv2px
uxuzNQtJ3y7WoAQTFQmzCid7UlfXsOeOrvglj2aSKfiRr+vrjk/TiaSYgHrF9a1b
11kLUAAXqnR93U8strSfquDwjn75/xGE3fH2ktWS+Z/wmtlkhLahOlw5EcTZqI8m
NN/v+AU1wOJNpYwAwYwhlC2ok6QmrQMogaJtCgdgwkvbEo99BJUN0bzKSU8suBXD
EpMq3sT7/ZB0mPmFrhTH1pZhEQZ1pkcAXQIDAQABoyMwITAOBgNVHQ8BAf8EBAMC
AqwwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEA6oYPZJmpMyAX
5h4vfKbLmaTiuhaQybZZ3WL79PYmeBtl6xbAxC+EyE2twN3VQziLjHDy960pTDGj
zmqbbjqUnS/3zJZIhsCz9flN5RRQOesW51mB5S+TnTy406RazM+U4ocz2AGUgGYy
M+t+krpvCzyLxIYi7hmvVieEavhYxsMDrfMYbq6G9jZDMM3mkF7HFMnhZv01kP7g
adShK4DofmDr9Q737e/jiKfbwzHwgcV3S502FjkO7xz3CxdCUQb5YOwmLsc7uuyr
/wT0fULn4sNPKGwN0vPCv18jdtpyX9Y5B7EXMO3HAjXoOk+0RcmWBJ/E/zamI0Ui
DZ1pfR942g==
-----END CERTIFICATE-----
' | sudo tee /etc/docker/ca.pem
err : exit status 1
output : bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 1: syntax error: unexpected end of file
C:\Users\gmuselli>minishift version
minishift v1.30.0+186b034
```
### Logs
```
(minishift) Calling .GetSSHHostname
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@127.0.0.1 -o IdentitiesOnly=yes -i C:\Users\gmuselli\.minishift\machines\minishift\id_rsa -p 50357] C:\ProgramData\chocolatey\bin\ssh.exe <nil>}
About to run SSH command:
sudo systemctl -f stop docker
.SSH cmd err, output: <nil>:
(minishift) Calling .GetSSHHostname
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@127.0.0.1 -o IdentitiesOnly=yes -i C:\Users\gmuselli\.minishift\machines\minishift\id_rsa -p 50357] C:\ProgramData\chocolatey\bin\ssh.exe <nil>}
About to run SSH command:
if [ ! -z "$(ip link show docker0)" ]; then sudo ip link delete docker0; fi
SSH cmd err, output: <nil>:
Copying certs to the remote machine...
(minishift) Calling .GetSSHHostname
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@127.0.0.1 -o IdentitiesOnly=yes -i C:\Users\gmuselli\.minishift\machines\minishift\id_rsa -p 50357] C:\ProgramData\chocolatey\bin\ssh.exe <nil>}
About to run SSH command:
printf '%s' '-----BEGIN CERTIFICATE-----
MIIC0zCCAbugAwIBAgIQCUqwOKKObW9VpW7gUJZjujANBgkqhkiG9w0BAQsFADAT
MREwDwYDVQQKEwhnbXVzZWxsaTAeFw0xOTAxMjgxMzQzMDBaFw0yMjAxMTIxMzQz
MDBaMBMxETAPBgNVBAoTCGdtdXNlbGxpMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA+ApFgu1MgF5DtrdM4iuCHhw4RTvL8hP//6wjPhDElmKGJEr4GVO8
JHR8yyXio+79V4S93mNkaVkjJxINpJGG/ArsSx+EV73BT4v74RQWEUy5X9BTv2px
uxuzNQtJ3y7WoAQTFQmzCid7UlfXsOeOrvglj2aSKfiRr+vrjk/TiaSYgHrF9a1b
11kLUAAXqnR93U8strSfquDwjn75/xGE3fH2ktWS+Z/wmtlkhLahOlw5EcTZqI8m
NN/v+AU1wOJNpYwAwYwhlC2ok6QmrQMogaJtCgdgwkvbEo99BJUN0bzKSU8suBXD
EpMq3sT7/ZB0mPmFrhTH1pZhEQZ1pkcAXQIDAQABoyMwITAOBgNVHQ8BAf8EBAMC
AqwwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEA6oYPZJmpMyAX
5h4vfKbLmaTiuhaQybZZ3WL79PYmeBtl6xbAxC+EyE2twN3VQziLjHDy960pTDGj
zmqbbjqUnS/3zJZIhsCz9flN5RRQOesW51mB5S+TnTy406RazM+U4ocz2AGUgGYy
M+t+krpvCzyLxIYi7hmvVieEavhYxsMDrfMYbq6G9jZDMM3mkF7HFMnhZv01kP7g
adShK4DofmDr9Q737e/jiKfbwzHwgcV3S502FjkO7xz3CxdCUQb5YOwmLsc7uuyr
/wT0fULn4sNPKGwN0vPCv18jdtpyX9Y5B7EXMO3HAjXoOk+0RcmWBJ/E/zamI0Ui
DZ1pfR942g==
-----END CERTIFICATE-----
' | sudo tee /etc/docker/ca.pem
SSH cmd err, output: exit status 1: bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 1: syntax error: unexpected end of file
. FAIL E0129 10:27:15.374831 10196 start.go:494] Error starting the VM: Error creating the VM. Error creating machine: Error running provisioning: ssh command error:
command : printf '%s' '-----BEGIN CERTIFICATE-----
MIIC0zCCAbugAwIBAgIQCUqwOKKObW9VpW7gUJZjujANBgkqhkiG9w0BAQsFADAT
MREwDwYDVQQKEwhnbXVzZWxsaTAeFw0xOTAxMjgxMzQzMDBaFw0yMjAxMTIxMzQz
MDBaMBMxETAPBgNVBAoTCGdtdXNlbGxpMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA+ApFgu1MgF5DtrdM4iuCHhw4RTvL8hP//6wjPhDElmKGJEr4GVO8
JHR8yyXio+79V4S93mNkaVkjJxINpJGG/ArsSx+EV73BT4v74RQWEUy5X9BTv2px
uxuzNQtJ3y7WoAQTFQmzCid7UlfXsOeOrvglj2aSKfiRr+vrjk/TiaSYgHrF9a1b
11kLUAAXqnR93U8strSfquDwjn75/xGE3fH2ktWS+Z/wmtlkhLahOlw5EcTZqI8m
NN/v+AU1wOJNpYwAwYwhlC2ok6QmrQMogaJtCgdgwkvbEo99BJUN0bzKSU8suBXD
EpMq3sT7/ZB0mPmFrhTH1pZhEQZ1pkcAXQIDAQABoyMwITAOBgNVHQ8BAf8EBAMC
AqwwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEA6oYPZJmpMyAX
5h4vfKbLmaTiuhaQybZZ3WL79PYmeBtl6xbAxC+EyE2twN3VQziLjHDy960pTDGj
zmqbbjqUnS/3zJZIhsCz9flN5RRQOesW51mB5S+TnTy406RazM+U4ocz2AGUgGYy
M+t+krpvCzyLxIYi7hmvVieEavhYxsMDrfMYbq6G9jZDMM3mkF7HFMnhZv01kP7g
adShK4DofmDr9Q737e/jiKfbwzHwgcV3S502FjkO7xz3CxdCUQb5YOwmLsc7uuyr
/wT0fULn4sNPKGwN0vPCv18jdtpyX9Y5B7EXMO3HAjXoOk+0RcmWBJ/E/zamI0Ui
DZ1pfR942g==
-----END CERTIFICATE-----
' | sudo tee /etc/docker/ca.pem
err : exit status 1
output : bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 1: syntax error: unexpected end of file
. Retrying.
Error starting the VM: Error creating the VM. Error creating machine: Error running provisioning: ssh command error:
command : printf '%!s(MISSING)' '-----BEGIN CERTIFICATE-----
MIIC0zCCAbugAwIBAgIQCUqwOKKObW9VpW7gUJZjujANBgkqhkiG9w0BAQsFADAT
MREwDwYDVQQKEwhnbXVzZWxsaTAeFw0xOTAxMjgxMzQzMDBaFw0yMjAxMTIxMzQz
MDBaMBMxETAPBgNVBAoTCGdtdXNlbGxpMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA+ApFgu1MgF5DtrdM4iuCHhw4RTvL8hP//6wjPhDElmKGJEr4GVO8
JHR8yyXio+79V4S93mNkaVkjJxINpJGG/ArsSx+EV73BT4v74RQWEUy5X9BTv2px
uxuzNQtJ3y7WoAQTFQmzCid7UlfXsOeOrvglj2aSKfiRr+vrjk/TiaSYgHrF9a1b
11kLUAAXqnR93U8strSfquDwjn75/xGE3fH2ktWS+Z/wmtlkhLahOlw5EcTZqI8m
NN/v+AU1wOJNpYwAwYwhlC2ok6QmrQMogaJtCgdgwkvbEo99BJUN0bzKSU8suBXD
EpMq3sT7/ZB0mPmFrhTH1pZhEQZ1pkcAXQIDAQABoyMwITAOBgNVHQ8BAf8EBAMC
AqwwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEA6oYPZJmpMyAX
5h4vfKbLmaTiuhaQybZZ3WL79PYmeBtl6xbAxC+EyE2twN3VQziLjHDy960pTDGj
zmqbbjqUnS/3zJZIhsCz9flN5RRQOesW51mB5S+TnTy406RazM+U4ocz2AGUgGYy
M+t+krpvCzyLxIYi7hmvVieEavhYxsMDrfMYbq6G9jZDMM3mkF7HFMnhZv01kP7g
adShK4DofmDr9Q737e/jiKfbwzHwgcV3S502FjkO7xz3CxdCUQb5YOwmLsc7uuyr
/wT0fULn4sNPKGwN0vPCv18jdtpyX9Y5B7EXMO3HAjXoOk+0RcmWBJ/E/zamI0Ui
DZ1pfR942g==
-----END CERTIFICATE-----
' | sudo tee /etc/docker/ca.pem
err : exit status 1
output : bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 1: syntax error: unexpected end of file
```
For information I already got this error on previous versions
|
non_process
|
unexpected eof while looking for matching when running ssh on general information minishift version minishift os windows hypervisor virtualbox steps to reproduce install minishift on windows machine minishift start vm driver virtualbox expected install success actual c users gmuselli minishift start vm driver virtualbox starting profile minishift check if deprecated options are used ok checking if is reachable ok checking if requested openshift version is valid ok checking if requested openshift version is supported ok checking if requested hypervisor virtualbox is supported on this platform ok checking if virtualbox is installed ok checking the iso url ok checking if provided oc flags are supported ok starting the openshift cluster using virtualbox hypervisor minishift vm will be configured with memory gb vcpus disk size gb starting minishift vm fail start go error starting the vm error creating the vm error creating machine error running provisioning ssh command error command printf s begin certificate miibcgkcaqea arssx vrjk z nn v bauwaweb zmqbbjquns m t e end certificate sudo tee etc docker ca pem err exit status output bash c line unexpected eof while looking for matching bash c line syntax error unexpected end of file retrying error starting the vm error creating the vm error creating machine error running provisioning ssh command error command printf s missing begin certificate miibcgkcaqea arssx vrjk z nn v bauwaweb zmqbbjquns m t e end certificate sudo tee etc docker ca pem err exit status output bash c line unexpected eof while looking for matching bash c line syntax error unexpected end of file c users gmuselli minishift version minishift logs minishift calling getsshhostname minishift calling getsshport minishift calling getsshkeypath minishift calling getsshkeypath minishift calling getsshusername using ssh client type external c programdata chocolatey bin ssh exe about to run ssh command sudo systemctl f stop docker ssh cmd err output minishift
calling getsshhostname minishift calling getsshport minishift calling getsshkeypath minishift calling getsshkeypath minishift calling getsshusername using ssh client type external c programdata chocolatey bin ssh exe about to run ssh command if then sudo ip link delete fi ssh cmd err output copying certs to the remote machine minishift calling getsshhostname minishift calling getsshport minishift calling getsshkeypath minishift calling getsshkeypath minishift calling getsshusername using ssh client type external c programdata chocolatey bin ssh exe about to run ssh command printf s begin certificate miibcgkcaqea arssx vrjk z nn v bauwaweb zmqbbjquns m t e end certificate sudo tee etc docker ca pem ssh cmd err output exit status bash c line unexpected eof while looking for matching bash c line syntax error unexpected end of file fail start go error starting the vm error creating the vm error creating machine error running provisioning ssh command error command printf s begin certificate miibcgkcaqea arssx vrjk z nn v bauwaweb zmqbbjquns m t e end certificate sudo tee etc docker ca pem err exit status output bash c line unexpected eof while looking for matching bash c line syntax error unexpected end of file retrying error starting the vm error creating the vm error creating machine error running provisioning ssh command error command printf s missing begin certificate miibcgkcaqea arssx vrjk z nn v bauwaweb zmqbbjquns m t e end certificate sudo tee etc docker ca pem err exit status output bash c line unexpected eof while looking for matching bash c line syntax error unexpected end of file for information i already got this error on previous versions
| 0
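Aside on the minishift record above: the classic cause of bash's ``unexpected EOF while looking for matching `'' `` is a quoting layer (here, a remote `bash -c` reached through Windows `ssh.exe`) that mismatches or strips hand-written single quotes around a payload. One robust pattern is to build the remote command programmatically and quote the payload with `shlex.quote`, which is safe even when the payload itself contains quotes. This is a hedged sketch, not how docker-machine actually constructs its command; the payload and paths are illustrative:

```python
import shlex

# Illustrative payload; note the embedded single quote, which would break
# naive hand-wrapping in '...' at the remote bash -c layer.
payload = "-----BEGIN CERTIFICATE-----\nit's truncated here\n-----END CERTIFICATE-----"

# Naive: concatenating raw single quotes around the payload. The embedded
# quote terminates the string early -> "unexpected EOF while looking for matching `'".
naive = "printf '%s' '" + payload + "' | sudo tee /etc/docker/ca.pem"

# Robust: shlex.quote picks quoting that survives any POSIX shell,
# rewriting each embedded ' as '"'"'.
safe = "printf '%s' " + shlex.quote(payload) + " | sudo tee /etc/docker/ca.pem"
```

The same idea applies in any language: never splice untrusted or multi-line payloads into a shell string with literal quote characters; delegate the quoting to a library that knows the shell's rules.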
|
487,248
| 14,021,068,559
|
IssuesEvent
|
2020-10-29 20:40:40
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Desktop mode can't be disabled on nightly
|
OS/Android QA/Yes bug feature/settings priority/P2 regression release/blocking
|
<!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description <!-- Provide a brief description of the issue -->
Desktop mode can't be disabled on nightly
## Steps to reproduce <!-- Please add a series of steps to reproduce the issue -->
1. Install 1.17.1 on tablet
2. Open a page and open menu, desktop mode is enabled
3. Try to uncheck the setting, doesn't work
4. Go to site settings, desktop mode is disabled (default value)
## Actual result <!-- Please add screenshots if needed -->
https://youtu.be/xZo1coXT1RI
## Expected result
Should not enable desktop mode by default and should be able to disable it if enabled manually
## Issue reproduces how often <!-- [Easily reproduced/Intermittent issue/No steps to reproduce] -->
Easy
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current Play Store version? No
- Can you reproduce this issue with the current Play Store Beta version? No
- Can you reproduce this issue with the current Play Store Nightly version? Yes
## Device details
- Install type (ARM, x86): ARM
- Device type (Phone, Tablet, Phablet): All
- Android version: 10
## Brave version
1.17.1
### Website problems only
- Does the issue resolve itself when disabling Brave Shields? NA
- Does the issue resolve itself when disabling Brave Rewards? NA
- Is the issue reproducible on the latest version of Chrome? NA
### Additional information
<!-- Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue -->
Note: On mobile, clicking Desktop mode doesn't select the option but page reloads to desktop mode. Clicking the option again doesn't revert back to mobile view
|
1.0
|
Desktop mode can't be disabled on nightly - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description <!-- Provide a brief description of the issue -->
Desktop mode can't be disabled on nightly
## Steps to reproduce <!-- Please add a series of steps to reproduce the issue -->
1. Install 1.17.1 on tablet
2. Open a page and open menu, desktop mode is enabled
3. Try to uncheck the setting, doesn't work
4. Go to site settings, desktop mode is disabled (default value)
## Actual result <!-- Please add screenshots if needed -->
https://youtu.be/xZo1coXT1RI
## Expected result
Should not enable desktop mode by default and should be able to disable it if enabled manually
## Issue reproduces how often <!-- [Easily reproduced/Intermittent issue/No steps to reproduce] -->
Easy
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current Play Store version? No
- Can you reproduce this issue with the current Play Store Beta version? No
- Can you reproduce this issue with the current Play Store Nightly version? Yes
## Device details
- Install type (ARM, x86): ARM
- Device type (Phone, Tablet, Phablet): All
- Android version: 10
## Brave version
1.17.1
### Website problems only
- Does the issue resolve itself when disabling Brave Shields? NA
- Does the issue resolve itself when disabling Brave Rewards? NA
- Is the issue reproducible on the latest version of Chrome? NA
### Additional information
<!-- Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue -->
Note: On mobile, clicking Desktop mode doesn't select the option but page reloads to desktop mode. Clicking the option again doesn't revert back to mobile view
|
non_process
|
desktop mode can t be disabled on nightly have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description desktop mode can t be disabled on nightly steps to reproduce install on tablet open a page and open menu desktop mode is enabled try to uncheck the setting doesn t work go to site settings desktop mode is disabled default value actual result expected result should not enable desktop mode by default and should be able to disable it if enabled manually issue reproduces how often easy version channel information can you reproduce this issue with the current play store version no can you reproduce this issue with the current play store beta version no can you reproduce this issue with the current play store nightly version yes device details install type arm arm device type phone tablet phablet all android version brave version website problems only does the issue resolve itself when disabling brave shields na does the issue resolve itself when disabling brave rewards na is the issue reproducible on the latest version of chrome na additional information note on mobile clicking desktop mode doesn t select the option but page reloads to desktop mode clicking the option again doesn t revert back to mobile view
| 0
|
88,422
| 10,571,466,971
|
IssuesEvent
|
2019-10-07 07:13:03
|
SenseNet/sensenet
|
https://api.github.com/repos/SenseNet/sensenet
|
closed
|
Port documentation from wiki: .Net API
|
documentation hacktoberfest
|
Port old articles from the wiki and refresh them for sensenet 7. The new article should be an .md file in the sensenet repository's _/docs_ folder. Old screenshots should be remade on the new [admin-ui](https://admin.sensenet.com) if possible and placed into the _/docs/images_ folder.
- [ ] [.Net API](http://wiki.sensenet.com/Most_common_API_calls)
|
1.0
|
Port documentation from wiki: .Net API - Port old articles from the wiki and refresh them for sensenet 7. The new article should be an .md file in the sensenet repository's _/docs_ folder. Old screenshots should be remade on the new [admin-ui](https://admin.sensenet.com) if possible and placed into the _/docs/images_ folder.
- [ ] [.Net API](http://wiki.sensenet.com/Most_common_API_calls)
|
non_process
|
port documentation from wiki net api port old articles from wiki and refresh them for sensenet the new article should be an md file in the sensenet repository s docs folder old screenshots should be remaked on the new if it is possible and placed into the docs images folder
| 0
|
242,926
| 26,277,870,783
|
IssuesEvent
|
2023-01-07 01:22:37
|
murthy1979/hackazon
|
https://api.github.com/repos/murthy1979/hackazon
|
closed
|
CVE-2018-11386 (Medium) detected in symfony/http-foundation-v2.6.1 - autoclosed
|
security vulnerability
|
## CVE-2018-11386 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>symfony/http-foundation-v2.6.1</b></p></summary>
<p>Symfony HttpFoundation Component</p>
<p>Library home page: <a href="https://api.github.com/repos/symfony/http-foundation/zipball/0109221f3cf012bf027768ad3e4236dae1af5332">https://api.github.com/repos/symfony/http-foundation/zipball/0109221f3cf012bf027768ad3e4236dae1af5332</a></p>
<p>
Dependency Hierarchy:
- symfony/form-v2.5.7 (Root Library)
- :x: **symfony/http-foundation-v2.6.1** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/murthy1979/hackazon/commit/7a5c1fb6205b5dacb816770c95cda9299805eb02">7a5c1fb6205b5dacb816770c95cda9299805eb02</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the HttpFoundation component in Symfony 2.7.x before 2.7.48, 2.8.x before 2.8.41, 3.3.x before 3.3.17, 3.4.x before 3.4.11, and 4.0.x before 4.0.11. The PDOSessionHandler class allows storing sessions on a PDO connection. Under some configurations and with a well-crafted payload, it was possible to do a denial of service on a Symfony application without too many resources.
<p>Publish Date: 2018-06-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-11386>CVE-2018-11386</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-11386">https://nvd.nist.gov/vuln/detail/CVE-2018-11386</a></p>
<p>Release Date: 2018-06-13</p>
<p>Fix Resolution: 2.7.48,2.8.41,3.3.17,3.4.11,4.0.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-11386 (Medium) detected in symfony/http-foundation-v2.6.1 - autoclosed - ## CVE-2018-11386 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>symfony/http-foundation-v2.6.1</b></p></summary>
<p>Symfony HttpFoundation Component</p>
<p>Library home page: <a href="https://api.github.com/repos/symfony/http-foundation/zipball/0109221f3cf012bf027768ad3e4236dae1af5332">https://api.github.com/repos/symfony/http-foundation/zipball/0109221f3cf012bf027768ad3e4236dae1af5332</a></p>
<p>
Dependency Hierarchy:
- symfony/form-v2.5.7 (Root Library)
- :x: **symfony/http-foundation-v2.6.1** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/murthy1979/hackazon/commit/7a5c1fb6205b5dacb816770c95cda9299805eb02">7a5c1fb6205b5dacb816770c95cda9299805eb02</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the HttpFoundation component in Symfony 2.7.x before 2.7.48, 2.8.x before 2.8.41, 3.3.x before 3.3.17, 3.4.x before 3.4.11, and 4.0.x before 4.0.11. The PDOSessionHandler class allows storing sessions on a PDO connection. Under some configurations and with a well-crafted payload, it was possible to do a denial of service on a Symfony application without too many resources.
<p>Publish Date: 2018-06-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-11386>CVE-2018-11386</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-11386">https://nvd.nist.gov/vuln/detail/CVE-2018-11386</a></p>
<p>Release Date: 2018-06-13</p>
<p>Fix Resolution: 2.7.48,2.8.41,3.3.17,3.4.11,4.0.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in symfony http foundation autoclosed cve medium severity vulnerability vulnerable library symfony http foundation symfony httpfoundation component library home page a href dependency hierarchy symfony form root library x symfony http foundation vulnerable library found in head commit a href found in base branch master vulnerability details an issue was discovered in the httpfoundation component in symfony x before x before x before x before and x before the pdosessionhandler class allows storing sessions on a pdo connection under some configurations and with a well crafted payload it was possible to do a denial of service on a symfony application without too much resources publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
13,498
| 16,024,457,907
|
IssuesEvent
|
2021-04-21 07:16:17
|
JitenPalaparthi/readyGo
|
https://api.github.com/repos/JitenPalaparthi/readyGo
|
opened
|
Implement proper GitHub branching strategy
|
continuous process enhancement feature:v0.1.x feature:v0.2.X feature:v0.3.0 feature:v0.4.0 improvement
|
So far, since I am the only developer, I push to master directly, or write feature branches and then merge them into master. This has to change.
There should be a proper branching strategy.
There should be a proper release strategy.
|
1.0
|
Implement proper GitHub branching strategy - So far, since I am the only developer, I push to master directly, or write feature branches and then merge them into master. This has to change.
There should be a proper branching strategy.
There should be a proper release strategy.
|
process
|
implement proper github branching strategy so far since i am only the developer i push to master or for the purpose i write feature branches and then push to master it has to be changed there should be a proper branching strategy there should be a proper release strategy
| 1
|
8,667
| 11,802,526,595
|
IssuesEvent
|
2020-03-18 21:45:26
|
home-assistant/core
|
https://api.github.com/repos/home-assistant/core
|
closed
|
ImportError: cannot import name 'UnidentifiedImageError' from 'PIL' after upgrade to 0.107.0
|
integration: image_processing integration: tensorflow
|
<!-- READ THIS FIRST:
- If you need additional help with this template, please refer to https://www.home-assistant.io/help/reporting_issues/
- Make sure you are running the latest version of Home Assistant before reporting an issue: https://github.com/home-assistant/home-assistant/releases
- Do not report issues for integrations if you are using custom components or integrations.
- Provide as many details as possible. Paste logs, configuration samples and code into the backticks.
DO NOT DELETE ANY TEXT from this template! Otherwise, your issue may be closed without comment.
-->
## The problem
<!--
Describe the issue you are experiencing here to communicate to the
maintainers. Tell us what you were trying to do and what happened instead.
-->
Seeing image_processing errors in error log since upgrade to 107.0:
`ImportError: cannot import name 'UnidentifiedImageError' from 'PIL'`
## Environment
<!--
Provide details about the versions you are using, which helps us to reproduce
and find the issue quicker. Version information is found in the
Home Assistant frontend: Developer tools -> Info.
-->
arch | x86_64
-- | --
dev | false
docker | false
hassio | false
os_name | Linux
os_version | 5.3.0-40-generic
python_version | 3.7.5
timezone | America/Chicago
version | 0.107.0
virtualenv | true
- Home Assistant release with the issue: 0.107.0
- Last working Home Assistant release (if known): 0.104
- Operating environment (Hass.io/Docker/Windows/etc.): Ubuntu
- Integration causing this issue: image_processing
- Link to integration documentation on our website: https://www.home-assistant.io/integrations/image_processing/
## Problem-relevant `configuration.yaml`
<!--
An example configuration that caused the problem for you. Fill this out even
if it seems unimportant to you. Please be sure to remove personal information
like passwords, private URLs and other credentials.
-->
```yaml
- platform: tensorflow
confidence: 95
scan_interval: 5
source:
- entity_id: camera.carport_static
name: tensorflow.carport
file_out:
- "/srv/homeassistant/.homeassistant/www/carport_latest.jpg"
model:
graph: /srv/homeassistant/.homeassistant/tensorflow/current_model/frozen_inference_graph.pb
categories:
- person
- dog
- cat
- platform: tensorflow
confidence: 95
scan_interval: 5
source:
- entity_id: camera.master_bath_door_static
name: tensorflow.master_bath_door
file_out:
- "/srv/homeassistant/.homeassistant/www/master_bath_door_latest.jpg"
model:
graph: /srv/homeassistant/.homeassistant/tensorflow/current_model/frozen_inference_graph.pb
categories:
- person
- platform: tensorflow
confidence: 95
scan_interval: 5
source:
- entity_id: camera.drive_gate_static
name: tensorflow.drive_gate
file_out:
- "/srv/homeassistant/.homeassistant/www/drive_gate_latest.jpg"
model:
graph: /srv/homeassistant/.homeassistant/tensorflow/current_model/frozen_inference_graph.pb
area:
top: 0.1
categories:
- person
- car
- truck
- motorcycle
- dog
- cat
- platform: tensorflow
confidence: 90
scan_interval: 3
source:
- entity_id: camera.front_door_static
name: tensorflow.front_door
file_out:
- "/srv/homeassistant/.homeassistant/www/front_door_latest.jpg"
model:
graph: /srv/homeassistant/.homeassistant/tensorflow/current_model/frozen_inference_graph.pb
categories:
- person
- dog
- platform: tensorflow
confidence: 95
scan_interval: 5
source:
- entity_id: camera.back_yard_static
name: tensorflow.back_yard
file_out:
- "/srv/homeassistant/.homeassistant/www/back_yard_latest.jpg"
model:
graph: /srv/homeassistant/.homeassistant/tensorflow/current_model/frozen_inference_graph.pb
categories:
- person
- dog
- platform: tensorflow
confidence: 95
scan_interval: 4
source:
- entity_id: camera.front_yard_static
name: tensorflow.front_yard
file_out:
- "/srv/homeassistant/.homeassistant/www/front_yard_latest.jpg"
model:
graph: /srv/homeassistant/.homeassistant/tensorflow/current_model/frozen_inference_graph.pb
categories:
- person
```
## Traceback/Error logs
<!--
If you come across any trace or error logs, please provide them.
-->
```txt
Mar 18 16:14:19 ha hass[3014]: 2020-03-18 16:14:19 ERROR (MainThread) [homeassistant.config] Platform error: image_processing
Mar 18 16:14:19 ha hass[3014]: Traceback (most recent call last):
Mar 18 16:14:19 ha hass[3014]: File "/srv/homeassistant/venv/lib/python3.7/site-packages/homeassistant/config.py", line 752, in async_process_component_config
Mar 18 16:14:19 ha hass[3014]: platform = p_integration.get_platform(domain)
Mar 18 16:14:19 ha hass[3014]: File "/srv/homeassistant/venv/lib/python3.7/site-packages/homeassistant/loader.py", line 277, in get_platform
Mar 18 16:14:19 ha hass[3014]: f"{self.pkg_path}.{platform_name}"
Mar 18 16:14:19 ha hass[3014]: File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
Mar 18 16:14:19 ha hass[3014]: return _bootstrap._gcd_import(name[level:], package, level)
Mar 18 16:14:19 ha hass[3014]: File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
Mar 18 16:14:19 ha hass[3014]: File "<frozen importlib._bootstrap>", line 983, in _find_and_load
Mar 18 16:14:19 ha hass[3014]: File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
Mar 18 16:14:19 ha hass[3014]: File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
Mar 18 16:14:19 ha hass[3014]: File "<frozen importlib._bootstrap_external>", line 728, in exec_module
Mar 18 16:14:19 ha hass[3014]: File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
Mar 18 16:14:19 ha hass[3014]: File "/srv/homeassistant/venv/lib/python3.7/site-packages/homeassistant/components/tensorflow/image_processing.py", line 7, in <module>
Mar 18 16:14:19 ha hass[3014]: from PIL import Image, ImageDraw, UnidentifiedImageError
Mar 18 16:14:19 ha hass[3014]: ImportError: cannot import name 'UnidentifiedImageError' from 'PIL' (/srv/homeassistant/venv/lib/python3.7/site-packages/PIL/__init__.py)
```
## Additional information
I did verify that Pillow v7.0.0 was loaded
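Since Pillow 7.0.0 is reported as loaded but the attribute is still missing, a stale `PIL` package left in site-packages may be shadowing it: `UnidentifiedImageError` only exists from Pillow 7.0.0 onward. A defensive sketch (not from the integration's source, just an illustration of the version boundary) that imports it on both new and stale installs:

```python
# UnidentifiedImageError was added in Pillow 7.0.0; on older or leftover
# PIL installs the name does not exist and the import raises ImportError.
try:
    from PIL import UnidentifiedImageError
except ImportError:
    # Before 7.0.0 Pillow raised a plain OSError for unreadable images,
    # so aliasing to OSError keeps downstream `except` clauses working.
    UnidentifiedImageError = OSError
```

The real remedy is a clean virtualenv with Pillow >= 7.0.0 and no leftover older `PIL` distribution; the guard only illustrates why the traceback can appear even when 7.0.0 seems installed.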
|
1.0
|
ImportError: cannot import name 'UnidentifiedImageError' from 'PIL' after upgrade to 0.107.0 - <!-- READ THIS FIRST:
- If you need additional help with this template, please refer to https://www.home-assistant.io/help/reporting_issues/
- Make sure you are running the latest version of Home Assistant before reporting an issue: https://github.com/home-assistant/home-assistant/releases
- Do not report issues for integrations if you are using custom components or integrations.
- Provide as many details as possible. Paste logs, configuration samples and code into the backticks.
DO NOT DELETE ANY TEXT from this template! Otherwise, your issue may be closed without comment.
-->
## The problem
<!--
Describe the issue you are experiencing here to communicate to the
maintainers. Tell us what you were trying to do and what happened instead.
-->
Seeing image_processing errors in error log since upgrade to 107.0:
`ImportError: cannot import name 'UnidentifiedImageError' from 'PIL'`
## Environment
<!--
Provide details about the versions you are using, which helps us to reproduce
and find the issue quicker. Version information is found in the
Home Assistant frontend: Developer tools -> Info.
-->
arch | x86_64
-- | --
dev | false
docker | false
hassio | false
os_name | Linux
os_version | 5.3.0-40-generic
python_version | 3.7.5
timezone | America/Chicago
version | 0.107.0
virtualenv | true
- Home Assistant release with the issue: 0.107.0
- Last working Home Assistant release (if known): 0.104
- Operating environment (Hass.io/Docker/Windows/etc.): Ubuntu
- Integration causing this issue: image_processing
- Link to integration documentation on our website: https://www.home-assistant.io/integrations/image_processing/
## Problem-relevant `configuration.yaml`
<!--
An example configuration that caused the problem for you. Fill this out even
if it seems unimportant to you. Please be sure to remove personal information
like passwords, private URLs and other credentials.
-->
```yaml
- platform: tensorflow
confidence: 95
scan_interval: 5
source:
- entity_id: camera.carport_static
name: tensorflow.carport
file_out:
- "/srv/homeassistant/.homeassistant/www/carport_latest.jpg"
model:
graph: /srv/homeassistant/.homeassistant/tensorflow/current_model/frozen_inference_graph.pb
categories:
- person
- dog
- cat
- platform: tensorflow
confidence: 95
scan_interval: 5
source:
- entity_id: camera.master_bath_door_static
name: tensorflow.master_bath_door
file_out:
- "/srv/homeassistant/.homeassistant/www/master_bath_door_latest.jpg"
model:
graph: /srv/homeassistant/.homeassistant/tensorflow/current_model/frozen_inference_graph.pb
categories:
- person
- platform: tensorflow
confidence: 95
scan_interval: 5
source:
- entity_id: camera.drive_gate_static
name: tensorflow.drive_gate
file_out:
- "/srv/homeassistant/.homeassistant/www/drive_gate_latest.jpg"
model:
graph: /srv/homeassistant/.homeassistant/tensorflow/current_model/frozen_inference_graph.pb
area:
top: 0.1
categories:
- person
- car
- truck
- motorcycle
- dog
- cat
- platform: tensorflow
confidence: 90
scan_interval: 3
source:
- entity_id: camera.front_door_static
name: tensorflow.front_door
file_out:
- "/srv/homeassistant/.homeassistant/www/front_door_latest.jpg"
model:
graph: /srv/homeassistant/.homeassistant/tensorflow/current_model/frozen_inference_graph.pb
categories:
- person
- dog
- platform: tensorflow
confidence: 95
scan_interval: 5
source:
- entity_id: camera.back_yard_static
name: tensorflow.back_yard
file_out:
- "/srv/homeassistant/.homeassistant/www/back_yard_latest.jpg"
model:
graph: /srv/homeassistant/.homeassistant/tensorflow/current_model/frozen_inference_graph.pb
categories:
- person
- dog
- platform: tensorflow
confidence: 95
scan_interval: 4
source:
- entity_id: camera.front_yard_static
name: tensorflow.front_yard
file_out:
- "/srv/homeassistant/.homeassistant/www/front_yard_latest.jpg"
model:
graph: /srv/homeassistant/.homeassistant/tensorflow/current_model/frozen_inference_graph.pb
categories:
- person
```
## Traceback/Error logs
<!--
If you come across any trace or error logs, please provide them.
-->
```txt
Mar 18 16:14:19 ha hass[3014]: 2020-03-18 16:14:19 ERROR (MainThread) [homeassistant.config] Platform error: image_processing
Mar 18 16:14:19 ha hass[3014]: Traceback (most recent call last):
Mar 18 16:14:19 ha hass[3014]: File "/srv/homeassistant/venv/lib/python3.7/site-packages/homeassistant/config.py", line 752, in async_process_component_config
Mar 18 16:14:19 ha hass[3014]: platform = p_integration.get_platform(domain)
Mar 18 16:14:19 ha hass[3014]: File "/srv/homeassistant/venv/lib/python3.7/site-packages/homeassistant/loader.py", line 277, in get_platform
Mar 18 16:14:19 ha hass[3014]: f"{self.pkg_path}.{platform_name}"
Mar 18 16:14:19 ha hass[3014]: File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
Mar 18 16:14:19 ha hass[3014]: return _bootstrap._gcd_import(name[level:], package, level)
Mar 18 16:14:19 ha hass[3014]: File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
Mar 18 16:14:19 ha hass[3014]: File "<frozen importlib._bootstrap>", line 983, in _find_and_load
Mar 18 16:14:19 ha hass[3014]: File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
Mar 18 16:14:19 ha hass[3014]: File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
Mar 18 16:14:19 ha hass[3014]: File "<frozen importlib._bootstrap_external>", line 728, in exec_module
Mar 18 16:14:19 ha hass[3014]: File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
Mar 18 16:14:19 ha hass[3014]: File "/srv/homeassistant/venv/lib/python3.7/site-packages/homeassistant/components/tensorflow/image_processing.py", line 7, in <module>
Mar 18 16:14:19 ha hass[3014]: from PIL import Image, ImageDraw, UnidentifiedImageError
Mar 18 16:14:19 ha hass[3014]: ImportError: cannot import name 'UnidentifiedImageError' from 'PIL' (/srv/homeassistant/venv/lib/python3.7/site-packages/PIL/__init__.py)
```
## Additional information
I did verify that Pillow v7.0.0 was loaded
|
process
|
importerror cannot import name unidentifiedimageerror from pil after upgrade to read this first if you need additional help with this template please refer to make sure you are running the latest version of home assistant before reporting an issue do not report issues for integrations if you are using custom components or integrations provide as many details as possible paste logs configuration samples and code into the backticks do not delete any text from this template otherwise your issue may be closed without comment the problem describe the issue you are experiencing here to communicate to the maintainers tell us what you were trying to do and what happened instead seeing image processing errors in error log since upgrade to importerror cannot import name unidentifiedimageerror from pil environment provide details about the versions you are using which helps us to reproduce and find the issue quicker version information is found in the home assistant frontend developer tools info arch dev false docker false hassio false os name linux os version generic python version timezone america chicago version virtualenv true home assistant release with the issue last working home assistant release if known operating environment hass io docker windows etc ubuntu integration causing this issue image processing link to integration documentation on our website problem relevant configuration yaml an example configuration that caused the problem for you fill this out even if it seems unimportant to you please be sure to remove personal information like passwords private urls and other credentials yaml platform tensorflow confidence scan interval source entity id camera carport static name tensorflow carport file out srv homeassistant homeassistant www carport latest jpg model graph srv homeassistant homeassistant tensorflow current model frozen inference graph pb categories person dog cat platform tensorflow confidence scan interval source entity id camera master bath door 
static name tensorflow master bath door file out srv homeassistant homeassistant www master bath door latest jpg model graph srv homeassistant homeassistant tensorflow current model frozen inference graph pb categories person platform tensorflow confidence scan interval source entity id camera drive gate static name tensorflow drive gate file out srv homeassistant homeassistant www drive gate latest jpg model graph srv homeassistant homeassistant tensorflow current model frozen inference graph pb area top categories person car truck motorcycle dog cat platform tensorflow confidence scan interval source entity id camera front door static name tensorflow front door file out srv homeassistant homeassistant www front door latest jpg model graph srv homeassistant homeassistant tensorflow current model frozen inference graph pb categories person dog platform tensorflow confidence scan interval source entity id camera back yard static name tensorflow back yard file out srv homeassistant homeassistant www back yard latest jpg model graph srv homeassistant homeassistant tensorflow current model frozen inference graph pb categories person dog platform tensorflow confidence scan interval source entity id camera front yard static name tensorflow front yard file out srv homeassistant homeassistant www front yard latest jpg model graph srv homeassistant homeassistant tensorflow current model frozen inference graph pb categories person traceback error logs if you come across any trace or error logs please provide them txt mar ha hass error mainthread platform error image processing mar ha hass traceback most recent call last mar ha hass file srv homeassistant venv lib site packages homeassistant config py line in async process component config mar ha hass platform p integration get platform domain mar ha hass file srv homeassistant venv lib site packages homeassistant loader py line in get platform mar ha hass f self pkg path platform name mar ha hass file usr lib importlib init 
py line in import module mar ha hass return bootstrap gcd import name package level mar ha hass file line in gcd import mar ha hass file line in find and load mar ha hass file line in find and load unlocked mar ha hass file line in load unlocked mar ha hass file line in exec module mar ha hass file line in call with frames removed mar ha hass file srv homeassistant venv lib site packages homeassistant components tensorflow image processing py line in mar ha hass from pil import image imagedraw unidentifiedimageerror mar ha hass importerror cannot import name unidentifiedimageerror from pil srv homeassistant venv lib site packages pil init py additional information i did verify that pillow was loaded
| 1
|
15,836
| 20,022,585,878
|
IssuesEvent
|
2022-02-01 17:45:36
|
EKGF/ekg-mm
|
https://api.github.com/repos/EKGF/ekg-mm
|
closed
|
Update the doc to explain the use of LaTeX.
|
ekg-mm-process
|
"LaTeX, which is pronounced «Lah-tech» or «Lay-tech» (to rhyme with «blech» or «Bertolt Brecht»), is a document preparation system for high-quality typesetting. It is most often used for medium-to-large technical or scientific documents but it can be used for almost any form of publishing.
LaTeX is not a word processor! Instead, LaTeX encourages authors not to worry too much about the appearance of their documents but to concentrate on getting the right content. For example, consider this document:" https://www.latex-project.org/about/ EKGF uses LaTeX as part of its automated publishing process.
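The quoted page cuts off before its example document. As an illustration only (the title and body text below are placeholders, not the exact document from the linked page), a minimal LaTeX source looks like:

```latex
\documentclass{article}
\title{A Minimal Example} % placeholder title, not from the quoted page
\begin{document}
\maketitle
The author writes structured source like this and leaves the
typesetting to the engine, which is what makes LaTeX a good fit
for an automated publishing pipeline.
\end{document}
```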
|
1.0
|
Update the doc to explain the use of LaTeX. - "LaTeX, which is pronounced «Lah-tech» or «Lay-tech» (to rhyme with «blech» or «Bertolt Brecht»), is a document preparation system for high-quality typesetting. It is most often used for medium-to-large technical or scientific documents but it can be used for almost any form of publishing.
LaTeX is not a word processor! Instead, LaTeX encourages authors not to worry too much about the appearance of their documents but to concentrate on getting the right content. For example, consider this document:" https://www.latex-project.org/about/ EKGF uses LaTeX as part of its automated publishing process.
|
process
|
update the doc to explain the use of latex latex which is pronounced «lah tech» or «lay tech» to rhyme with «blech» or «bertolt brecht» is a document preparation system for high quality typesetting it is most often used for medium to large technical or scientific documents but it can be used for almost any form of publishing latex is not a word processor instead latex encourages authors not to worry too much about the appearance of their documents but to concentrate on getting the right content for example consider this document ekgf uses latex as part of its automated publishing process
| 1
|
182,772
| 21,676,489,569
|
IssuesEvent
|
2022-05-08 19:55:17
|
yoswein/spring-security
|
https://api.github.com/repos/yoswein/spring-security
|
opened
|
rsocket-core-1.1.1.jar: 1 vulnerability (highest severity is: 5.5)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rsocket-core-1.1.1.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /test/spring-security-test.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/io.netty/netty-common/4.1.64.Final/ac71ac92f9181516ce889880501e0ccbde319edc/netty-common-4.1.64.Final.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/io.netty/netty-common/4.1.64.Final/ac71ac92f9181516ce889880501e0ccbde319edc/netty-common-4.1.64.Final.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/yoswein/spring-security/commit/3fcb26b5887fc226b006a838ede289b6cae3e3c7">3fcb26b5887fc226b006a838ede289b6cae3e3c7</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-24823](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24823) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | netty-common-4.1.64.Final.jar | Transitive | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-24823</summary>
### Vulnerable Library - <b>netty-common-4.1.64.Final.jar</b></p>
<p></p>
<p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p>
<p>Path to dependency file: /config/spring-security-config.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/io.netty/netty-common/4.1.64.Final/ac71ac92f9181516ce889880501e0ccbde319edc/netty-common-4.1.64.Final.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/io.netty/netty-common/4.1.64.Final/ac71ac92f9181516ce889880501e0ccbde319edc/netty-common-4.1.64.Final.jar</p>
<p>
Dependency Hierarchy:
- rsocket-core-1.1.1.jar (Root Library)
- netty-buffer-4.1.64.Final.jar
- :x: **netty-common-4.1.64.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/yoswein/spring-security/commit/3fcb26b5887fc226b006a838ede289b6cae3e3c7">3fcb26b5887fc226b006a838ede289b6cae3e3c7</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Netty is an open-source, asynchronous event-driven network application framework. The package `io.netty:netty-codec-http` prior to version 4.1.77.Final contains an insufficient fix for CVE-2021-21290. When Netty's multipart decoders are used, local information disclosure can occur via the local system temporary directory if temporarily storing uploads on the disk is enabled. This only impacts applications running on Java version 6 and lower. Additionally, this vulnerability impacts code running on Unix-like systems, and very old versions of Mac OSX and Windows as they all share the system temporary directory between all users. Version 4.1.77.Final contains a patch for this vulnerability. As a workaround, specify one's own `java.io.tmpdir` when starting the JVM or use DefaultHttpDataFactory.setBaseDir(...) to set the directory to something that is only readable by the current user.
<p>Publish Date: 2022-05-06
<p>URL: <a href="https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24823">CVE-2022-24823</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24823">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24823</a></p>
<p>Release Date: 2022-05-06</p>
<p>Fix Resolution: io.netty:netty-all;io.netty:netty-common - 4.1.77.Final</p>
</p>
<p></p>
</details>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-common","packageVersion":"4.1.64.Final","packageFilePaths":["/config/spring-security-config.gradle"],"isTransitiveDependency":true,"dependencyTree":"io.rsocket:rsocket-core:1.1.1;io.netty:netty-buffer:4.1.64.Final;io.netty:netty-common:4.1.64.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-all;io.netty:netty-common - 4.1.77.Final","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2022-24823","vulnerabilityDetails":"Netty is an open-source, asynchronous event-driven network application framework. The package `io.netty:netty-codec-http` prior to version 4.1.77.Final contains an insufficient fix for CVE-2021-21290. When Netty\u0027s multipart decoders are used local information disclosure can occur via the local system temporary directory if temporary storing uploads on the disk is enabled. This only impacts applications running on Java version 6 and lower. Additionally, this vulnerability impacts code running on Unix-like systems, and very old versions of Mac OSX and Windows as they all share the system temporary directory between all users. Version 4.1.77.Final contains a patch for this vulnerability. As a workaround, specify one\u0027s own `java.io.tmpdir` when starting the JVM or use DefaultHttpDataFactory.setBaseDir(...) to set the directory to something that is only readable by the current user.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24823","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"None"},"extraData":{}}]</REMEDIATE> -->
|
True
|
rsocket-core-1.1.1.jar: 1 vulnerabilities (highest severity is: 5.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rsocket-core-1.1.1.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /test/spring-security-test.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/io.netty/netty-common/4.1.64.Final/ac71ac92f9181516ce889880501e0ccbde319edc/netty-common-4.1.64.Final.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/io.netty/netty-common/4.1.64.Final/ac71ac92f9181516ce889880501e0ccbde319edc/netty-common-4.1.64.Final.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/yoswein/spring-security/commit/3fcb26b5887fc226b006a838ede289b6cae3e3c7">3fcb26b5887fc226b006a838ede289b6cae3e3c7</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-24823](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24823) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | netty-common-4.1.64.Final.jar | Transitive | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-24823</summary>
### Vulnerable Library - <b>netty-common-4.1.64.Final.jar</b></p>
<p></p>
<p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p>
<p>Path to dependency file: /config/spring-security-config.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/io.netty/netty-common/4.1.64.Final/ac71ac92f9181516ce889880501e0ccbde319edc/netty-common-4.1.64.Final.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/io.netty/netty-common/4.1.64.Final/ac71ac92f9181516ce889880501e0ccbde319edc/netty-common-4.1.64.Final.jar</p>
<p>
Dependency Hierarchy:
- rsocket-core-1.1.1.jar (Root Library)
- netty-buffer-4.1.64.Final.jar
- :x: **netty-common-4.1.64.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/yoswein/spring-security/commit/3fcb26b5887fc226b006a838ede289b6cae3e3c7">3fcb26b5887fc226b006a838ede289b6cae3e3c7</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Netty is an open-source, asynchronous event-driven network application framework. The package `io.netty:netty-codec-http` prior to version 4.1.77.Final contains an insufficient fix for CVE-2021-21290. When Netty's multipart decoders are used local information disclosure can occur via the local system temporary directory if temporary storing uploads on the disk is enabled. This only impacts applications running on Java version 6 and lower. Additionally, this vulnerability impacts code running on Unix-like systems, and very old versions of Mac OSX and Windows as they all share the system temporary directory between all users. Version 4.1.77.Final contains a patch for this vulnerability. As a workaround, specify one's own `java.io.tmpdir` when starting the JVM or use DefaultHttpDataFactory.setBaseDir(...) to set the directory to something that is only readable by the current user.
<p>Publish Date: 2022-05-06
<p>URL: <a href="https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24823">CVE-2022-24823</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24823">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24823</a></p>
<p>Release Date: 2022-05-06</p>
<p>Fix Resolution: io.netty:netty-all;io.netty:netty-common - 4.1.77.Final</p>
</p>
<p></p>
</details>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-common","packageVersion":"4.1.64.Final","packageFilePaths":["/config/spring-security-config.gradle"],"isTransitiveDependency":true,"dependencyTree":"io.rsocket:rsocket-core:1.1.1;io.netty:netty-buffer:4.1.64.Final;io.netty:netty-common:4.1.64.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-all;io.netty:netty-common - 4.1.77.Final","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2022-24823","vulnerabilityDetails":"Netty is an open-source, asynchronous event-driven network application framework. The package `io.netty:netty-codec-http` prior to version 4.1.77.Final contains an insufficient fix for CVE-2021-21290. When Netty\u0027s multipart decoders are used local information disclosure can occur via the local system temporary directory if temporary storing uploads on the disk is enabled. This only impacts applications running on Java version 6 and lower. Additionally, this vulnerability impacts code running on Unix-like systems, and very old versions of Mac OSX and Windows as they all share the system temporary directory between all users. Version 4.1.77.Final contains a patch for this vulnerability. As a workaround, specify one\u0027s own `java.io.tmpdir` when starting the JVM or use DefaultHttpDataFactory.setBaseDir(...) to set the directory to something that is only readable by the current user.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24823","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"None"},"extraData":{}}]</REMEDIATE> -->
|
non_process
|
rsocket core jar vulnerabilities highest severity is vulnerable library rsocket core jar path to dependency file test spring security test gradle path to vulnerable library home wss scanner gradle caches modules files io netty netty common final netty common final jar home wss scanner gradle caches modules files io netty netty common final netty common final jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available medium netty common final jar transitive n a details cve vulnerable library netty common final jar library home page a href path to dependency file config spring security config gradle path to vulnerable library home wss scanner gradle caches modules files io netty netty common final netty common final jar home wss scanner gradle caches modules files io netty netty common final netty common final jar dependency hierarchy rsocket core jar root library netty buffer final jar x netty common final jar vulnerable library found in head commit a href found in base branch main vulnerability details netty is an open source asynchronous event driven network application framework the package io netty netty codec http prior to version final contains an insufficient fix for cve when netty s multipart decoders are used local information disclosure can occur via the local system temporary directory if temporary storing uploads on the disk is enabled this only impacts applications running on java version and lower additionally this vulnerability impacts code running on unix like systems and very old versions of mac osx and windows as they all share the system temporary directory between all users version final contains a patch for this vulnerability as a workaround specify one s own java io tmpdir when starting the jvm or use defaulthttpdatafactory setbasedir to set the directory to something that is only readable by the current user publish date url a href cvss score details base score metrics exploitability 
metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty all io netty netty common final istransitivedependency true dependencytree io rsocket rsocket core io netty netty buffer final io netty netty common final isminimumfixversionavailable true minimumfixversion io netty netty all io netty netty common final isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails netty is an open source asynchronous event driven network application framework the package io netty netty codec http prior to version final contains an insufficient fix for cve when netty multipart decoders are used local information disclosure can occur via the local system temporary directory if temporary storing uploads on the disk is enabled this only impacts applications running on java version and lower additionally this vulnerability impacts code running on unix like systems and very old versions of mac osx and windows as they all share the system temporary directory between all users version final contains a patch for this vulnerability as a workaround specify one own java io tmpdir when starting the jvm or use defaulthttpdatafactory setbasedir to set the directory to something that is only readable by the current user vulnerabilityurl
| 0
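The record above describes CVE-2022-24823, whose suggested workaround is to point uploads at a directory readable only by the current user (via `java.io.tmpdir` or `DefaultHttpDataFactory.setBaseDir(...)`). The same mitigation idea can be sketched in Python; the helper name `make_private_upload_dir` is hypothetical, and the 0700 mode is a POSIX behavior of `tempfile.mkdtemp`, not anything from the advisory itself.

```python
import os
import stat
import tempfile

def make_private_upload_dir(prefix="uploads-"):
    # mkdtemp creates the directory with mode 0700 on POSIX, so only
    # the creating user can read it -- analogous to pointing Netty's
    # DefaultHttpDataFactory.setBaseDir(...) at a user-owned path
    # instead of the shared system temp directory.
    path = tempfile.mkdtemp(prefix=prefix)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return path, mode

path, mode = make_private_upload_dir()
print(oct(mode))  # 0o700 on POSIX
os.rmdir(path)
```

This is only an illustration of the per-user-directory principle; the actual fix for the CVE is the library upgrade listed in the record.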
|
7,291
| 10,439,820,438
|
IssuesEvent
|
2019-09-18 07:19:46
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
closed
|
search: archived/ active number bug
|
2.0.6 Fixed Process bug bug
|
filter by archived/ active bug.
active:


archived:

|
1.0
|
search: archived/ active number bug - filter by archived/ active bug.
active:


archived:

|
process
|
search archived active number bug filter by archived active bug active archived
| 1
|
1,111
| 3,588,388,532
|
IssuesEvent
|
2016-01-31 00:05:20
|
osresearch/vst
|
https://api.github.com/repos/osresearch/vst
|
closed
|
Processing library should sort vectors
|
processing
|
The processing library doesn't sort the vectors, so it becomes slow on a vectrex.
|
1.0
|
Processing library should sort vectors - The processing library doesn't sort the vectors, so it becomes slow on a vectrex.
|
process
|
processing library should sort vectors the processing library doesn t sort the vectors so it becomes slow on a vectrex
| 1
|
591
| 3,067,146,458
|
IssuesEvent
|
2015-08-18 08:36:41
|
maraujop/django-crispy-forms
|
https://api.github.com/repos/maraujop/django-crispy-forms
|
closed
|
Fix 8x run time for tests on Django 1.8
|
Testing/Process
|
#455 introduced **a bit of a slow down** running against Django 1.8...
<img width="1200" alt="8x slowdown on Django 1.8" src="https://cloud.githubusercontent.com/assets/64686/9290392/90fa946a-4391-11e5-85a3-d946d852236a.png">
Fix this!
|
1.0
|
Fix 8x run time for tests on Django 1.8 - #455 introduced **a bit of a slow down** running against Django 1.8...
<img width="1200" alt="8x slowdown on Django 1.8" src="https://cloud.githubusercontent.com/assets/64686/9290392/90fa946a-4391-11e5-85a3-d946d852236a.png">
Fix this!
|
process
|
fix run time for tests on django introduced a bit of a slow down running against django img width alt slowdown on django src fix this
| 1
|
177,063
| 28,315,036,140
|
IssuesEvent
|
2023-04-10 18:49:08
|
microsoft/pyright
|
https://api.github.com/repos/microsoft/pyright
|
closed
|
Recognise truthy and falsy values as such
|
enhancement request as designed
|
**Is your feature request related to a problem? Please describe.**
Currently, for the type checker only `True` and [a few more values](https://github.com/microsoft/pyright/issues/445#issuecomment-568177746) are considered truthy.
```python
# Return type inferred as `Literal[True]`.
def f():
if True:
return True
# Code is unreachable.
return False
# Return type inferred as `bool`.
def g():
if 1:
return True
return False
# Even though `f` directly returns `Literal[True]`,
# the return type here is still inferred as `bool`.
def h():
if f():
return True
return False
```
When `True` is used in a `while` condition, the type checker knows that that `True` does not provide an exit condition, which is very useful.
```python
# Return type inferred as `Literal[True]`.
def f():
while True:
if True:
return True
# Code is unreachable.
return False
# Return type inferred as `Literal[True] | None`
def g():
while f():
if True:
return True
```
Annotating the return type like so `def g() -> bool:` would make the type checker scream:
> Function with declared type of "bool" must return value on all code paths
Type "None" cannot be assigned to type "bool"
**Describe the solution you'd like**
It would be great for the type checker to recognise [truthy and falsy values](https://docs.python.org/3/library/stdtypes.html#truth-value-testing) other than `True` and `False`, even when they are used as a function returned value rather than directly.
**Additional context**
To directly address the [discussion](https://github.com/microsoft/pyright/issues/445#issuecomment-568177746) mentioned at the beginning, one common enough use case may be a function returning some information to be validated in a loop. Here is a simple example:
```python
from math import nan
def get_two_integers() -> tuple[int, int]:
return 0, 1
def divide_two_integers(n: int, m: int):
if m == 0:
return nan
return n / m
# Return type inferred as `float | None`,
# even though the `operands` tuple is a truthy value
# and the only way to leave the loop is to return a `float`.
def dividetor():
while operands := get_two_integers():
if operands[1]:
return divide_two_integers(*operands)
print("Cannot divide by 0.")
```
If anything, this issue will serve as a reference for people looking for the same feature.
|
1.0
|
Recognise truthy and falsy values as such - **Is your feature request related to a problem? Please describe.**
Currently, for the type checker only `True` and [a few more values](https://github.com/microsoft/pyright/issues/445#issuecomment-568177746) are considered truthy.
```python
# Return type inferred as `Literal[True]`.
def f():
if True:
return True
# Code is unreachable.
return False
# Return type inferred as `bool`.
def g():
if 1:
return True
return False
# Even though `f` directly returns `Literal[True]`,
# the return type here is still inferred as `bool`.
def h():
if f():
return True
return False
```
When `True` is used in a `while` condition, the type checker knows that that `True` does not provide an exit condition, which is very useful.
```python
# Return type inferred as `Literal[True]`.
def f():
while True:
if True:
return True
# Code is unreachable.
return False
# Return type inferred as `Literal[True] | None`
def g():
while f():
if True:
return True
```
Annotating the return type like so `def g() -> bool:` would make the type checker scream:
> Function with declared type of "bool" must return value on all code paths
Type "None" cannot be assigned to type "bool"
**Describe the solution you'd like**
It would be great for the type checker to recognise [truthy and falsy values](https://docs.python.org/3/library/stdtypes.html#truth-value-testing) other than `True` and `False`, even when they are used as a function returned value rather than directly.
**Additional context**
To directly address the [discussion](https://github.com/microsoft/pyright/issues/445#issuecomment-568177746) mentioned at the beginning, one common enough use case may be a function returning some information to be validated in a loop. Here is a simple example:
```python
from math import nan
def get_two_integers() -> tuple[int, int]:
return 0, 1
def divide_two_integers(n: int, m: int):
if m == 0:
return nan
return n / m
# Return type inferred as `float | None`,
# even though the `operands` tuple is a truthy value
# and the only way to leave the loop is to return a `float`.
def dividetor():
while operands := get_two_integers():
if operands[1]:
return divide_two_integers(*operands)
print("Cannot divide by 0.")
```
If anything, this issue will serve as a reference for people looking for the same feature.
|
non_process
|
recognise truthy and falsy values as such is your feature request related to a problem please describe currently for the type checker only true and are considered truthy python return type inferred as literal def f if true return true code is unreachable return false return type inferred as bool def g if return true return false even though f directly returns literal the return type here is still inferred as bool def h if f return true return false when true is used in a while condition the type checker knows that that true does not provide an exit condition which is very useful python return type inferred as literal def f while true if true return true code is unreachable return false return type inferred as literal none def g while f if true return true annotating the return type like so def g bool would make the type checker scream function with declared type of bool must return value on all code paths type none cannot be assigned to type bool describe the solution you d like it would be great for the type checker to recognise other than true and false even when they are used as a function returned value rather than directly additional context to directly address the mentioned at the beginning one common enough use case may be a function returning some information to be validated in a loop here is a simple example python from math import nan def get two integers tuple return def divide two integers n int m int if m return nan return n m return type inferred as float none even though the operands tuple is a truthy value and the only way to leave the loop is to return a float def dividetor while operands get two integers if operands return divide two integers operands print cannot divide by if anything this issue will serve as a reference for people looking for the same feature
| 0
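The pyright record above turns on Python's truth-value testing rules: at runtime many values besides `True`/`False` are truthy or falsy, while the static checker narrows only on a limited set. A minimal runnable sketch of those runtime rules (the lists of values are illustrative examples, not anything from the issue itself):

```python
# Truth-value testing per the Python docs referenced in the issue:
# empty containers, zero numbers, and None are falsy; most other
# objects are truthy, even though a static checker does not narrow
# a condition like `if 1:` the way it narrows `if True:`.
falsy = [False, None, 0, 0.0, "", [], {}, set()]
truthy = [True, 1, "x", [0], {0: 0}]

assert not any(bool(v) for v in falsy)
assert all(bool(v) for v in truthy)
print("runtime truthiness matches the documented rules")
```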
|
585,146
| 17,480,761,401
|
IssuesEvent
|
2021-08-09 01:30:43
|
PyTorchLightning/pytorch-lightning
|
https://api.github.com/repos/PyTorchLightning/pytorch-lightning
|
closed
|
Best model checkpoints only save rank zero part of the pipe paralleled model.
|
bug / fix help wanted won't fix Priority P1
|
## 🐛 Best model checkpoints only save rank zero part of the pipe paralleled model.
When saving the best checkpoint of the piped parallel model in the relevant module, only a part of the model is saved. The method of storing rank zero is suitable for the method of DP, but there is an issue in the MP.
I confirmed that the code below works well.
https://github.com/haven-jeon/pytorch-lightning/commit/9ebe8c031ded47cb31aee92fe3b64c797d11dfa4#diff-05daced8d811d9d4566136e3db8c0c35dac3312e6ca6ee6ba0ef7678fa16d051
## Please reproduce using the BoringModel
### To Reproduce
Use following [**BoringModel**](https://colab.research.google.com/drive/1HvWVVTK8j2Nj52qU4Q4YCyzOm0_aLQF3?usp=sharing) and post here
<!-- If you could not reproduce using the BoringModel and still think there's a bug, please post here -->
### Expected behavior
<!-- FILL IN -->
### Environment
**Note**: `Bugs with code` are solved faster ! `Colab Notebook` should be made `public` !
* `IDE`: Please, use our python [bug_report_model.py](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/bug_report_model.py
) template.
* `Colab Notebook`: Please copy and paste the output from our [environment collection script](https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py) (or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
```
- PyTorch Version (e.g., 1.0): 1.6.0
- OS (e.g., Linux): Centos 7
- How you installed PyTorch (`conda`, `pip`, source): pip
- Build command you used (if compiling from source):
- Python version: 3.7
- CUDA/cuDNN version: 10.2
- GPU models and configuration: P40
- Any other relevant information:
### Additional context
<!-- Add any other context about the problem here. -->
|
1.0
|
Best model checkpoints only save rank zero part of the pipe paralleled model. - ## 🐛 Best model checkpoints only save rank zero part of the pipe paralleled model.
When saving the best checkpoint of the piped parallel model in the relevant module, only a part of the model is saved. The method of storing rank zero is suitable for the method of DP, but there is an issue in the MP.
I confirmed that the code below works well.
https://github.com/haven-jeon/pytorch-lightning/commit/9ebe8c031ded47cb31aee92fe3b64c797d11dfa4#diff-05daced8d811d9d4566136e3db8c0c35dac3312e6ca6ee6ba0ef7678fa16d051
## Please reproduce using the BoringModel
### To Reproduce
Use following [**BoringModel**](https://colab.research.google.com/drive/1HvWVVTK8j2Nj52qU4Q4YCyzOm0_aLQF3?usp=sharing) and post here
<!-- If you could not reproduce using the BoringModel and still think there's a bug, please post here -->
### Expected behavior
<!-- FILL IN -->
### Environment
**Note**: `Bugs with code` are solved faster ! `Colab Notebook` should be made `public` !
* `IDE`: Please, use our python [bug_report_model.py](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/bug_report_model.py
) template.
* `Colab Notebook`: Please copy and paste the output from our [environment collection script](https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py) (or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
```
- PyTorch Version (e.g., 1.0): 1.6.0
- OS (e.g., Linux): Centos 7
- How you installed PyTorch (`conda`, `pip`, source): pip
- Build command you used (if compiling from source):
- Python version: 3.7
- CUDA/cuDNN version: 10.2
- GPU models and configuration: P40
- Any other relevant information:
### Additional context
<!-- Add any other context about the problem here. -->
|
non_process
|
best model checkpoints only save rank zero part of the pipe paralleled model 🐛 best model checkpoints only save rank zero part of the pipe paralleled model when saving the best checkpoint of the piped parallel model in the relevant module only a part of the model is saved the method of storing rank zero is suitable for the method of dp but there is an issue in the mp i confirmed that the code below works well please reproduce using the boringmodel to reproduce use following and post here expected behavior environment note bugs with code are solved faster colab notebook should be made public ide please use our python template colab notebook please copy and paste the output from our or fill out the checklist below manually you can get the script and run it with wget for security purposes please check the contents of collect env details py before running it python collect env details py pytorch version e g os e g linux centos how you installed pytorch conda pip source pip build command you used if compiling from source python version cuda cudnn version gpu models and configuration any other relevant information additional context
| 0
|
193,610
| 15,382,346,241
|
IssuesEvent
|
2021-03-03 00:26:54
|
ntrappe/vanilla_pda
|
https://api.github.com/repos/ntrappe/vanilla_pda
|
opened
|
ADRs
|
Normal Feature documentation
|
- [ ] ADR for Lightbox vs sidebar for testing
- [ ] ADR for appbar being at the bottom
- [ ] when testing step-by-step, automate Lightbox closing and whiteboard animations
|
1.0
|
ADRs - - [ ] ADR for Lightbox vs sidebar for testing
- [ ] ADR for appbar being at the bottom
- [ ] when testing step-by-step, automate Lightbox closing and whiteboard animations
|
non_process
|
adrs adr for lightbox vs sidebar for testing adr for appbar being at the bottom when testing step by step automate lightbox closing and whiteboard animations
| 0
|
128,662
| 27,301,838,269
|
IssuesEvent
|
2023-02-24 03:07:10
|
microsoft/devicescript
|
https://api.github.com/repos/microsoft/devicescript
|
closed
|
cli should start build immediately
|
vscode
|
- build sources
- build services
- build boards?
Currently the build only starts with debug
|
1.0
|
cli should start build immediately - - build sources
- build services
- build boards?
Currently the build only starts with debug
|
non_process
|
cli should start build immediately build sources build services build boards currently the build only starts with debug
| 0
|
752
| 2,869,063,311
|
IssuesEvent
|
2015-06-05 23:02:08
|
ualbertalib/HydraNorth
|
https://api.github.com/repos/ualbertalib/HydraNorth
|
opened
|
OWASP V4.3
|
team:security
|
Verify that users can only access secured data files for which they possess specific authorization.
HydraNorth notes:
Blacklight notes:
Testing notes:
|
True
|
OWASP V4.3 - Verify that users can only access secured data files for which they possess specific authorization.
HydraNorth notes:
Blacklight notes:
Testing notes:
|
non_process
|
owasp verify that users can only access secured data files for which they possess specific authorization hydranorth notes blacklight notes testing notes
| 0
|
330,731
| 24,275,044,739
|
IssuesEvent
|
2022-09-28 13:18:36
|
COS301-SE-2022/Conversation-Catcher
|
https://api.github.com/repos/COS301-SE-2022/Conversation-Catcher
|
closed
|
Add code quality badge for hosting using UptimeRobot
|
priority:medium scope:cicd status:needs-info type:documentation
|
Once hosting is set up, a badge can be added to the README that links to the URL where it is hosted.
|
1.0
|
Add code quality badge for hosting using UptimeRobot - Once hosting is set up, a badge can be added to the README that links to the URL where it is hosted.
|
non_process
|
add code quality badge for hosting using uptimerobot once hosting is set up a badge can be added to the readme that links to the url where it is hosted
| 0
|
225,592
| 7,488,937,980
|
IssuesEvent
|
2018-04-06 04:45:51
|
openshift/origin
|
https://api.github.com/repos/openshift/origin
|
closed
|
cli: oadm create-node-config needs more descriptive error messages
|
area/usability component/cli kind/enhancement lifecycle/rotten priority/P3
|
the oadm create-node-config command will display errors like:
```
error: open openshift.local.config/master/ca.crt: no such file or directory
```
however it doesn't say which argument is missing. It could be any one of:
```
--certificate-authority
--node-client-certificate-authority
--signer-cert
```
|
1.0
|
cli: oadm create-node-config needs more descriptive error messages - the oadm create-node-config command will display errors like:
```
error: open openshift.local.config/master/ca.crt: no such file or directory
```
however it doesn't say which argument is missing. It could be any one of:
```
--certificate-authority
--node-client-certificate-authority
--signer-cert
```
|
non_process
|
cli oadm create node config needs more descriptive error messages the oadm create node config command will display errors like error open openshift local config master ca crt no such file or directory however it doesn t say which argument is missing it could be any one of certificate authority node client certificate authority signer cert
| 0
|
7,443
| 10,554,977,819
|
IssuesEvent
|
2019-10-03 20:43:53
|
pelias/api
|
https://api.github.com/repos/pelias/api
|
closed
|
/search doesn't return name fields that may have been matched by query
|
glorious future processed
|
Although we match on all name.\* fields only name.default is returned to the user and displayed in the front-end.
in effect this means that a record like [1] which has the following tags will be searchable via the russian name but will return the english name to the user:
{
'name': 'Trafalgar Square',
'name:ru': 'Трафальгарская площадь'
}
the above is not too big a deal as we are providing an english-only service at the moment; there are also examples like the mayor's office in london [2] with the following tags:
{
'name': '30 St Mary Axe',
'loc_name': 'The Gherkin'
}
the building itself it called '30 St Mary Axe' but absolutely everyone affectionately refers to it as 'The Gherkin' [3], so if you searched for 'The Gherkin', your result would match and return '30 St Mary Axe'.
/suggest already has the same behaviour here: https://pelias.mapzen.com/suggest?input=The%20Gherkin&lat=51.53177&lon=-0.06672&size=10&zoom=18
[1] https://www.openstreetmap.org/relation/3962877
[2] https://www.openstreetmap.org/way/4959489
[3] gherkin
|
1.0
|
/search doesn't return name fields that may have been matched by query - Although we match on all name.\* fields only name.default is returned to the user and displayed in the front-end.
in effect this means that a record like [1] which has the following tags will be searchable via the russian name but will return the english name to the user:
{
'name': 'Trafalgar Square',
'name:ru': 'Трафальгарская площадь'
}
the above is not too big a deal as we are providing an english-only service at the moment; there are also examples like the mayor's office in london [2] with the following tags:
{
'name': '30 St Mary Axe',
'loc_name': 'The Gherkin'
}
the building itself is called '30 St Mary Axe', but absolutely everyone affectionately refers to it as 'The Gherkin' [3], so if you searched for 'The Gherkin', your result would match and return '30 St Mary Axe'.
/suggest already has the same behaviour here: https://pelias.mapzen.com/suggest?input=The%20Gherkin&lat=51.53177&lon=-0.06672&size=10&zoom=18
[1] https://www.openstreetmap.org/relation/3962877
[2] https://www.openstreetmap.org/way/4959489
[3] gherkin
|
process
|
search doesn t return name fields that may have been matched by query although we match on all name fields only name default is returned to the user and displayed in the front end in effect this means that a record like which has the following tags will be searchable via the russian name but will return the english name to the user name trafalgar square name ru трафальгарская площадь the above is not too big a deal as we are providing an english only service at the moment there are also examples like the mayor s office in london with the following tags name st mary axe loc name the gherkin the building itself it called st mary axe but absolutely everyone affectionately refers to it as the gherkin so if you searched for the gherkin your result would match and return st mary axe suggest already has the same behaviour here gherkin
| 1
|
143,900
| 5,532,419,761
|
IssuesEvent
|
2017-03-21 10:34:31
|
juju/docs
|
https://api.github.com/repos/juju/docs
|
closed
|
`charm build` should be `charm build .`
|
2.0 2.1 high priority
|
Moved from https://github.com/CanonicalLtd/jujucharms.com/issues/399
Regarding https://jujucharms.com/docs/2.0/authors-charm-building
See my bug here https://bugs.launchpad.net/ubuntu/+source/charm-tools/+bug/1666301
You get different results when running `charm build` and `charm build .`
|
1.0
|
`charm build` should be `charm build .` - Moved from https://github.com/CanonicalLtd/jujucharms.com/issues/399
Regarding https://jujucharms.com/docs/2.0/authors-charm-building
See my bug here https://bugs.launchpad.net/ubuntu/+source/charm-tools/+bug/1666301
You get different results when running `charm build` and `charm build .`
|
non_process
|
charm build should be charm build moved from regarding see my bug here you get different results when running charm build and charm build
| 0
|
830,969
| 32,032,986,908
|
IssuesEvent
|
2023-09-22 13:33:37
|
puyu-pe/puka
|
https://api.github.com/repos/puyu-pe/puka
|
closed
|
Implement a tray icon with the ability to clear the print queue and show the items in the print queue
|
point: 2 priority: highest type:feature
|
# **🚀 Feature Request**
## **Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
* A feature should be provided to show the number of items in the print queue
* There should be an option to clear the print queue
---
## **Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
*
---
## **Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
*
---
### **Additional context**
<!-- Add any other context or additional information about the problem here.-->
*
<!--📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛
Oh, hi there! 😄
To expedite issue processing, please search open and closed issues before submitting a new one.
Please read our Rules of Conduct at this repository's `.github/CODE_OF_CONDUCT.md`
📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛-->
|
1.0
|
Implement a tray icon with the ability to clear the print queue and show the items in the print queue - # **🚀 Feature Request**
## **Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
* A feature should be provided to show the number of items in the print queue
* There should be an option to clear the print queue
---
## **Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
*
---
## **Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
*
---
### **Additional context**
<!-- Add any other context or additional information about the problem here.-->
*
<!--📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛
Oh, hi there! 😄
To expedite issue processing, please search open and closed issues before submitting a new one.
Please read our Rules of Conduct at this repository's `.github/CODE_OF_CONDUCT.md`
📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛-->
|
non_process
|
implementar trayicon con funcionalidad de liberar cola de impresión y mostrar elementos en cola de impresión 🚀 feature request is your feature request related to a problem please describe se debe ofrecer una funcionalidad para mostrar el numero de elementos en cola de impresión se debe tener una opcion para liberar cola de impresión describe the solution you d like describe alternatives you ve considered additional context 📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛 oh hi there 😄 to expedite issue processing please search open and closed issues before submitting a new one please read our rules of conduct at this repository s github code of conduct md 📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛
| 0
|
53,983
| 13,891,026,790
|
IssuesEvent
|
2020-10-19 10:04:20
|
shaimael/Maven-Java-Example
|
https://api.github.com/repos/shaimael/Maven-Java-Example
|
opened
|
CVE-2017-11467 (High) detected in orientdb-core-2.1.9.jar
|
security vulnerability
|
## CVE-2017-11467 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>orientdb-core-2.1.9.jar</b></p></summary>
<p>OrientDB NoSQL document graph dbms</p>
<p>Library home page: <a href="http://www.orientechnologies.com">http://www.orientechnologies.com</a></p>
<p>Path to dependency file: Maven-Java-Example/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/orientechnologies/orientdb-core/2.1.9/orientdb-core-2.1.9.jar</p>
<p>
Dependency Hierarchy:
- orientdb-server-2.1.9.jar (Root Library)
- orientdb-client-2.1.9.jar
- orientdb-enterprise-2.1.9.jar
- :x: **orientdb-core-2.1.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/shaimael/Maven-Java-Example/commits/45d147049c0f73132be85be6b29a4978139e1755">45d147049c0f73132be85be6b29a4978139e1755</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
OrientDB through 2.2.22 does not enforce privilege requirements during "where" or "fetchplan" or "order by" use, which allows remote attackers to execute arbitrary OS commands via a crafted request.
<p>Publish Date: 2017-07-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-11467>CVE-2017-11467</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-11467">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-11467</a></p>
<p>Release Date: 2017-07-20</p>
<p>Fix Resolution: 2.2.23</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.orientechnologies","packageName":"orientdb-core","packageVersion":"2.1.9","isTransitiveDependency":true,"dependencyTree":"com.orientechnologies:orientdb-server:2.1.9;com.orientechnologies:orientdb-client:2.1.9;com.orientechnologies:orientdb-enterprise:2.1.9;com.orientechnologies:orientdb-core:2.1.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.2.23"}],"vulnerabilityIdentifier":"CVE-2017-11467","vulnerabilityDetails":"OrientDB through 2.2.22 does not enforce privilege requirements during \"where\" or \"fetchplan\" or \"order by\" use, which allows remote attackers to execute arbitrary OS commands via a crafted request.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-11467","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
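The suggested fix (upgrading the transitive `orientdb-core` dependency to 2.2.23) can be pinned in the consuming project with a standard Maven `dependencyManagement` override; a sketch for `pom.xml`:

```xml
<!-- Pin the vulnerable transitive dependency to the fixed version. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.orientechnologies</groupId>
      <artifactId>orientdb-core</artifactId>
      <version>2.2.23</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```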
|
True
|
CVE-2017-11467 (High) detected in orientdb-core-2.1.9.jar - ## CVE-2017-11467 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>orientdb-core-2.1.9.jar</b></p></summary>
<p>OrientDB NoSQL document graph dbms</p>
<p>Library home page: <a href="http://www.orientechnologies.com">http://www.orientechnologies.com</a></p>
<p>Path to dependency file: Maven-Java-Example/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/orientechnologies/orientdb-core/2.1.9/orientdb-core-2.1.9.jar</p>
<p>
Dependency Hierarchy:
- orientdb-server-2.1.9.jar (Root Library)
- orientdb-client-2.1.9.jar
- orientdb-enterprise-2.1.9.jar
- :x: **orientdb-core-2.1.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/shaimael/Maven-Java-Example/commits/45d147049c0f73132be85be6b29a4978139e1755">45d147049c0f73132be85be6b29a4978139e1755</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
OrientDB through 2.2.22 does not enforce privilege requirements during "where" or "fetchplan" or "order by" use, which allows remote attackers to execute arbitrary OS commands via a crafted request.
<p>Publish Date: 2017-07-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-11467>CVE-2017-11467</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-11467">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-11467</a></p>
<p>Release Date: 2017-07-20</p>
<p>Fix Resolution: 2.2.23</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.orientechnologies","packageName":"orientdb-core","packageVersion":"2.1.9","isTransitiveDependency":true,"dependencyTree":"com.orientechnologies:orientdb-server:2.1.9;com.orientechnologies:orientdb-client:2.1.9;com.orientechnologies:orientdb-enterprise:2.1.9;com.orientechnologies:orientdb-core:2.1.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.2.23"}],"vulnerabilityIdentifier":"CVE-2017-11467","vulnerabilityDetails":"OrientDB through 2.2.22 does not enforce privilege requirements during \"where\" or \"fetchplan\" or \"order by\" use, which allows remote attackers to execute arbitrary OS commands via a crafted request.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-11467","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in orientdb core jar cve high severity vulnerability vulnerable library orientdb core jar orientdb nosql document graph dbms library home page a href path to dependency file maven java example pom xml path to vulnerable library home wss scanner repository com orientechnologies orientdb core orientdb core jar dependency hierarchy orientdb server jar root library orientdb client jar orientdb enterprise jar x orientdb core jar vulnerable library found in head commit a href found in base branch master vulnerability details orientdb through does not enforce privilege requirements during where or fetchplan or order by use which allows remote attackers to execute arbitrary os commands via a crafted request publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails orientdb through does not enforce privilege requirements during where or fetchplan or order by use which allows remote attackers to execute arbitrary os commands via a crafted request vulnerabilityurl
| 0
|
413,512
| 27,956,115,486
|
IssuesEvent
|
2023-03-24 12:32:20
|
typescript-eslint/typescript-eslint
|
https://api.github.com/repos/typescript-eslint/typescript-eslint
|
opened
|
Docs: v6 blog post links to main (v5) docs
|
documentation accepting prs
|
### Before You File a Documentation Request Please Confirm You Have Done The Following...
- [X] I have looked for existing [open or closed documentation requests](https://github.com/typescript-eslint/typescript-eslint/issues?q=is%3Aissue+label%3Adocumentation) that match my proposal.
- [X] I have [read the FAQ](https://typescript-eslint.io/linting/troubleshooting) and my problem is not listed.
### Suggested Changes
Thanks @reintroducing for mentioning this in https://github.com/typescript-eslint/typescript-eslint/discussions/6014#discussioncomment-5411631!
https://typescript-eslint.io/blog/announcing-typescript-eslint-v6-beta includes links to docs, but those links are all on the main deployment of the website - which is still v5. We should change those links to point to the v6 preview of the site.
### Affected URL(s)
https://typescript-eslint.io/blog/announcing-typescript-eslint-v6-beta
|
1.0
|
Docs: v6 blog post links to main (v5) docs - ### Before You File a Documentation Request Please Confirm You Have Done The Following...
- [X] I have looked for existing [open or closed documentation requests](https://github.com/typescript-eslint/typescript-eslint/issues?q=is%3Aissue+label%3Adocumentation) that match my proposal.
- [X] I have [read the FAQ](https://typescript-eslint.io/linting/troubleshooting) and my problem is not listed.
### Suggested Changes
Thanks @reintroducing for mentioning this in https://github.com/typescript-eslint/typescript-eslint/discussions/6014#discussioncomment-5411631!
https://typescript-eslint.io/blog/announcing-typescript-eslint-v6-beta includes links to docs, but those links are all on the main deployment of the website - which is still v5. We should change those links to point to the v6 preview of the site.
### Affected URL(s)
https://typescript-eslint.io/blog/announcing-typescript-eslint-v6-beta
|
non_process
|
docs blog post links to main docs before you file a documentation request please confirm you have done the following i have looked for existing that match my proposal i have and my problem is not listed suggested changes thanks reintroducing for mentioning this in includes links to docs but those links are all on the main deployment of the website which is still we should change those links to point to the preview of the site affected url s
| 0
|
6,535
| 9,634,118,357
|
IssuesEvent
|
2019-05-15 20:24:25
|
peteroas/Apprenticeship-Curriculum
|
https://api.github.com/repos/peteroas/Apprenticeship-Curriculum
|
closed
|
Agile principles
|
process
|
Read the 12 Agile Principles here:
https://en.wikipedia.org/wiki/Agile_software_development#Agile_software_development_principles
Questions:
- What do you think?
- How do agile teams measure progress?
- How do agile teams work together?
- What are some ways in which the Revelry "Lean Agile" approach addresses the
12 Agile software development principles? Do you see any ways in which our
approach doesn't address the 12 principles?
|
1.0
|
Agile principles -
Read the 12 Agile Principles here:
https://en.wikipedia.org/wiki/Agile_software_development#Agile_software_development_principles
Questions:
- What do you think?
- How do agile teams measure progress?
- How do agile teams work together?
- What are some ways in which the Revelry "Lean Agile" approach addresses the
12 Agile software development principles? Do you see any ways in which our
approach doesn't address the 12 principles?
|
process
|
agile principles read the agile principles here questions what do you think how do agile teams measure progress how do agile teams work together what are some ways in which the revelry lean agile approach addresses the agile software development principles do you see any ways in which our approach doesn t address the principles
| 1
|
22,110
| 30,640,173,621
|
IssuesEvent
|
2023-07-24 21:11:11
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
Release 6.4.0 - September 2023
|
P1 type: process release team-OSS
|
# Status of Bazel 6.4.0
- Expected first release candidate date: 2023-09-18
- Expected release date: 2023-09-25
- [List of release blockers](https://github.com/bazelbuild/bazel/milestone/57)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into 6.4.0, simply send a PR against the `release-6.4.0` branch.
**Task list:**
- [ ] Create release candidate
- [ ] Check downstream projects
- [ ] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) <!-- Note that there should be a new Bazel Release Announcement document for every major release. For minor and patch releases, use the latest open doc. -->
- [ ] Push the release and notify package maintainers
- [ ] Update the documentation
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
1.0
|
Release 6.4.0 - September 2023 - # Status of Bazel 6.4.0
- Expected first release candidate date: 2023-09-18
- Expected release date: 2023-09-25
- [List of release blockers](https://github.com/bazelbuild/bazel/milestone/57)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into 6.4.0, simply send a PR against the `release-6.4.0` branch.
**Task list:**
- [ ] Create release candidate
- [ ] Check downstream projects
- [ ] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) <!-- Note that there should be a new Bazel Release Announcement document for every major release. For minor and patch releases, use the latest open doc. -->
- [ ] Push the release and notify package maintainers
- [ ] Update the documentation
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
process
|
release september status of bazel expected first release candidate date expected release date to report a release blocking bug please add a comment with the text bazel io flag to the issue a release manager will triage it and add it to the milestone to cherry pick a mainline commit into simply send a pr against the release branch task list create release candidate check downstream projects create push the release and notify package maintainers update the documentation update the
| 1
|
103,188
| 8,882,712,199
|
IssuesEvent
|
2019-01-14 13:59:05
|
dictation-toolbox/dragonfly
|
https://api.github.com/repos/dictation-toolbox/dragonfly
|
closed
|
Automatically load and write tests for other supported dragonfly languages
|
Enhancement Testing
|
I noticed there are integer and digit content implementations under [dragonfly.language.other](https://github.com/Danesprite/dragonfly/tree/master/dragonfly/language/other) for Arabic, Indonesian and Malaysian. I suspect these have been left here because neither Dragon NaturallySpeaking nor Windows Speech Recognition supports them. Now that dragonfly supports using other speech recognition engines, these should be automatically loaded when the proper language code is used.
There are unit tests for the English, German and Dutch integer and digit classes. It should be simple enough to add similar tests for the other languages. The test engine (issue #36) should help with running these tests without a real speech recognition engine.
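The loading step could be as simple as a mapping from language codes to content modules. This is a minimal sketch under assumptions: the module paths below are hypothetical, and the actual dragonfly package layout and engine wiring may differ.

```python
import importlib

# Hypothetical mapping of ISO language codes to content modules; the exact
# module paths are assumptions based on the repository layout described above.
LANGUAGE_MODULES = {
    "en": "dragonfly.language.en",
    "de": "dragonfly.language.de",
    "nl": "dragonfly.language.nl",
    "ar": "dragonfly.language.other",  # Arabic content lives under "other"
}

def load_language(code, modules=LANGUAGE_MODULES):
    """Import the content module for a language code, or raise ValueError."""
    try:
        return importlib.import_module(modules[code])
    except KeyError:
        raise ValueError("unsupported language code: %r" % code)
```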
|
1.0
|
Automatically load and write tests for other supported dragonfly languages - I noticed there are integer and digit content implementations under [dragonfly.language.other](https://github.com/Danesprite/dragonfly/tree/master/dragonfly/language/other) for Arabic, Indonesian and Malaysian. I suspect these have been left here because neither Dragon NaturallySpeaking nor Windows Speech Recognition supports them. Now that dragonfly supports using other speech recognition engines, these should be automatically loaded when the proper language code is used.
There are unit tests for the English, German and Dutch integer and digit classes. It should be simple enough to add similar tests for the other languages. The test engine (issue #36) should help with running these tests without a real speech recognition engine.
|
non_process
|
automatically load and write tests for other supported dragonfly languages i noticed there are integer and digit content implementations under for arabic indonesian and malaysian i suspect these have been left here because neither dragon naturallyspeaking or windows speech recognition supports them now that dragonfly supports using other speech recognition engines these should be automatically loaded when the proper language code is used there are unit tests for the english german and dutch integer and digit classes it should be simple enough to add similar tests for the other languages the test engine issue should help with running these tests without a real speech recognition engine
| 0
|
72,180
| 9,558,245,631
|
IssuesEvent
|
2019-05-03 13:46:50
|
iborzenkov/CryptoApisSdkLibrary
|
https://api.github.com/repos/iborzenkov/CryptoApisSdkLibrary
|
opened
|
#004 Server response does not match documentation
|
documentation
|
### End Point
Historical Data
### URL
https://docs.cryptoapis.io/#historical-data
### Description
There are no parameters (“index”, “limit”, “results”) in the server responses, but they are in the documentation.
### Details
The problem is the same as #2
|
1.0
|
#004 Server response does not match documentation - ### End Point
Historical Data
### URL
https://docs.cryptoapis.io/#historical-data
### Description
There are no parameters (“index”, “limit”, “results”) in the server responses, but they are in the documentation.
### Details
The problem is the same as #2
|
non_process
|
server response does not match documentation end point historical data url description there are no parameters “index” “limit” “results” in the server responses but they are in the documentation details the problem same as
| 0
|
262,750
| 8,272,432,571
|
IssuesEvent
|
2018-09-16 20:06:17
|
javaee/glassfish
|
https://api.github.com/repos/javaee/glassfish
|
closed
|
Include a version in the domain.xml
|
3_1-exclude Component: admin ERR: Assignee Priority: Major Type: New Feature
|
During upgrades and updates from prior releases it has been necessary
for the server to (1) determine the version of GF the domain.xml file is
supporting (e.g. v2.1, 3.0.1, etc) and (2) make adjustments to the contents
of the file in preparation to support the current version of GF.
We have employed various methods to determine the current version of
the domain.xml file such as looking for the absence of certain
required elements.
By adding a new "version" element or attribute which represents the
version of the file this could assist future versions of GF during
update and upgrade procedures. It would provide a reliable way to detect
the version of the file.
#### Environment
Generic
#### Affected Versions
[3.1]
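For illustration only, the proposed marker could take either of these hypothetical forms in `domain.xml` (the actual element/attribute name would be chosen by the admin infrastructure team):

```xml
<!-- as an attribute on the root element -->
<domain version="3.1">
  ...
</domain>

<!-- or as a dedicated child element -->
<domain>
  <config-version>3.1</config-version>
  ...
</domain>
```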
|
1.0
|
Include a version in the domain.xml - During upgrades and updates from prior releases it has been necessary
for the server to (1) determine the version of GF the domain.xml file is
supporting (e.g. v2.1, 3.0.1, etc) and (2) make adjustments to the contents
of the file in preparation to support the current version of GF.
We have employed various methods to determine the current version of
the domain.xml file such as looking for the absence of certain
required elements.
By adding a new "version" element or attribute which represents the
version of the file this could assist future versions of GF during
update and upgrade procedures. It would provide a reliable way to detect
the version of the file.
#### Environment
Generic
#### Affected Versions
[3.1]
|
non_process
|
include a version in the domain xml during upgrades and updates from prior releases it has been necessary for the server to determine the version of gf the domain xml file is supporting e g etc and make adjustments to the contents of the file in preparation to support the current version of gf we have employed various methods to determine the current version of the domain xml file such as looking for the absence of certain required elements by adding a new version element or attribute which represents the version of the file this could assist future versions of gf during update and upgrade procedures it would provide a reliable way to detect the version of the file environment generic affected versions
| 0
|
1,500
| 4,076,178,109
|
IssuesEvent
|
2016-05-29 18:38:36
|
mincong-h/gsoc-hsearch
|
https://api.github.com/repos/mincong-h/gsoc-hsearch
|
closed
|
Parallel processing for IdProducerBatchlet
|
parallel processing question wontfix
|
Since commit 14a638a, the `IdProducerBatchlet` can load identifiers of all entities. However, the process is relatively slow, because it is running in a single thread. Can we apply some technologies to accelerate the loading process, e.g. using parallel processing ?
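The idea can be sketched as follows (in Python for brevity, not the Java batchlet itself): split the identifier loading across a thread pool instead of one thread. `fetch_ids_for` is a placeholder for the per-entity database query.

```python
# Illustrative sketch: load entity identifiers with a thread pool.
from concurrent.futures import ThreadPoolExecutor

def fetch_ids_for(entity):
    return ["%s-%d" % (entity, i) for i in range(3)]  # stand-in for a DB query

def load_all_ids(entities, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        chunks = pool.map(fetch_ids_for, entities)  # map() preserves input order
    return [ident for chunk in chunks for ident in chunk]

load_all_ids(["Book", "Author"])  # returns ["Book-0", ..., "Author-2"]
```

Whether this actually speeds up the batchlet depends on where the bottleneck is (database round-trips parallelize well; CPU-bound work in a single process may not).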
|
1.0
|
Parallel processing for IdProducerBatchlet - Since commit 14a638a, the `IdProducerBatchlet` can load identifiers of all entities. However, the process is relatively slow, because it is running in a single thread. Can we apply some technologies to accelerate the loading process, e.g. using parallel processing ?
|
process
|
parallel processing for idproducerbatchlet since commit the idproducerbatchlet can load identifiers of all entities however the process is relatively slow because it is running in a single thread can we apply some technologies to accelerate the loading process e g using parallel processing
| 1
|
100,118
| 30,621,308,212
|
IssuesEvent
|
2023-07-24 08:28:04
|
sandboxie-plus/Sandboxie
|
https://api.github.com/repos/sandboxie-plus/Sandboxie
|
closed
|
[1.0.22] OpenClipboard=n does not block clipboard if the program is "Forced Running"
|
fixed in next build Workaround Issue reproduced
|
### What happened?
Title says, here are the scenarios that were tested.
❌ A program executed from ```ForceFolder``` can still access the clipboard.
❌ A program executed via ```ForceProcess``` can still access the clipboard.
✔ Manually running the program via context menu and selecting the sandbox with OpenClipboard=n set works properly, it cannot access the clipboard.
I will be using ```DefaultBox (Hardened)``` with ```OpenClipboard=n``` and ```ForceFolder=D:\Downloads``` for testing.
See the following images for the scenarios
[SCENARIO 1 - Forced Running]
1. Copy a random text.

2. Directly execute the program on the forced location "D:\Downloads"


3. Verify that the status is Forced Running

4. Paste the copied text from step 1.

[SCENARIO 2 - Running]
1. Copy a random text.

2. Execute the program via the Sandboxie Context Menu


3. Verify that the status is Running

4. Paste the copied text from step 1.

### Download link
N/A
### To Reproduce
_No response_
### Expected behavior
Clipboard access must be blocked
### What is your Windows edition and version?
Windows 10 Pro Education 21H2 x64 (19044.1706)
### In which Windows account you have this problem?
A local or Microsoft account without special changes.
### Please mention any installed security software
Symantec Endpoint Protection 14.3
### What version of Sandboxie are you running?
1.0.22
### Is it a regression?
_No response_
### List of affected browsers
_No response_
### In which sandbox type you have this problem?
Not relevant to my request.
### Where is the program located?
Not relevant to my request.
### Can you reproduce this problem on an empty sandbox?
Not relevant to my request.
### Did you previously enable some security policy settings outside Sandboxie?
_No response_
### Crash dump
_No response_
### Trace log
_No response_
### Sandboxie.ini configuration
_No response_
|
1.0
|
[1.0.22] OpenClipboard=n does not block clipboard if the program is "Forced Running" - ### What happened?
Title says, here are the scenarios that were tested.
❌ A program executed from ```ForceFolder``` can still access the clipboard.
❌ A program executed via ```ForceProcess``` can still access the clipboard.
✔ Manually running the program via context menu and selecting the sandbox with OpenClipboard=n set works properly, it cannot access the clipboard.
I will be using ```DefaultBox (Hardened)``` with ```OpenClipboard=n``` and ```ForceFolder=D:\Downloads``` for testing.
See the following images for the scenarios
[SCENARIO 1 - Forced Running]
1. Copy a random text.

2. Directly execute the program on the forced location "D:\Downloads"


3. Verify that the status is Forced Running

4. Paste the copied text from step 1.

[SCENARIO 2 - Running]
1. Copy a random text.

2. Execute the program via the Sandboxie Context Menu


3. Verify that the status is Running

4. Paste the copied text from step 1.

### Download link
N/A
### To Reproduce
_No response_
### Expected behavior
Clipboard access must be blocked
### What is your Windows edition and version?
Windows 10 Pro Education 21H2 x64 (19044.1706)
### In which Windows account you have this problem?
A local or Microsoft account without special changes.
### Please mention any installed security software
Symantec Endpoint Protection 14.3
### What version of Sandboxie are you running?
1.0.22
### Is it a regression?
_No response_
### List of affected browsers
_No response_
### In which sandbox type you have this problem?
Not relevant to my request.
### Where is the program located?
Not relevant to my request.
### Can you reproduce this problem on an empty sandbox?
Not relevant to my request.
### Did you previously enable some security policy settings outside Sandboxie?
_No response_
### Crash dump
_No response_
### Trace log
_No response_
### Sandboxie.ini configuration
_No response_
|
non_process
|
openclipboard n does not block clipboard if the program is forced running what happened title says here are the scenarios that were tested ❌ a program executed from forcefolder can still access the clipboard ❌ a program executed via forceprocess can still access the clipboard ✔ manually running the program via context menu and selecting the sandbox with openclipboard n set works properly it cannot access the clipboard i will be using defaultbox hardened with openclipboard n and forcefolder d downloads for testing see the following images for the scenarios copy a random text directly execute the program on the forced location d downloads verify if the status if forced running paste the copied text from step copy a random text execute the program via the sandboxie context menu verify if the status if running paste the copied text from step download link n a to reproduce no response expected behavior clipboard access must be blockd what is your windows edition and version windows pro education in which windows account you have this problem a local or microsoft account without special changes please mention any installed security software symantec endpoint protection what version of sandboxie are you running is it a regression no response list of affected browsers no response in which sandbox type you have this problem not relevant to my request where is the program located not relevant to my request can you reproduce this problem on an empty sandbox not relevant to my request did you previously enable some security policy settings outside sandboxie no response crash dump no response trace log no response sandboxie ini configuration no response
| 0
|
72,719
| 9,605,982,532
|
IssuesEvent
|
2019-05-11 05:48:34
|
docker-solr/docker-solr
|
https://api.github.com/repos/docker-solr/docker-solr
|
closed
|
Solr and Zookeeper on Docker 1.12 Swarm
|
documentation
|
Will there be an updated example in the docs on how to use Solr and Zookeeper on 3 machines using Docker 1.12 Swarm?
|
1.0
|
Solr and Zookeeper on Docker 1.12 Swarm - Will there be an updated example in the docs on how to use Solr and Zookeeper on 3 machines using Docker 1.12 Swarm?
|
non_process
|
solr and zookeeper on docker swarm will there be an updated example in the docs on how to use solr and zookeeper on machines using docker swarm
| 0
|
13,505
| 5,391,807,449
|
IssuesEvent
|
2017-02-26 03:26:02
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
closed
|
Check failed: stream->parent()->GetConvolveAlgorithms(&algorithms)
|
stat:awaiting response type:build/install
|
I got the following error when I run `python cifar10_train.py`.
```
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:390] Loaded runtime CuDNN library: 5005 (compatibility version 5000) but source was compiled with 5105 (compatibility version 5100). If using a binary install, upgrade your CuDNN library to match. If building from sources, make sure the library loaded at runtime matches a compatible version specified during compile configuration.
F c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\kernels\conv_ops.cc:605] Check failed: stream->parent()->GetConvolveAlgorithms(&algorithms)
```
Operating System: Windows 10
CUDA: `Cuda compilation tools, release 8.0, V8.0.44`
cuDNN: 5.1
tensorflow: 1.0.0
The output from `python -c "import tensorflow; print(tensorflow.__version__)"`
```
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:135] successfully opened CUDA library cublas64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:135] successfully opened CUDA library cudnn64_5.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:135] successfully opened CUDA library cufft64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:135] successfully opened CUDA library nvcuda.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:135] successfully opened CUDA library curand64_80.dll locally
1.0.0
```
I have upgraded cudnn from 5.0 to 5.1. But it didn't work.
|
1.0
|
Check failed: stream->parent()->GetConvolveAlgorithms(&algorithms) - I got the following error when I run `python cifar10_train.py`.
```
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:390] Loaded runtime CuDNN library: 5005 (compatibility version 5000) but source was compiled with 5105 (compatibility version 5100). If using a binary install, upgrade your CuDNN library to match. If building from sources, make sure the library loaded at runtime matches a compatible version specified during compile configuration.
F c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\kernels\conv_ops.cc:605] Check failed: stream->parent()->GetConvolveAlgorithms(&algorithms)
```
Operating System: Windows 10
CUDA: `Cuda compilation tools, release 8.0, V8.0.44`
cuDNN: 5.1
tensorflow: 1.0.0
The output from `python -c "import tensorflow; print(tensorflow.__version__)"`
```
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:135] successfully opened CUDA library cublas64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:135] successfully opened CUDA library cudnn64_5.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:135] successfully opened CUDA library cufft64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:135] successfully opened CUDA library nvcuda.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:135] successfully opened CUDA library curand64_80.dll locally
1.0.0
```
I have upgraded cudnn from 5.0 to 5.1. But it didn't work.
|
non_process
|
check failed stream parent getconvolvealgorithms algorithms i got the following error when i run python train py e c tf jenkins home workspace release win device gpu os windows tensorflow stream executor cuda cuda dnn cc loaded runtime cudnn library compatibility version but source was compiled with compatibility version if using a binary install upgrade your cudnn library to match if building from sources make sure the library loaded at runtime matches a compatible version specified during compile configuration f c tf jenkins home workspace release win device gpu os windows tensorflow core kernels conv ops cc check failed stream parent getconvolvealgorithms algorithms operating system windows cuda cuda compilation tools release cudnn tensorflow the output from python c import tensorflow print tensorflow version i c tf jenkins home workspace release win device gpu os windows tensorflow stream executor dso loader cc successfully opened cuda library dll locally i c tf jenkins home workspace release win device gpu os windows tensorflow stream executor dso loader cc successfully opened cuda library dll locally i c tf jenkins home workspace release win device gpu os windows tensorflow stream executor dso loader cc successfully opened cuda library dll locally i c tf jenkins home workspace release win device gpu os windows tensorflow stream executor dso loader cc successfully opened cuda library nvcuda dll locally i c tf jenkins home workspace release win device gpu os windows tensorflow stream executor dso loader cc successfully opened cuda library dll locally i have upgrade cudnn from to but it didn t work
| 0
|
2,811
| 5,738,573,777
|
IssuesEvent
|
2017-04-23 05:50:12
|
SIMEXP/niak
|
https://api.github.com/repos/SIMEXP/niak
|
closed
|
fread memory leak
|
bug preprocessing
|
loading a big functional volume (like the HCP rest one) makes niak_read_vol stall for eternity. The problem comes from the fread function used in octave (matlab does not have this issue).
- One solution would be to make a temporary hack to fread big volumes by chunks
- Other solution that I don't get yet is to change the way we call fread like discussed here : http://octave.1599824.n4.nabble.com/memory-leak-td1625302.html
|
1.0
|
fread memory leak - loading a big functional volume (like the HCP rest one) makes niak_read_vol stall for eternity. The problem comes from the fread function used in octave (matlab does not have this issue).
- One solution would be to make a temporary hack to fread big volumes by chunks
- Other solution that I don't get yet is to change the way we call fread like discussed here : http://octave.1599824.n4.nabble.com/memory-leak-td1625302.html
|
process
|
fread memory leak loading big functional volume like hcp rest one make the niak read vol to stall for the eternity the problem come from the fread function used in octave matlab does not have this issue one solution would be to make a temporary hack to fread big volumes by chunks other solution that i don t get it yet is to chage the way we call fread like discussed here
| 1
|
626,028
| 19,783,650,711
|
IssuesEvent
|
2022-01-18 02:13:04
|
Baystation12/Baystation12
|
https://api.github.com/repos/Baystation12/Baystation12
|
closed
|
Dionas broken healing
|
Bug :bug: Priority: Low
|
<!--
If a specific field doesn't apply, remove it!
Anything inside tags like these is a comment and will not be displayed in the final issue.
Be careful not to write inside them!
Joke or spammed issues can and will result in punishment.
PUT YOUR ANSWERS ON THE BLANK LINES BELOW THE HEADERS
(The lines with four #'s)
Don't edit them or delete them it's part of the formatting
-->
#### Description of issue
If you will get shocked one (or more) time/s You will stop healing as dionea
#### Difference between expected and actual behavior
Heal>No heal = i want to heal
#### Steps to reproduce
Open some machine, zap the shit out of yourself > Enjoy not healing.
#### Specific information for locating
<!-- e.g. an object name, paste specific message outputs... -->
Vendors, doors, does not matter, it must be from hacking i guess.
#### Length of time in which bug has been known to occur
<!--
Be specific if you approximately know the time it's been occurring
for—this can speed up finding the source. If you're not sure
about it, tell us too!
-->
Whenever you wish to zap yourself
#### Client version, Server revision & Game ID
<!-- Found with the "Show server revision" verb in the OOC tab in game. -->
Client Version: 511
Server Revision: 68953809e86159a9c8d8597afc0be1a88618ed91 - dev -
Game ID: bQl-bB7n
Current map: SEV Torch
#### Issue bingo
Please check whatever applies. More checkboxes checked increase your chances of the issue being looked at sooner.
<!-- Check these by writing an x inside the [ ] (like this: [x])-->
<!-- Don't forget to remove the space between the brackets, or it won't work! -->
- [x ] Issue could be reproduced at least once
- [ ] Issue could be reproduced by different players
- [x ] Issue could be reproduced in multiple rounds
- [x ] Issue happened in a recent (less than 7 days ago) round
- [ ] [Couldn't find an existing issue about this](https://github.com/Baystation12/Baystation12/issues)
|
1.0
|
Dionas broken healing - <!--
If a specific field doesn't apply, remove it!
Anything inside tags like these is a comment and will not be displayed in the final issue.
Be careful not to write inside them!
Joke or spammed issues can and will result in punishment.
PUT YOUR ANSWERS ON THE BLANK LINES BELOW THE HEADERS
(The lines with four #'s)
Don't edit them or delete them it's part of the formatting
-->
#### Description of issue
If you will get shocked one (or more) time/s You will stop healing as dionea
#### Difference between expected and actual behavior
Heal>No heal = i want to heal
#### Steps to reproduce
Open some machine, zap the shit out of yourself > Enjoy not healing.
#### Specific information for locating
<!-- e.g. an object name, paste specific message outputs... -->
Vendors, doors, does not matter, it must be from hacking i guess.
#### Length of time in which bug has been known to occur
<!--
Be specific if you approximately know the time it's been occurring
for—this can speed up finding the source. If you're not sure
about it, tell us too!
-->
Whenever you wish to zap yourself
#### Client version, Server revision & Game ID
<!-- Found with the "Show server revision" verb in the OOC tab in game. -->
Client Version: 511
Server Revision: 68953809e86159a9c8d8597afc0be1a88618ed91 - dev -
Game ID: bQl-bB7n
Current map: SEV Torch
#### Issue bingo
Please check whatever applies. More checkboxes checked increase your chances of the issue being looked at sooner.
<!-- Check these by writing an x inside the [ ] (like this: [x])-->
<!-- Don't forget to remove the space between the brackets, or it won't work! -->
- [x ] Issue could be reproduced at least once
- [ ] Issue could be reproduced by different players
- [x ] Issue could be reproduced in multiple rounds
- [x ] Issue happened in a recent (less than 7 days ago) round
- [ ] [Couldn't find an existing issue about this](https://github.com/Baystation12/Baystation12/issues)
|
non_process
|
dionas broken healing if a specific field doesn t apply remove it anything inside tags like these is a comment and will not be displayed in the final issue be careful not to write inside them joke or spammed issues can and will result in punishment put your answers on the blank lines below the headers the lines with four s don t edit them or delete them it s part of the formatting description of issue if you will get shocked one or more time s you will stop healing as dionea difference between expected and actual behavior heal no heal i want to heal steps to reproduce open some machine zap the shit out of yourself enjoy not healing specific information for locating vendors doors does not matter it must be from hacking i guess length of time in which bug has been known to occur be specific if you approximately know the time it s been occurring for—this can speed up finding the source if you re not sure about it tell us too whenever you wish to zap yourself client version server revision game id client version server revision dev game id bql current map sev torch issue bingo please check whatever applies more checkboxes checked increase your chances of the issue being looked at sooner issue could be reproduced at least once issue could be reproduced by different players issue could be reproduced in multiple rounds issue happened in a recent less than days ago round
| 0
|
241,471
| 18,457,603,610
|
IssuesEvent
|
2021-10-15 18:44:20
|
PennyLaneAI/pennylane
|
https://api.github.com/repos/PennyLaneAI/pennylane
|
closed
|
jax interface missing from QNode documentation
|
good first issue documentation :blue_book: Hacktoberfest
|
In the information about [interfaces in the QNode documentation](https://github.com/PennyLaneAI/pennylane/blob/949f5d52f3fd3806b051aa8d15cc5a7038d46832/pennylane/qnode.py#L53), the "jax" option is missing, even though the QNode can have a jax interface.
We need to have a bullet point for the `"jax"` option.
|
1.0
|
jax interface missing from QNode documentation - In the information about [interfaces in the QNode documentation](https://github.com/PennyLaneAI/pennylane/blob/949f5d52f3fd3806b051aa8d15cc5a7038d46832/pennylane/qnode.py#L53), the "jax" option is missing, even though the QNode can have a jax interface.
We need to have a bullet point for the `"jax"` option.
|
non_process
|
jax interface missing from qnode documentation in the information about the jax option is missing even though the qnode can have a jax interface we need to have a bullet point for the jax option
| 0
|
32,648
| 15,560,066,736
|
IssuesEvent
|
2021-03-16 12:18:28
|
treeverse/lakeFS
|
https://api.github.com/repos/treeverse/lakeFS
|
closed
|
Cache GetRepository, GetCommitByPrefix and GetCommit
|
performance
|
All these are in the critical path of most requests and are prime candidates for caching. Currently 50% of the time is spent on those for random reads.
To generate random read load:
`lakectl abuse random-read lakefs://repo1@a67a8fa0cfc598859a46ef652800f9d4e2d70db41a78c27f1500193118156068 --amount 1000000 --from-file ./randomfiles --parallelism $(nproc)`
While executing, generating a pprof profile using `http://<LAKEFS ADDR>/_pprof` and download a profile.
Commenting out the DB requests from these functions and returning a static repository/commit object increased throughput from ~11.5k r/s to ~18k r/s
|
True
|
Cache GetRepository, GetCommitByPrefix and GetCommit - All these are in the critical path of most requests and are prime candidates for caching. Currently 50% of the time is spent on those for random reads.
To generate random read load:
`lakectl abuse random-read lakefs://repo1@a67a8fa0cfc598859a46ef652800f9d4e2d70db41a78c27f1500193118156068 --amount 1000000 --from-file ./randomfiles --parallelism $(nproc)`
While executing, generating a pprof profile using `http://<LAKEFS ADDR>/_pprof` and download a profile.
Commenting out the DB requests from these functions and returning a static repository/commit object increased throughput from ~11.5k r/s to ~18k r/s
|
non_process
|
cache getrepository getcommitbyprefix and getcommit all these are in the critical path of most requests and are prime candidates for caching currently of the time is spent on those for random reads to generate random read load lakectl abuse random read lakefs amount from file randomfiles parallelism nproc while executing generating a pprof profile using addr pprof and download a profile commenting out the db requests from these functions and returning a static repository commit object increased throughput from r s to r s
| 0
|
26,562
| 2,684,871,078
|
IssuesEvent
|
2015-03-29 13:19:47
|
ConEmu/old-issues
|
https://api.github.com/repos/ConEmu/old-issues
|
closed
|
Too long cmd.exe start on new ConEmu version.
|
2–5 stars bug imported Priority-Medium
|
_From [ehysta](https://code.google.com/u/ehysta/) on June 02, 2013 08:31:00_
Required information! OS version: Win7 SP0 x64 ConEmu version: 130530 *Bug description* Too long delay before cmd.exe started: around 5 seconds (previous versions doesn't have such delay). *Steps to reproduction* 1. start timer
2. run ConEmu .exe
2. wait for shell prompt
3. stop timer
**Attachment:** [conemu delay.png](http://code.google.com/p/conemu-maximus5/issues/detail?id=1082)
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=1082_
|
1.0
|
Too long cmd.exe start on new ConEmu version. - _From [ehysta](https://code.google.com/u/ehysta/) on June 02, 2013 08:31:00_
Required information! OS version: Win7 SP0 x64 ConEmu version: 130530 *Bug description* Too long delay before cmd.exe started: around 5 seconds (previous versions doesn't have such delay). *Steps to reproduction* 1. start timer
2. run ConEmu .exe
2. wait for shell prompt
3. stop timer
**Attachment:** [conemu delay.png](http://code.google.com/p/conemu-maximus5/issues/detail?id=1082)
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=1082_
|
non_process
|
too long cmd exe start on new conemu version from on june required information os version conemu version bug description too long delay before cmd exe started around seconds previous versions doesn t have such delay steps to reproduction start timer run conemu exe wait for shell prompt stop timer attachment original issue
| 0
|
421,574
| 12,258,657,220
|
IssuesEvent
|
2020-05-06 15:25:24
|
wso2/product-apim
|
https://api.github.com/repos/wso2/product-apim
|
opened
|
APIM is setting operationId as the root element of the request
|
Priority/Normal Type/Improvement
|
### Describe your problem(s)
There is no place in the WSDL specification[1] which states that the root element should be taken from the 'name' attribute of the operation element for the SOAP 11 generated in sequence template. Therefore, for SOAP 11 we need to remove the Soap Action value from the root element for the generated in sequence template
[1] https://www.w3.org/TR/wsdl.html
### Describe your solution
<!-- Describe the feature/improvement -->
For SOAP 11 we need to remove the Soap Action value from the root element for the generated in sequence template
APIM version: APIM 2.6
|
1.0
|
APIM is setting operationId as the root element of the request - ### Describe your problem(s)
There is no place in the WSDL specification[1] which states that the root element should be taken from the 'name' attribute of the operation element for the SOAP 11 generated in sequence template. Therefore, for SOAP 11 we need to remove the Soap Action value from the root element for the generated in sequence template
[1] https://www.w3.org/TR/wsdl.html
### Describe your solution
<!-- Describe the feature/improvement -->
For SOAP 11 we need to remove the Soap Action value from the root element for the generated in sequence template
APIM version: APIM 2.6
|
non_process
|
apim is setting operationid as the root element of the request describe your problem s there is no place in wsdl specification which states that the root element should be taking from name the attribute of operation element for the soap generated in sequence template therefore for soap we need to remove the soap action value from the root element for the generated in sequence template describe your solution for soap we need to remove the soap action value from the root element for the generated in sequence template apim version apim
| 0
|
242,144
| 20,199,840,825
|
IssuesEvent
|
2022-02-11 14:15:11
|
eclipse-openj9/openj9
|
https://api.github.com/repos/eclipse-openj9/openj9
|
opened
|
OpenJDK FinalizationOption Test failed FinalizationOption.java:126
|
test failure triageRequired
|
https://openj9-jenkins.osuosl.org/job/Test_openjdk18_j9_sanity.openjdk_s390x_linux_Release/1 - [rh7-390-2](https://openj9-jenkins.osuosl.org/computer/rh7-390-2)
jdk_lang_1 `-Xdump:system:none -Xdump:heap:none -Xdump:system:events=gpf+abort+traceassert+corruptcache -XX:-JITServerTechPreviewMessage -XX:-UseCompressedOops`
java/lang/Object/FinalizationOption.java
```
17:34:17 ACTION: main -- Failed. Execution failed: `main' threw exception: java.lang.AssertionError: Test failed.
17:34:17 REASON: User specified action: run main/othervm --finalization=enabled FinalizationOption yes
17:34:17 TIME: 0.118 seconds
17:34:17 messages:
17:34:17 command: main --finalization=enabled FinalizationOption yes
17:34:17 reason: User specified action: run main/othervm --finalization=enabled FinalizationOption yes
17:34:17 Mode: othervm [/othervm specified]
17:34:17 elapsed time (seconds): 0.118
17:34:17 configuration:
17:34:17 STDOUT:
17:34:17 Finalizer thread. Expected: true Actual: (none) FAILED!
17:34:17 Call to finalize(). Expected: true Actual: true Passed.
17:34:17 STDERR:
17:34:17 java.lang.AssertionError: Test failed.
17:34:17 at FinalizationOption.main(FinalizationOption.java:126)
17:34:17 at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
17:34:17 at java.base/java.lang.reflect.Method.invoke(Method.java:577)
17:34:17 at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127)
17:34:17 at java.base/java.lang.Thread.run(Thread.java:889)
```
|
1.0
|
OpenJDK FinalizationOption Test failed FinalizationOption.java:126 - https://openj9-jenkins.osuosl.org/job/Test_openjdk18_j9_sanity.openjdk_s390x_linux_Release/1 - [rh7-390-2](https://openj9-jenkins.osuosl.org/computer/rh7-390-2)
jdk_lang_1 `-Xdump:system:none -Xdump:heap:none -Xdump:system:events=gpf+abort+traceassert+corruptcache -XX:-JITServerTechPreviewMessage -XX:-UseCompressedOops`
java/lang/Object/FinalizationOption.java
```
17:34:17 ACTION: main -- Failed. Execution failed: `main' threw exception: java.lang.AssertionError: Test failed.
17:34:17 REASON: User specified action: run main/othervm --finalization=enabled FinalizationOption yes
17:34:17 TIME: 0.118 seconds
17:34:17 messages:
17:34:17 command: main --finalization=enabled FinalizationOption yes
17:34:17 reason: User specified action: run main/othervm --finalization=enabled FinalizationOption yes
17:34:17 Mode: othervm [/othervm specified]
17:34:17 elapsed time (seconds): 0.118
17:34:17 configuration:
17:34:17 STDOUT:
17:34:17 Finalizer thread. Expected: true Actual: (none) FAILED!
17:34:17 Call to finalize(). Expected: true Actual: true Passed.
17:34:17 STDERR:
17:34:17 java.lang.AssertionError: Test failed.
17:34:17 at FinalizationOption.main(FinalizationOption.java:126)
17:34:17 at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
17:34:17 at java.base/java.lang.reflect.Method.invoke(Method.java:577)
17:34:17 at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127)
17:34:17 at java.base/java.lang.Thread.run(Thread.java:889)
```
|
non_process
|
openjdk finalizationoption test failed finalizationoption java jdk lang xdump system none xdump heap none xdump system events gpf abort traceassert corruptcache xx jitservertechpreviewmessage xx usecompressedoops java lang object finalizationoption java action main failed execution failed main threw exception java lang assertionerror test failed reason user specified action run main othervm finalization enabled finalizationoption yes time seconds messages command main finalization enabled finalizationoption yes reason user specified action run main othervm finalization enabled finalizationoption yes mode othervm elapsed time seconds configuration stdout finalizer thread expected true actual none failed call to finalize expected true actual true passed stderr java lang assertionerror test failed at finalizationoption main finalizationoption java at java base jdk internal reflect directmethodhandleaccessor invoke directmethodhandleaccessor java at java base java lang reflect method invoke method java at com sun javatest regtest agent mainwrapper mainthread run mainwrapper java at java base java lang thread run thread java
| 0
|
815
| 3,288,962,819
|
IssuesEvent
|
2015-10-29 17:03:52
|
spootTheLousy/saguaro
|
https://api.github.com/repos/spootTheLousy/saguaro
|
closed
|
Manager post doesn't allow html tags
|
Post/text processing
|
Manager posts are being sanitized the same way regular user posts are. I forget where exactly the text is sanitized in saguaro these days, but I'll look into it.
|
1.0
|
Manager post doesn't allow html tags - Manager posts are being sanitized the same way regular user posts are. I forget where exactly the text is sanitized in saguaro these days, but I'll look into it.
|
process
|
manager post doesn t allow html tags manager posts are being sanitized the same way regular user posts are i forget where exactly the text is sanitized in saguaro these days but i ll look into it
| 1
|
98,374
| 16,373,811,227
|
IssuesEvent
|
2021-05-15 17:39:30
|
hugh-whitesource/NodeGoat-1
|
https://api.github.com/repos/hugh-whitesource/NodeGoat-1
|
opened
|
WS-2019-0333 (High) detected in handlebars-4.0.5.tgz
|
security vulnerability
|
## WS-2019-0333 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.5.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.5.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.5.tgz</a></p>
<p>Path to dependency file: NodeGoat-1/package.json</p>
<p>Path to vulnerable library: NodeGoat-1/node_modules/nyc/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- grunt-if-0.2.0.tgz (Root Library)
- grunt-contrib-nodeunit-1.0.0.tgz
- nodeunit-0.9.5.tgz
- tap-7.1.2.tgz
- nyc-7.1.0.tgz
- istanbul-reports-1.0.0-alpha.8.tgz
- :x: **handlebars-4.0.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/hugh-whitesource/NodeGoat-1/commit/1acb8446b41e455d2f087e892c9a9ce80609f601">1acb8446b41e455d2f087e892c9a9ce80609f601</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In handlebars, versions prior to v4.5.3 are vulnerable to prototype pollution. Using a malicious template it's possible to add or modify properties to the Object prototype. This can also lead to DOS and RCE in certain conditions.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/f7f05d7558e674856686b62a00cde5758f3b7a08>WS-2019-0333</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1325">https://www.npmjs.com/advisories/1325</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.0.5","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-if:0.2.0;grunt-contrib-nodeunit:1.0.0;nodeunit:0.9.5;tap:7.1.2;nyc:7.1.0;istanbul-reports:1.0.0-alpha.8;handlebars:4.0.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handlebars - 4.5.3"}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2019-0333","vulnerabilityDetails":"In handlebars, versions prior to v4.5.3 are vulnerable to prototype pollution. Using a malicious template it\u0027s possbile to add or modify properties to the Object prototype. This can also lead to DOS and RCE in certain conditions.","vulnerabilityUrl":"https://github.com/wycats/handlebars.js/commit/f7f05d7558e674856686b62a00cde5758f3b7a08","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
WS-2019-0333 (High) detected in handlebars-4.0.5.tgz - ## WS-2019-0333 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.5.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.5.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.5.tgz</a></p>
<p>Path to dependency file: NodeGoat-1/package.json</p>
<p>Path to vulnerable library: NodeGoat-1/node_modules/nyc/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- grunt-if-0.2.0.tgz (Root Library)
- grunt-contrib-nodeunit-1.0.0.tgz
- nodeunit-0.9.5.tgz
- tap-7.1.2.tgz
- nyc-7.1.0.tgz
- istanbul-reports-1.0.0-alpha.8.tgz
- :x: **handlebars-4.0.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/hugh-whitesource/NodeGoat-1/commit/1acb8446b41e455d2f087e892c9a9ce80609f601">1acb8446b41e455d2f087e892c9a9ce80609f601</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In handlebars, versions prior to v4.5.3 are vulnerable to prototype pollution. Using a malicious template it's possible to add or modify properties to the Object prototype. This can also lead to DoS and RCE in certain conditions.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/f7f05d7558e674856686b62a00cde5758f3b7a08>WS-2019-0333</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1325">https://www.npmjs.com/advisories/1325</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.0.5","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-if:0.2.0;grunt-contrib-nodeunit:1.0.0;nodeunit:0.9.5;tap:7.1.2;nyc:7.1.0;istanbul-reports:1.0.0-alpha.8;handlebars:4.0.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handlebars - 4.5.3"}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2019-0333","vulnerabilityDetails":"In handlebars, versions prior to v4.5.3 are vulnerable to prototype pollution. Using a malicious template it\u0027s possbile to add or modify properties to the Object prototype. This can also lead to DOS and RCE in certain conditions.","vulnerabilityUrl":"https://github.com/wycats/handlebars.js/commit/f7f05d7558e674856686b62a00cde5758f3b7a08","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
ws high detected in handlebars tgz ws high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file nodegoat package json path to vulnerable library nodegoat node modules nyc node modules handlebars package json dependency hierarchy grunt if tgz root library grunt contrib nodeunit tgz nodeunit tgz tap tgz nyc tgz istanbul reports alpha tgz x handlebars tgz vulnerable library found in head commit a href found in base branch master vulnerability details in handlebars versions prior to are vulnerable to prototype pollution using a malicious template it s possible to add or modify properties to the object prototype this can also lead to dos and rce in certain conditions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree grunt if grunt contrib nodeunit nodeunit tap nyc istanbul reports alpha handlebars isminimumfixversionavailable true minimumfixversion handlebars basebranches vulnerabilityidentifier ws vulnerabilitydetails in handlebars versions prior to are vulnerable to prototype pollution using a malicious template it possible to add or modify properties to the object prototype this can also lead to dos and rce in certain conditions vulnerabilityurl
| 0
|
3,614
| 6,653,670,435
|
IssuesEvent
|
2017-09-29 09:22:18
|
uvacw/inca
|
https://api.github.com/repos/uvacw/inca
|
closed
|
integrate pattern for PY3
|
PROCESSORS
|
Pattern (https://github.com/clips/pattern) seems to be finally PY3-compatible, thus, we can transfer processors from legacy INCA to the recent version
|
1.0
|
integrate pattern for PY3 - Pattern (https://github.com/clips/pattern) seems to be finally PY3-compatible, thus, we can transfer processors from legacy INCA to the recent version
|
process
|
integrate pattern for pattern seems to be finally compatible thus we can transfer processors from legacy inca to the recent version
| 1
|