| Column | Dtype | Stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | lengths 19 – 19 |
| repo | string | lengths 7 – 112 |
| repo_url | string | lengths 36 – 141 |
| action | string | 3 classes |
| title | string | lengths 1 – 744 |
| labels | string | lengths 4 – 574 |
| body | string | lengths 9 – 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 – 211k |
| label | string | 2 classes |
| text | string | lengths 96 – 188k |
| binary_label | int64 | 0 – 1 |
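
A minimal sketch of how records with this schema might be loaded and queried with pandas; the file name `issue_events.csv` and the CSV format are assumptions for illustration, not something the dump specifies:

```python
import pandas as pd

# Hypothetical file name/format; the dump above only shows the schema and rows.
df = pd.read_csv("issue_events.csv")

# `label` ("process"/"non_process") and `binary_label` (1/0) encode the same
# target; `text` is the lowercased, tokenized form of `text_combine`
# (title + " - " + body).
process_issues = df[df["binary_label"] == 1]
print(process_issues[["created_at", "repo", "title", "label"]].head())
```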

---

**Row 221,966** · **id:** 24,673,486,040 · **type:** IssuesEvent · **created_at:** 2022-10-18 15:15:38
**repo:** [wagner-deoliveira/mobile-expo](https://api.github.com/repos/wagner-deoliveira/mobile-expo) · **action:** closed
**title:** CVE-2022-37616 (High) detected in xmldom-0.7.5.tgz - autoclosed
**labels:** security vulnerability
**body:**
## CVE-2022-37616 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmldom-0.7.5.tgz</b></p></summary>
<p>A pure JavaScript W3C standard-based (XML DOM Level 2 Core) DOMParser and XMLSerializer module.</p>
<p>Library home page: <a href="https://registry.npmjs.org/@xmldom/xmldom/-/xmldom-0.7.5.tgz">https://registry.npmjs.org/@xmldom/xmldom/-/xmldom-0.7.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/@xmldom/xmldom/package.json</p>
<p>
Dependency Hierarchy:
- expo-44.0.6.tgz (Root Library)
- metro-config-0.2.8.tgz
- config-6.0.6.tgz
- config-plugins-4.0.6.tgz
- plist-0.0.15.tgz
- :x: **xmldom-0.7.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/wagner-deoliveira/mobile-expo/commit/7fe36112d3d63b30268969f509f9eb39f8ee6eb8">7fe36112d3d63b30268969f509f9eb39f8ee6eb8</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A prototype pollution vulnerability exists in the function copy in dom.js in the xmldom (published as @xmldom/xmldom) package before 0.8.3 for Node.js via the p variable.
<p>Publish Date: 2022-10-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37616>CVE-2022-37616</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-37616">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-37616</a></p>
<p>Release Date: 2022-10-11</p>
<p>Fix Resolution: @xmldom/xmldom - 0.8.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
**index:** True · **label:** non_process · **binary_label:** 0

---

**Row 5,429** · **id:** 8,290,368,225 · **type:** IssuesEvent · **created_at:** 2018-09-19 17:09:11
**repo:** [aspnet/IISIntegration](https://api.github.com/repos/aspnet/IISIntegration) · **action:** opened
**title:** WindowsAuthTest test failure
**labels:** V1 out-of-process test-failure
**body:**
```
Failed Microsoft.AspNetCore.Server.IISIntegration.FunctionalTests.WindowsAuthTests.WindowsAuthTest(variant: Server: IISExpress, TFM: net461, Type: Standalone, Arch: x64, ANCM: V1, Host: OutOfProcess)
2018-09-19T16:17:09.3665092Z Error Message:
2018-09-19T16:17:09.3675680Z Assert.Equal() Failure
2018-09-19T16:17:09.3677081Z (pos 0)
2018-09-19T16:17:09.3681597Z Expected: Windows
2018-09-19T16:17:09.3684918Z Actual: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML ···
2018-09-19T16:17:09.3687977Z (pos 0)
2018-09-19T16:17:09.3695059Z Stack Trace:
2018-09-19T16:17:09.3700487Z at Microsoft.AspNetCore.Server.IISIntegration.FunctionalTests.WindowsAuthTests.WindowsAuthTest(TestVariant variant) in /_/test/IISExpress.FunctionalTests/WindowsAuthTests.cs:line 45
2018-09-19T16:17:09.3704443Z --- End of stack trace from previous location where exception was thrown ---
2018-09-19T16:17:09.3710641Z Standard Output Messages:
2018-09-19T16:17:09.3714449Z | [0.001s] TestLifetime Information: Starting test WindowsAuthTest-Server: IISExpress, TFM: net461, Type: Standalone, Arch: x64, ANCM: V1, Host: OutOfProcess at 2018-09-19T16:16:08
2018-09-19T16:17:09.3717600Z | [0.002s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: Deploying [Variation] :: ServerType=IISExpress, Runtime=Clr, Arch=x64, BaseUrlHint=, Publish=False
2018-09-19T16:17:09.3722531Z | [0.002s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: Executing: D:\a\1\s\test\WebSites\OutOfProcessWebSite\bin\x64\Release\net461\OutOfProcessWebSite.exe
2018-09-19T16:17:09.3726120Z | [0.002s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: ContentRoot: D:\a\1\s\test\WebSites\OutOfProcessWebSite
2018-09-19T16:17:09.3730150Z | [0.002s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: Attempting to start IIS Express on port: 2210
2018-09-19T16:17:09.3733564Z | [0.005s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Debug: Saving Config to C:\Users\VssAdministrator\AppData\Local\Temp\tmpADFA.tmp
2018-09-19T16:17:09.3739639Z | [0.006s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: Executing command : C:\Program Files\IIS Express\iisexpress.exe /site:HttpTestSite /config:C:\Users\VssAdministrator\AppData\Local\Temp\tmpADFA.tmp /trace:error /systray:false
2018-09-19T16:17:09.3743786Z | [0.006s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: Removing environment variable ASPNETCORE_ENVIRONMENT
2018-09-19T16:17:09.3747272Z | [0.006s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: SET ASPNETCORE_DETAILEDERRORS=true
2018-09-19T16:17:09.3752172Z | [0.007s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: SET ASPNETCORE_MODULE_DEBUG=console
2018-09-19T16:17:09.3755883Z | [0.007s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: SET LAUNCHER_PATH=D:\a\1\s\test\WebSites\OutOfProcessWebSite\bin\x64\Release\net461\OutOfProcessWebSite.exe
2018-09-19T16:17:09.3759824Z | [0.007s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: SET LAUNCHER_ARGS=
2018-09-19T16:17:09.3764319Z | [0.007s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: SET ASPNETCORE_CONTENTROOT=D:\a\1\s\test\WebSites\OutOfProcessWebSite
2018-09-19T16:17:09.3768953Z | [0.013s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress Process 6696 started
2018-09-19T16:17:09.3772500Z | [1.773s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress stdout: Starting IIS Express ...
2018-09-19T16:17:09.3778635Z | [2.185s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress stdout: Initializing the W3 Server Started CTC = 1750546
2018-09-19T16:17:09.3782843Z | [2.206s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress stdout: W3 Server initializing WinSock. CTC = 1750562
2018-09-19T16:17:09.3786668Z | [2.206s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress stdout: W3 Server WinSock initialized. CTC = 1750562
2018-09-19T16:17:09.3791008Z | [2.207s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress stdout: W3 Server ThreadPool initialized (ipm has signalled). CTC = 1750562
2018-09-19T16:17:09.3794752Z | [2.280s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress stdout: Start listenerChannel http:0
2018-09-19T16:17:09.3798083Z | [2.282s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress stdout: Successfully registered URL "http://localhost:2210/" for site "HttpTestSite" application "/"
2018-09-19T16:17:09.3802494Z | [2.283s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress stdout: Registration completed for site "HttpTestSite"
2018-09-19T16:17:09.3805724Z | [2.284s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress stdout: AppPool 'IISExpressAppPool' initialized
2018-09-19T16:17:09.3809460Z | [2.285s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress stdout: InitComplete event signalled
2018-09-19T16:17:09.3813436Z | [2.286s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress stdout: IIS Express is running.
2018-09-19T16:17:09.3816811Z | [2.286s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: Started iisexpress successfully. Process Id : 6696, Port: 2210
2018-09-19T16:17:09.3820270Z | [2.289s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: Application ready at URL: http://localhost:2210/
2018-09-19T16:17:09.3823987Z | [2.292s] HttpTestSite Debug: Method: GET, RequestUri: 'http://localhost:2210/Auth', Version: 2.0, Content: <null>, Headers:
2018-09-19T16:17:09.3827157Z | {
2018-09-19T16:17:09.3830498Z | }
2018-09-19T16:17:09.3834615Z | [2.294s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress stdout: IncrementMessages called
2018-09-19T16:17:09.3838077Z | [2.301s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress stdout: Request started: "GET" http://localhost:2210/Auth
2018-09-19T16:17:09.3841982Z | [59.558s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress stdout: Request ended: http://localhost:2210/Auth with HTTP status 502.5
2018-09-19T16:17:09.3845356Z | [59.565s] HttpTestSite Warning: StatusCode: 502, ReasonPhrase: 'Bad Gateway', Version: 1.1, Content: System.Net.Http.HttpConnection+HttpConnectionResponseContent, Headers:
2018-09-19T16:17:09.3849040Z | {
2018-09-19T16:17:09.3852447Z | Server: Microsoft-IIS/10.0
2018-09-19T16:17:09.3856298Z | X-SourceFiles: =?UTF-8?B?RDpcYVwxXHNcdGVzdFxXZWJTaXRlc1xPdXRPZlByb2Nlc3NXZWJTaXRlXEF1dGg=?=
2018-09-19T16:17:09.3860269Z | X-Powered-By: ASP.NET
2018-09-19T16:17:09.3863271Z | Date: Wed, 19 Sep 2018 16:17:08 GMT
2018-09-19T16:17:09.3866734Z | Content-Type: text/html
2018-09-19T16:17:09.3870212Z | Content-Length: 1468
2018-09-19T16:17:09.3873634Z | }
2018-09-19T16:17:09.3879540Z | [59.565s] HttpTestSite Warning: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <title> IIS 502.5 Error </title><style type="text/css"></style></head> <body> <div id = "content"> <div class = "content-container"><h3> HTTP Error 502.5 - Process Failure </h3></div> <div class = "content-container"> <fieldset> <h4> Common causes of this issue: </h4> <ul><li> The application process failed to start </li> <li> The application process started but then stopped </li> <li> The application process started but failed to listen on the configured port </li></ul></fieldset> </div> <div class = "content-container"> <fieldset><h4> Troubleshooting steps: </h4> <ul><li> Check the system event log for error messages </li> <li> Enable logging the application process' stdout messages </li> <li> Attach a debugger to the application process and inspect </li></ul></fieldset> <fieldset><h4> For more information visit: <a href="https://go.microsoft.com/fwlink/?linkid=808681"> <cite> https://go.microsoft.com/fwlink/?LinkID=808681 </cite></a></h4> </fieldset> </div> </div></body></html>
2018-09-19T16:17:09.3884314Z | [59.571s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: Attempting to cancel process 6696
2018-09-19T16:17:09.3887431Z | [59.647s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: iisexpress Process 6696 shut down
2018-09-19T16:17:09.3891863Z | [59.647s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: Host process shutting down.
2018-09-19T16:17:09.3895481Z | [59.647s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: Successfully terminated host process with process Id '6696'
2018-09-19T16:17:09.3899769Z | [59.647s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Debug: Deleting applicationHost.config file from C:\Users\VssAdministrator\AppData\Local\Temp\tmpADFA.tmp
2018-09-19T16:17:09.3903504Z | [59.647s] Microsoft.AspNetCore.Server.IntegrationTesting.IIS.IISExpressDeployer Information: [Time]: Total time taken for this test variation '59.644712' seconds
2018-09-19T16:17:09.3906572Z
```
**index:** 1.0 · **label:** process · **binary_label:** 1

---

**Row 7,410** · **id:** 10,660,781,072 · **type:** IssuesEvent · **created_at:** 2019-10-18 10:42:16
**repo:** [dropmino/Netbooks](https://api.github.com/repos/dropmino/Netbooks) · **action:** opened
**title:** Leave reviews
**labels:** Functional Requirement
**body:**
The system shall provide the possibility to leave reviews that contains:
- username of the authors;
- subject;
- object where are specified the reasons why the book was liked or not.
**index:** 1.0 · **label:** non_process · **binary_label:** 0

---

**Row 16,140** · **id:** 20,403,722,569 · **type:** IssuesEvent · **created_at:** 2022-02-23 01:03:02
**repo:** [fmnas/fmnas-site](https://api.github.com/repos/fmnas/fmnas-site) · **action:** closed
**title:** Asynchronous attachment upload
**labels:** enhancement public frontend backend form processor medium (3-8h)
**body:**
---
_This issue has been automatically created by [todo-actions](https://github.com/apps/todo-actions) based on a TODO comment found in [public/application/index.php:835](https://github.com/fmnas/fmnas-site/blob/main/public/application/index.php#L835). It will automatically be closed when the TODO comment is removed from the default branch (main)._
**index:** 1.0 · **label:** process · **binary_label:** 1

---

**Row 5,446** · **id:** 8,307,310,781 · **type:** IssuesEvent · **created_at:** 2018-09-23 07:30:56
**repo:** [u-root/u-bmc](https://api.github.com/repos/u-root/u-bmc) · **action:** opened
**title:** Figure out automated regression/integration testing
**labels:** process
**body:**
I'm not sure how we want to do this.
The options I've been thinking about are:
- Compile time redirection to use mocks.
This is essentially unit tests, and we should do them anyway.
- Using QEMU.
QEMU is very limited when it comes to the ast2400 and creating a pseudo platform for it risks missing a lot of things.
- Using gVisor.
A lot of work and forces us to re-implement the ast2400 in gVisor.
- Hardware
Hardware is the best option, but it's slow to work with. Nonetheless it seems like the most promising candidate right now.
When the u-boot console issue has been solved and we can verify that NC-SI works in u-boot, then it should be possible to netboot from u-boot. That should be fast enough for on-demand testing in some fashion.
**index:** 1.0 · **label:** process · **binary_label:** 1

---

**Row 8,276** · **id:** 11,431,851,552 · **type:** IssuesEvent · **created_at:** 2020-02-04 13:01:17
**repo:** [Graylog2/graylog2-server](https://api.github.com/repos/Graylog2/graylog2-server) · **action:** opened
**title:** SYSLOGBASE Grok pattern creates invalid "timestamp" value
**labels:** bug processing
**body:**
## Expected Behavior
Using the `SYSLOGBASE` Grok pattern should create a valid `timestamp` field value in the message object.
## Current Behavior
The `SYSLOGBASE` Grok pattern is creating the `timestamp` value as `String` type instead of a date type.
This is the current definition of the `SYSLOGBASE` Grok pattern:
```
%{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}:
```
The `SYSLOGTIMESTAMP` value will be written as `timestamp` in the message. Since there is no `date` converter on that pattern, the value type will be `String`.
In `Message#toElasticsearchObject()` we handle invalid `timestamp` fields by trying to parse the strings. If that fails, we use the current timestamp as `timestamp` value as a fallback.
## Possible Solution
One possible solution could be to modify the `SYSLOGBASE` pattern to include a `date` converter for the `SYSLOGTIMESTAMP` pattern. We do the same for parsing Apache log timestamps:
```
%{HTTPDATE:timestamp;date;dd/MMM/yyyy:HH:mm:ss Z}
```
The problem is, that we might need more than one date format to parse syslog dates. For single digit days we need `MMM d HH:mm:ss yyyy` and for multi digit days `MMM dd HH:mm:ss yyyy`.
Not sure if we can construct a single date format that can handle both.
Another way to fix this would be to adjust the Grok library we are using (we maintain a fork) and extend it to handle multiple date formats.
## Steps to Reproduce (for bugs)
1. Build an extractor or pipeline rule to parse syslog messages with a Grok pattern
2. Inspect the `timestamp` field
3. Check that indexed timestamp of the message isn't the same as the syslog timestamp (it will be the current time)
## Your Environment
* Graylog Version: 3.2.0
**index:** 1.0 · **label:** process · **binary_label:** 1

---

**Row 17,249** · **id:** 23,033,104,205 · **type:** IssuesEvent · **created_at:** 2022-07-22 15:41:45
**repo:** [open-telemetry/opentelemetry-collector-contrib](https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib) · **action:** closed
**title:** Filter on Span Kind
**labels:** enhancement processor/transform
**body:**
**Is your feature request related to a problem? Please describe.**
Filter on span kind
**Describe the solution you'd like**
Only capture spans for certain kind, (like consumer and server). This way, can get spanmetrics for just the entry points to each task.
**Describe alternatives you've considered**
Tried to use transform to add an attribute by matching on kind, but it doesn't appear to work on matching against kind.
**Additional context**
Add any other context or screenshots about the feature request here.
**index:** 1.0 · **label:** process · **binary_label:** 1

---

**Row 30,872** · **id:** 25,132,552,737 · **type:** IssuesEvent · **created_at:** 2022-11-09 16:07:24
**repo:** [getodk/collect](https://api.github.com/repos/getodk/collect) · **action:** opened
**title:** Rework module naming
**labels:** infrastructure
**body:**
Currently, we have modules named without separators, with a `-` separator and a `_` separator. We should make these consistent. Additionally, it would be good to consider changing `collect_app` to `app` to match with Android convention.
**index:** 1.0 · **label:** non_process · **binary_label:** 0

---

**Row 33,325** · **id:** 27,386,395,137 · **type:** IssuesEvent · **created_at:** 2023-02-28 13:36:53
**repo:** [centerofci/mathesar](https://api.github.com/repos/centerofci/mathesar) · **action:** opened
**title:** Steps to help user when raw.githubusercontent.com is blocked (It's blocked by certain network providers in India)
**labels:** type: enhancement work: infrastructure status: ready restricted: maintainers
**body:**
## Description
* raw.githubusercontent.com is geo-blocked by certain network providers in India. Requests to the domain get timed out.
* We use this content domain in our quick start command and also inside install.sh to download the docker-compose.yml file.
## Steps to take
1. We should update our quickstart command to be able to throw some error or warning if the request fails. Currently, it exits without printing anything when an error occurs, and the user has no idea what's going on.
1. We should update our docs or have an open issue (maybe this one), so that users facing this problem will try any workarounds posted. The domain is geo-blocked, some providers block it differently from others by just blocking it at the DNS level. Using a VPN should work for all cases.
1. We should try to host these 2 files separately in our own domain or with a different content provider, if possible.
**index:** 1.0 · **label:** non_process · **binary_label:** 0

---

**Row 20,684** · **id:** 27,355,922,525 · **type:** IssuesEvent · **created_at:** 2023-02-27 12:52:24
**repo:** [EBIvariation/eva-opentargets](https://api.github.com/repos/EBIvariation/eva-opentargets) · **action:** opened
**title:** Manual curation for 2022.04 release
**labels:** Processing
**body:**
Refer to [documentation](https://github.com/EBIvariation/eva-opentargets/tree/master/docs/manual-curation) for full description of steps.
**Checklist:**
- [ ] Step 1 — Process
- [ ] Step 2 — Curate
- [ ] Curation
- [ ] Review 1
- [ ] Review 2
- [ ] Step 3 — Export
- [ ] Step 4 — EFO feedback
**index:** 1.0 · **label:** process · **binary_label:** 1

---

**Row 196,599** · **id:** 14,881,153,389 · **type:** IssuesEvent · **created_at:** 2021-01-20 10:07:38
**repo:** [IntellectualSites/FastAsyncWorldEdit](https://api.github.com/repos/IntellectualSites/FastAsyncWorldEdit) · **action:** closed
**title:** #Biome and $ pattern are not working
**labels:** Requires Testing
**body:**
<!-- ⚠️⚠️ Do Not Delete This! You must follow this template. ⚠️⚠️ -->
<!--- Incomplete reports will be marked as invalid, and closed, with few exceptions.-->
<!--- If you are using 1.14 or 1.15 consider updating to 1.16.3 before raising an issue -->
<!--- The priority lays on 1.16 right now, so issues reported for or 1.15 will be fixed for the 1.16 versions -->
**[REQUIRED] FastAsyncWorldEdit Configuration Files**
<!--- Issue /fawe debugpaste in game or in your console and copy the supplied URL here -->
<!--- If you cannot perform the above, we require logs/latest.log; config.yml and config-legacy.yml -->
<!--- Please provide this information by using a paste service such as https://haste.athion.net -->
<!--- If you are unwilling to supply the information we need, we reserve the right to not assist you. Redact IP addresses if you need to. -->
**/fawe debugpaste**: https://athion.net/ISPaster/paste/view/eeaf26724fa34703b46dcd3cc50c2084
**Required Information**
- FAWE Version Number (`/version FastAsyncWorldEdit`): Bukkit-Official(1.16-560;6895fe3)
- Spigot/Paper Version Number (`/version`): git-Paper-434
- Minecraft Version: [e.g. 1.16.5] 1.16.5
**Describe the bug**
The #biome and thereby the $ pattern is not working correctly
**To Reproduce**
Steps to reproduce the behavior:
1. select a brush (surface, sphere) with the $ or #biome pattern
2. try to use the brush
3. see the error in the console
**Plugins being used on the server**
<!--- Optional but recommended - issue "/plugins" in-game or in console and copy/paste the list -->
Arceon, Builders-Utilities, BungeeTabListPlus, Essentials, FastAsyncWorldEdit (WorldEdit), goBrush, goPaint, HeadDatabase, Hyperverse, LegendEssentials, LegendSchematicScatter, LuckPerms, Vault, VoidGenerator, VoxelSniper, WorldGuard
**Checklist**:
<!--- Make sure you've completed the following steps (put an "X" between of brackets): -->
- [X] I included all information required in the sections above
- [X] I made sure there are no duplicates of this report [(Use Search)](https://github.com/IntellectualSites/FastAsyncWorldEdit/issues?q=is%3Aissue)
- [X] I made sure I am using an up-to-date version of [FastAsyncWorldEdit for 1.16.5](https://ci.athion.net/job/FastAsyncWorldEdit-1.16/)
- [X] I made sure the bug/error is not caused by any other plugin
|
1.0
|
#Biome and $ pattern are not working - <!-- ⚠️⚠️ Do Not Delete This! You must follow this template. ⚠️⚠️ -->
<!--- Incomplete reports will be marked as invalid, and closed, with few exceptions.-->
<!--- If you are using 1.14 or 1.15 consider updating to 1.16.3 before raising an issue -->
<!--- The priority lays on 1.16 right now, so issues reported for or 1.15 will be fixed for the 1.16 versions -->
**[REQUIRED] FastAsyncWorldEdit Configuration Files**
<!--- Issue /fawe debugpaste in game or in your console and copy the supplied URL here -->
<!--- If you cannot perform the above, we require logs/latest.log; config.yml and config-legacy.yml -->
<!--- Please provide this information by using a paste service such as https://haste.athion.net -->
<!--- If you are unwilling to supply the information we need, we reserve the right to not assist you. Redact IP addresses if you need to. -->
**/fawe debugpaste**: https://athion.net/ISPaster/paste/view/eeaf26724fa34703b46dcd3cc50c2084
**Required Information**
- FAWE Version Number (`/version FastAsyncWorldEdit`): Bukkit-Official(1.16-560;6895fe3)
- Spigot/Paper Version Number (`/version`): git-Paper-434
- Minecraft Version: [e.g. 1.16.5] 1.16.5
**Describe the bug**
The #biome and thereby the $ pattern is not working correctly
**To Reproduce**
Steps to reproduce the behavior:
1. select a brush (surface, sphere) with the $ or #biome pattern
2. try to use the brush
3. see the error in the console
**Plugins being used on the server**
<!--- Optional but recommended - issue "/plugins" in-game or in console and copy/paste the list -->
Arceon, Builders-Utilities, BungeeTabListPlus, Essentials, FastAsyncWorldEdit (WorldEdit), goBrush, goPaint, HeadDatabase, Hyperverse, LegendEssentials, LegendSchematicScatter, LuckPerms, Vault, VoidGenerator, VoxelSniper, WorldGuard
**Checklist**:
<!--- Make sure you've completed the following steps (put an "X" between of brackets): -->
- [X] I included all information required in the sections above
- [X] I made sure there are no duplicates of this report [(Use Search)](https://github.com/IntellectualSites/FastAsyncWorldEdit/issues?q=is%3Aissue)
- [X] I made sure I am using an up-to-date version of [FastAsyncWorldEdit for 1.16.5](https://ci.athion.net/job/FastAsyncWorldEdit-1.16/)
- [X] I made sure the bug/error is not caused by any other plugin
|
non_process
|
biome and pattern are not working fastasyncworldedit configuration files fawe debugpaste required information fawe version number version fastasyncworldedit bukkit official spigot paper version number version git paper minecraft version describe the bug the biome and thereby the pattern is not working correctly to reproduce steps to reproduce the behavior select a brush surface sphere with the or biome pattern try to use the brush see the error in the console plugins being used on the server arceon builders utilities bungeetablistplus essentials fastasyncworldedit worldedit gobrush gopaint headdatabase hyperverse legendessentials legendschematicscatter luckperms vault voidgenerator voxelsniper worldguard checklist i included all information required in the sections above i made sure there are no duplicates of this report i made sure i am using an up to date version of i made sure the bug error is not caused by any other plugin
| 0

---

**Row 8,394** · **id:** 11,564,946,679 · **type:** IssuesEvent · **created_at:** 2020-02-20 09:37:41
**repo:** [google/go-jsonnet](https://api.github.com/repos/google/go-jsonnet) · **action:** closed
**title:** `setup.py test` is deprecated
**labels:** process
**body:**
Part of tests.sh output:
```
+ python setup.py test
running test
WARNING: Testing via this command is deprecated and will be removed in a future version. Users looking for a generic test entry point independent of test runner are encouraged to use tox.
```
We should use something else then, I guess.
|
1.0
|
`setup.py test` is deprecated - Part of tests.sh output:
```
+ python setup.py test
running test
WARNING: Testing via this command is deprecated and will be removed in a future version. Users looking for a generic test entry point independent of test runner are encouraged to use tox.
```
We should use something else then, I guess.
|
process
|
setup py test is deprecated part of tests sh output python setup py test running test warning testing via this command is deprecated and will be removed in a future version users looking for a generic test entry point independent of test runner are encouraged to use tox we should use something else then i guess
| 1
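The record above flags `python setup.py test` as deprecated. A minimal sketch of one replacement for that step in tests.sh, assuming pytest is the chosen runner and the tests live under `tests/` (both assumptions, not taken from the repo):

```python
# Hypothetical drop-in for the "python setup.py test" step in tests.sh,
# assuming pytest is the chosen test runner and tests live under tests/.
import sys

import pytest

# pytest.main returns an exit code; propagate it so CI still fails on errors.
sys.exit(pytest.main(["tests/"]))
```

tox, which the deprecation warning itself recommends, would wrap the same invocation in isolated virtualenvs.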
|
135,128
| 5,242,647,060
|
IssuesEvent
|
2017-01-31 18:37:17
|
projectcalico/felix
|
https://api.github.com/repos/projectcalico/felix
|
closed
|
make release doesn't work on a mac
|
priority/P3
|
Make release fails on a mac for at least this reason:
```
# Check that the executable is correctly statically linked.
ldd bin/calico-felix | grep -q "not a dynamic executable"
/bin/sh: ldd: command not found
```
We should be containerizing all the things.
|
1.0
|
make release doesn't work on a mac - Make release fails on a mac for at least this reason:
```
# Check that the executable is correctly statically linked.
ldd bin/calico-felix | grep -q "not a dynamic executable"
/bin/sh: ldd: command not found
```
We should be containerizing all the things.
|
non_process
|
make release doesn t work on a mac make release fails on a mac for at least this reason check that the executable is correctly statically linked ldd bin calico felix grep q not a dynamic executable bin sh ldd command not found we should be containerizing all the things
| 0
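The record above suggests containerizing the static-link check so it no longer depends on `ldd` existing on the host. A minimal sketch, assuming Docker is available and the binary sits at `bin/calico-felix`; the image choice and paths are illustrative:

```python
# Run ldd inside a Linux container instead of on the (possibly macOS) host.
import os
import subprocess

binary_dir = os.path.abspath("bin")
result = subprocess.run(
    ["docker", "run", "--rm", "-v", f"{binary_dir}:/target:ro",
     "debian:stable-slim", "ldd", "/target/calico-felix"],
    capture_output=True, text=True,
)
# A statically linked binary makes glibc's ldd report "not a dynamic executable".
output = result.stdout + result.stderr
assert "not a dynamic executable" in output, output
```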
|
18,847
| 24,760,905,423
|
IssuesEvent
|
2022-10-22 00:03:04
|
prometheus-community/windows_exporter
|
https://api.github.com/repos/prometheus-community/windows_exporter
|
closed
|
Process collector serves different results on two near-identical servers
|
collector/process
|
I have two servers:
- Server A: Windows Server 2016, OS Build 14393.5291
- Server B: Windows Server 2016, OS Build 14393.5192
Both servers run the same IIS version and have windows_exporter 0.20 installed. I use the following config file on both installations:
```yaml
collectors:
enabled: cpu,os,cs,process,system,net,time,iis,memory,mssql
collector:
process:
whitelist: (w3wp|sqlservr).*
log:
level: warn
```
On server A, my metrics look like this:
```
...
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="16340"} 1.665100801756524e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="17080"} 1.665133994863656e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="1756"} 1.6651348187074437e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="17732"} 1.6651008015363705e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="19004"} 1.6651008015409024e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="20696"} 1.6651340591289651e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="21120"} 1.6651008016711247e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="23896"} 1.6651008016025145e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="26168"} 1.6651236574035077e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="27448"} 1.6651008016065788e+09
windows_process_start_time{creating_process_id="616",process="sqlservr",process_id="3140"} 1.6606089232324252e+09
...
```
On server B, they look like this:
```
...
windows_process_start_time{creating_process_id="2408",process="w3wp_Azure DevOps Server Application Pool",process_id="3316"} 1.66375170106868e+09
windows_process_start_time{creating_process_id="2408",process="w3wp_Grafana (rewrite)",process_id="14936"} 1.6651420802270162e+09
windows_process_start_time{creating_process_id="2408",process="w3wp_Prometheus (rewrite)",process_id="7236"} 1.6651423666179688e+09
windows_process_start_time{creating_process_id="712",process="sqlservr",process_id="3552"} 1.6577795374893532e+09
...
```
Server B for some reason also tells me the user (appPool name) that is running the w3wp.exe process, which is great and preferable, because it allows me to distinguish the process in Grafana. Why does server A completely omit this? Am I missing a setting somewhere?
|
1.0
|
Process collector serves different results on two near-identical servers - I have two servers:
- Server A: Windows Server 2016, OS Build 14393.5291
- Server B: Windows Server 2016, OS Build 14393.5192
Both servers run the same IIS version and have windows_exporter 0.20 installed. I use the following config file on both installations:
```yaml
collectors:
enabled: cpu,os,cs,process,system,net,time,iis,memory,mssql
collector:
process:
whitelist: (w3wp|sqlservr).*
log:
level: warn
```
On server A, my metrics look like this:
```
...
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="16340"} 1.665100801756524e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="17080"} 1.665133994863656e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="1756"} 1.6651348187074437e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="17732"} 1.6651008015363705e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="19004"} 1.6651008015409024e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="20696"} 1.6651340591289651e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="21120"} 1.6651008016711247e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="23896"} 1.6651008016025145e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="26168"} 1.6651236574035077e+09
windows_process_start_time{creating_process_id="2196",process="w3wp",process_id="27448"} 1.6651008016065788e+09
windows_process_start_time{creating_process_id="616",process="sqlservr",process_id="3140"} 1.6606089232324252e+09
...
```
On server B, they look like this:
```
...
windows_process_start_time{creating_process_id="2408",process="w3wp_Azure DevOps Server Application Pool",process_id="3316"} 1.66375170106868e+09
windows_process_start_time{creating_process_id="2408",process="w3wp_Grafana (rewrite)",process_id="14936"} 1.6651420802270162e+09
windows_process_start_time{creating_process_id="2408",process="w3wp_Prometheus (rewrite)",process_id="7236"} 1.6651423666179688e+09
windows_process_start_time{creating_process_id="712",process="sqlservr",process_id="3552"} 1.6577795374893532e+09
...
```
Server B for some reason also tells me the user (appPool name) that is running the w3wp.exe process, which is great and preferable, because it allows me to distinguish the process in Grafana. Why does server A completely omit this? Am I missing a setting somewhere?
|
process
|
process collector serves different results on two near identical servers i have two servers server a windows server os build server b windows server os build both servers run the same iis version and have windows exporter installed i use the following config file on both installations yaml collectors enabled cpu os cs process system net time iis memory mssql collector process whitelist sqlservr log level warn on server a my metrics look like this windows process start time creating process id process process id windows process start time creating process id process process id windows process start time creating process id process process id windows process start time creating process id process process id windows process start time creating process id process process id windows process start time creating process id process process id windows process start time creating process id process process id windows process start time creating process id process process id windows process start time creating process id process process id windows process start time creating process id process process id windows process start time creating process id process sqlservr process id on server b they look like this windows process start time creating process id process azure devops server application pool process id windows process start time creating process id process grafana rewrite process id windows process start time creating process id process prometheus rewrite process id windows process start time creating process id process sqlservr process id server b for some reason also tells me the user apppool name that is running the exe process which is great and preferable because it allows me to distinguish the process in grafana why does server a completely omit this am i missing a setting somewhere
| 1
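One way to sanity-check the `whitelist` setting from the record above is to apply the same regex to sample process names locally. This only illustrates the matching; it says nothing about how windows_exporter derives per-app-pool names for w3wp:

```python
import re

# The whitelist value from the config in the record above.
whitelist = re.compile(r"(w3wp|sqlservr).*")

samples = ["w3wp", "w3wp_Grafana (rewrite)", "sqlservr", "explorer"]
for name in samples:
    print(f"{name!r}: {bool(whitelist.match(name))}")
# Both the bare and the app-pool-suffixed w3wp names match, so the
# whitelist alone does not explain the difference between the servers.
```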
|
253,130
| 27,300,441,875
|
IssuesEvent
|
2023-02-24 01:09:26
|
panasalap/linux-4.19.72_1
|
https://api.github.com/repos/panasalap/linux-4.19.72_1
|
opened
|
CVE-2020-16119 (High) detected in linux-yoctov5.4.51
|
security vulnerability
|
## CVE-2020-16119 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/dccp/minisocks.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/dccp/minisocks.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Use-after-free vulnerability in the Linux kernel exploitable by a local attacker due to reuse of a DCCP socket with an attached dccps_hc_tx_ccid object as a listener after being released. Fixed in Ubuntu Linux kernel 5.4.0-51.56, 5.3.0-68.63, 4.15.0-121.123, 4.4.0-193.224, 3.13.0.182.191 and 3.2.0-149.196.
<p>Publish Date: 2021-01-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-16119>CVE-2020-16119</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-16119">https://nvd.nist.gov/vuln/detail/CVE-2020-16119</a></p>
<p>Release Date: 2021-01-14</p>
<p>Fix Resolution: linux-libc-headers - 5.14;linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-16119 (High) detected in linux-yoctov5.4.51 - ## CVE-2020-16119 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/dccp/minisocks.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/dccp/minisocks.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Use-after-free vulnerability in the Linux kernel exploitable by a local attacker due to reuse of a DCCP socket with an attached dccps_hc_tx_ccid object as a listener after being released. Fixed in Ubuntu Linux kernel 5.4.0-51.56, 5.3.0-68.63, 4.15.0-121.123, 4.4.0-193.224, 3.13.0.182.191 and 3.2.0-149.196.
<p>Publish Date: 2021-01-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-16119>CVE-2020-16119</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-16119">https://nvd.nist.gov/vuln/detail/CVE-2020-16119</a></p>
<p>Release Date: 2021-01-14</p>
<p>Fix Resolution: linux-libc-headers - 5.14;linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linux cve high severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in base branch master vulnerable source files net dccp minisocks c net dccp minisocks c vulnerability details use after free vulnerability in the linux kernel exploitable by a local attacker due to reuse of a dccp socket with an attached dccps hc tx ccid object as a listener after being released fixed in ubuntu linux kernel and publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux libc headers linux yocto gitautoinc gitautoinc step up your open source security game with mend
| 0
|
450
| 2,891,315,048
|
IssuesEvent
|
2015-06-15 03:25:50
|
vishal-uttamchandani/refactoring
|
https://api.github.com/repos/vishal-uttamchandani/refactoring
|
opened
|
Automate azure
|
Process improvement
|
Automate the following:
- Creation of console website
- Creation of api website (and application settings)
- Creation of documentdb, database, collections and udf (IsDefinedOrNotNull)
- Creation of redis cache
|
1.0
|
Automate azure - Automate the following:
- Creation of console website
- Creation of api website (and application settings)
- Creation of documentdb, database, collections and udf (IsDefinedOrNotNull)
- Creation of redis cache
|
process
|
automate azure automate the following creation of console website creation of api website and application settings creation of documentdb database collections and udf isdefinedornotnull creation of redis cache
| 1
|
58,621
| 7,166,160,125
|
IssuesEvent
|
2018-01-29 16:25:51
|
studiomotio/micro-motion
|
https://api.github.com/repos/studiomotio/micro-motion
|
opened
|
Demo site
|
design
|
Build the **micro-motion** demo site:
- [ ] photoshop template, with @shadox
- [ ] demo site, build at https://studiomotio.github.io/micro-motion
|
1.0
|
Demo site - Build the **micro-motion** demo site:
- [ ] photoshop template, with @shadox
- [ ] demo site, build at https://studiomotio.github.io/micro-motion
|
non_process
|
demo site build the micro motion demo site photoshop template with shadox demo site build at
| 0
|
18,584
| 24,566,022,978
|
IssuesEvent
|
2022-10-13 03:09:00
|
pyanodon/pybugreports
|
https://api.github.com/repos/pyanodon/pybugreports
|
closed
|
postprocessing fail: dependency loop with pushbutton
|
mod:pypostprocessing postprocess-fail compatibility
|
### Mod source
PyAE Beta
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [ ] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [X] pypostprocessing
- [ ] pyrawores
### Operating system
>=Windows 10
### What kind of issue is this?
- [X] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [ ] Balance
- [X] Pypostprocessing failure
- [ ] Other
### What is the problem?

### Steps to reproduce
1. Enable ByAE Beta from gdrive
2. Install pushbutton
3. Restart
### Additional context
_No response_
### Log file
_No response_
|
2.0
|
postprocessing fail: dependency loop with pushbutton - ### Mod source
PyAE Beta
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [ ] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [X] pypostprocessing
- [ ] pyrawores
### Operating system
>=Windows 10
### What kind of issue is this?
- [X] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [ ] Balance
- [X] Pypostprocessing failure
- [ ] Other
### What is the problem?

### Steps to reproduce
1. Enable ByAE Beta from gdrive
2. Install pushbutton
3. Restart
### Additional context
_No response_
### Log file
_No response_
|
process
|
postprocessing fail dependency loop with pushbutton mod source pyae beta which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem steps to reproduce enable byae beta from gdrive install pushbutton restart additional context no response log file no response
| 1
|
16,982
| 22,342,502,403
|
IssuesEvent
|
2022-06-15 03:19:12
|
streamnative/flink
|
https://api.github.com/repos/streamnative/flink
|
closed
|
[SQL Connector] Catalog Refactor
|
compute/data-processing type/feature
|
Refactor the catalog design and test the PulsarCatalog with the new SQL Connector
|
1.0
|
[SQL Connector] Catalog Refactor - Refactor the catalog design and test the PulsarCatalog with the new SQL Connector
|
process
|
catalog refactor refactor the catalog design and test the pulsarcatalog with the new sql connector
| 1
|
1,112
| 3,588,388,878
|
IssuesEvent
|
2016-01-31 00:05:44
|
osresearch/vst
|
https://api.github.com/repos/osresearch/vst
|
closed
|
use Processing transformation matrix
|
processing
|
Let the user call `translate()` and `rotate()` in Processing to adjust the position and orientation of the vectors.
|
1.0
|
use Processing transformation matrix - Let the user call `translate()` and `rotate()` in Processing to adjust the position and orientation of the vectors.
|
process
|
use processing transformation matrix let the user call translate and rotate in processing to adjust the position and orientation of the vectors
| 1
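The `translate()`/`rotate()` calls mentioned in the record above compose 2-D homogeneous transformation matrices. A small NumPy sketch of that composition, as a pure illustration rather than Processing's actual implementation:

```python
import numpy as np

def translate(tx: float, ty: float) -> np.ndarray:
    """Homogeneous 2-D translation matrix."""
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def rotate(theta: float) -> np.ndarray:
    """Homogeneous 2-D rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# A vector endpoint at (100, 0), rotated 45 degrees, then translated,
# matching the order in which Processing applies translate() and rotate().
point = np.array([100.0, 0.0, 1.0])
print(translate(50, 50) @ rotate(np.pi / 4) @ point)
```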
|
311,858
| 23,407,755,907
|
IssuesEvent
|
2022-08-12 14:24:22
|
kommitters/serverless-contact-email
|
https://api.github.com/repos/kommitters/serverless-contact-email
|
closed
|
Update repository documentation
|
📖 Documentation
|
ℹ️ This issue is part of Epic #23
## Objective
Update and add documentation to set guidelines for repository contributors, their respective code of conduct, and use license.
|
1.0
|
Update repository documentation - ℹ️ This issue is part of Epic #23
## Objective
Update and add documentation to set guidelines for repository contributors, their respective code of conduct, and use license.
|
non_process
|
update repository documentation ℹ️ this issue is part of epic objective update and add documentation to set guidelines for repository contributors their respective code of conduct and use license
| 0
|
659,372
| 21,923,653,376
|
IssuesEvent
|
2022-05-22 23:31:12
|
GrapheneOS/Camera
|
https://api.github.com/repos/GrapheneOS/Camera
|
closed
|
QR scan preview is misaligned with the QR scanning square overlay
|
bug priority-high
|
For the image analysis, we're using the center of the image, so the scanning square doesn't properly correspond to the region that's being used. We need to make sure the focus is being correctly aligned with the actual scanning square too.
|
1.0
|
QR scan preview is misaligned with the QR scanning square overlay - For the image analysis, we're using the center of the image, so the scanning square doesn't properly correspond to the region that's being used. We need to make sure the focus is being correctly aligned with the actual scanning square too.
|
non_process
|
qr scan preview is misaligned with the qr scanning square overlay for the image analysis we re using the center of the image so the scanning square doesn t properly correspond to the region that s being used we need to make sure the focus is being correctly aligned with the actual scanning square too
| 0
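The misalignment in the record above comes from analyzing a center crop of the image while drawing the overlay elsewhere. A sketch of the mapping both sides have to agree on; the coordinate conventions here are assumptions, not the app's actual code:

```python
def centered_square(width: int, height: int) -> tuple[int, int, int, int]:
    """Return (left, top, right, bottom) of the centered analysis square."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return left, top, left + side, top + side

# The preview overlay must be drawn over exactly this region (scaled to
# view coordinates), or the visible square and the scanned region diverge.
print(centered_square(1920, 1080))  # (420, 0, 1500, 1080)
```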
|
24,170
| 11,009,655,994
|
IssuesEvent
|
2019-12-04 13:08:04
|
directoryxx/Inventory-SISI
|
https://api.github.com/repos/directoryxx/Inventory-SISI
|
opened
|
CVE-2015-8861 (Medium) detected in handlebars-1.0.12.tgz
|
security vulnerability
|
## CVE-2015-8861 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-1.0.12.tgz</b></p></summary>
<p>Extension of the Mustache logicless template language</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-1.0.12.tgz">https://registry.npmjs.org/handlebars/-/handlebars-1.0.12.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/Inventory-SISI/assets/adminlte/bower_components/morris.js/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/Inventory-SISI/assets/adminlte/bower_components/morris.js/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- bower-1.2.8.tgz (Root Library)
- :x: **handlebars-1.0.12.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/directoryxx/Inventory-SISI/commit/7b48ea6a62895408dfb3f1fd18d7d7cb70464d46">7b48ea6a62895408dfb3f1fd18d7d7cb70464d46</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The handlebars package before 4.0.0 for Node.js allows remote attackers to conduct cross-site scripting (XSS) attacks by leveraging a template with an attribute that is not quoted.
<p>Publish Date: 2017-01-23
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8861>CVE-2015-8861</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/61">https://www.npmjs.com/advisories/61</a></p>
<p>Release Date: 2017-01-23</p>
<p>Fix Resolution: 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2015-8861 (Medium) detected in handlebars-1.0.12.tgz - ## CVE-2015-8861 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-1.0.12.tgz</b></p></summary>
<p>Extension of the Mustache logicless template language</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-1.0.12.tgz">https://registry.npmjs.org/handlebars/-/handlebars-1.0.12.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/Inventory-SISI/assets/adminlte/bower_components/morris.js/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/Inventory-SISI/assets/adminlte/bower_components/morris.js/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- bower-1.2.8.tgz (Root Library)
- :x: **handlebars-1.0.12.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/directoryxx/Inventory-SISI/commit/7b48ea6a62895408dfb3f1fd18d7d7cb70464d46">7b48ea6a62895408dfb3f1fd18d7d7cb70464d46</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The handlebars package before 4.0.0 for Node.js allows remote attackers to conduct cross-site scripting (XSS) attacks by leveraging a template with an attribute that is not quoted.
<p>Publish Date: 2017-01-23
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8861>CVE-2015-8861</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/61">https://www.npmjs.com/advisories/61</a></p>
<p>Release Date: 2017-01-23</p>
<p>Fix Resolution: 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in handlebars tgz cve medium severity vulnerability vulnerable library handlebars tgz extension of the mustache logicless template language library home page a href path to dependency file tmp ws scm inventory sisi assets adminlte bower components morris js package json path to vulnerable library tmp ws scm inventory sisi assets adminlte bower components morris js node modules handlebars package json dependency hierarchy bower tgz root library x handlebars tgz vulnerable library found in head commit a href vulnerability details the handlebars package before for node js allows remote attackers to conduct cross site scripting xss attacks by leveraging a template with an attribute that is not quoted publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
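The Handlebars advisory above hinges on unquoted attribute values. The class of bug is easy to see with plain string substitution; this stand-in is not Handlebars itself, just an illustration of why unquoted attributes are exploitable:

```python
# An unquoted attribute lets attacker input inject new attributes.
template_unquoted = "<img src={url}>"
template_quoted = '<img src="{url}">'

payload = "x onerror=alert(1)"
print(template_unquoted.format(url=payload))  # <img src=x onerror=alert(1)>
print(template_quoted.format(url=payload))    # attribute stays a single value
```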
|
19,320
| 25,470,096,812
|
IssuesEvent
|
2022-11-25 09:25:37
|
NEARWEEK/CORE
|
https://api.github.com/repos/NEARWEEK/CORE
|
closed
|
Create staking battle plan
|
Process
|
## 🎉 Subtasks
- [x] Create strategy for staking node
- [x] Map all top tiers partners we need to reach out to
## 🤼♂️ Reviewer
@Kisgus
|
1.0
|
Create staking battle plan - ## 🎉 Subtasks
- [x] Create strategy for staking node
- [x] Map all top tiers partners we need to reach out to
## 🤼♂️ Reviewer
@Kisgus
|
process
|
create staking battle plan 🎉 subtasks create strategy for staking node map all top tiers partners we need to reach out to 🤼♂️ reviewer kisgus
| 1
|
231,097
| 18,738,450,222
|
IssuesEvent
|
2021-11-04 10:40:01
|
Oldes/Rebol-issues
|
https://api.github.com/repos/Oldes/Rebol-issues
|
closed
|
Time overflow/underflow on date literals is not always correctly computed
|
Test.written Type.bug Datatype: date! CC.resolved
|
_Submitted by:_ **meijeru**
Time overflow/underflow is correctly handled when date variables have their time component set to a value outside 0:00 .. 23:59.
When specifying times outside these limits in a date literal, the situation is as follows:
``` rebol
- positive time values > 23:59 are consistently refused
- all negative time values are accepted, but no underflow is computed unless non-zero time-zone is specified (!)
```
``` rebol
>> 3-Jan-2010/30:00
** Syntax error: invalid "date" -- "3-Jan-2010/30:00"
>> 3-Jan-2010/30:00+1:0
** Syntax error: invalid "date" -- "3-Jan-2010/30:00+1:0"
>> 3-Jan-2010/-30:00
== 3-Jan-2010/-30:00
>> 3-Jan-2010/-30:00+1:0
== 1-Jan-2010/18:00+1:00
```
---
<sup>**Imported from:** **[CureCode](https://www.curecode.org/rebol3/ticket.rsp?id=1413)** [ Version: alpha 95 Type: Bug Platform: All Category: Datatype Reproduce: Always Fixed-in:alpha 97 ]</sup>
<sup>**Imported from**: https://github.com/rebol/rebol-issues/issues/1413</sup>
Comments:
---
> **Rebolbot** commented on Feb 4, 2010:
_Submitted by:_ **Carl**
In DATE! form, negative time is now an error.
---
> **Rebolbot** added the **Type.bug** on Jan 12, 2016
---
|
1.0
|
Time overflow/underflow on date literals is not always correctly computed - _Submitted by:_ **meijeru**
Time overflow/underflow is correctly handled when date variables have their time component set to a value outside 0:00 .. 23:59.
When specifying times outside these limits in a date literal, the situation is as follows:
``` rebol
- positive time values > 23:59 are consistently refused
- all negative time values are accepted, but no underflow is computed unless non-zero time-zone is specified (!)
```
``` rebol
>> 3-Jan-2010/30:00
** Syntax error: invalid "date" -- "3-Jan-2010/30:00"
>> 3-Jan-2010/30:00+1:0
** Syntax error: invalid "date" -- "3-Jan-2010/30:00+1:0"
>> 3-Jan-2010/-30:00
== 3-Jan-2010/-30:00
>> 3-Jan-2010/-30:00+1:0
== 1-Jan-2010/18:00+1:00
```
---
<sup>**Imported from:** **[CureCode](https://www.curecode.org/rebol3/ticket.rsp?id=1413)** [ Version: alpha 95 Type: Bug Platform: All Category: Datatype Reproduce: Always Fixed-in:alpha 97 ]</sup>
<sup>**Imported from**: https://github.com/rebol/rebol-issues/issues/1413</sup>
Comments:
---
> **Rebolbot** commented on Feb 4, 2010:
_Submitted by:_ **Carl**
In DATE! form, negative time is now an error.
---
> **Rebolbot** added the **Type.bug** on Jan 12, 2016
---
|
non_process
|
time overflow underflow on date literals is not always correctly computed submitted by meijeru time overflow underflow is correctly handled when date variables have their time component set to a value outside when specifying times outside these limits in a date literal the situation is as follows rebol positive time values are consistently refused all negative time values are accepted but no underflow is computed unless non zero time zone is specified rebol jan syntax error invalid date jan jan syntax error invalid date jan jan jan jan jan imported from imported from comments rebolbot commented on feb submitted by carl in date form negative time is now an error rebolbot added the type bug on jan
| 0
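For comparison with the underflow the ticket above expects, Python's `datetime` normalizes a negative time offset the same way the corrected Rebol behavior does:

```python
from datetime import datetime, timedelta

# 3-Jan-2010 with a -30:00 time component should roll back to 1-Jan-2010/18:00.
base = datetime(2010, 1, 3)
print(base + timedelta(hours=-30))  # 2010-01-01 18:00:00
```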
|
185,635
| 21,801,726,160
|
IssuesEvent
|
2022-05-16 06:20:40
|
ws-sultan/CompleteFoundVulnerabilitiesMarkdown
|
https://api.github.com/repos/ws-sultan/CompleteFoundVulnerabilitiesMarkdown
|
opened
|
express-session-1.13.0.tgz: 1 vulnerabilities (highest severity is: 9.1)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>express-session-1.13.0.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/base64-url/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/ws-sultan/CompleteFoundVulnerabilitiesMarkdown/commit/3ae42eb7b49ca7b63298116d58e642fbf7cc1d2d">3ae42eb7b49ca7b63298116d58e642fbf7cc1d2d</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [WS-2018-0111](https://hackerone.com/reports/321692) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.1 | base64-url-1.2.1.tgz | Transitive | 1.14.0 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> WS-2018-0111</summary>
### Vulnerable Library - <b>base64-url-1.2.1.tgz</b></p>
<p>Base64 encode, decode, escape and unescape for URL applications</p>
<p>Library home page: <a href="https://registry.npmjs.org/base64-url/-/base64-url-1.2.1.tgz">https://registry.npmjs.org/base64-url/-/base64-url-1.2.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/base64-url/package.json</p>
<p>
Dependency Hierarchy:
- express-session-1.13.0.tgz (Root Library)
- uid-safe-2.0.0.tgz
- :x: **base64-url-1.2.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ws-sultan/CompleteFoundVulnerabilitiesMarkdown/commit/3ae42eb7b49ca7b63298116d58e642fbf7cc1d2d">3ae42eb7b49ca7b63298116d58e642fbf7cc1d2d</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Versions of base64-url before 2.0.0 are vulnerable to out-of-bounds read as it allocates uninitialized Buffers when number is passed in input.
<p>Publish Date: 2018-05-16
<p>URL: <a href=https://hackerone.com/reports/321692>WS-2018-0111</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/660">https://nodesecurity.io/advisories/660</a></p>
<p>Release Date: 2018-01-27</p>
<p>Fix Resolution (base64-url): 2.0.0</p>
<p>Direct dependency fix Resolution (express-session): 1.14.0</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"express-session","packageVersion":"1.13.0","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"express-session:1.13.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.14.0","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"WS-2018-0111","vulnerabilityDetails":"Versions of base64-url before 2.0.0 are vulnerable to out-of-bounds read as it allocates uninitialized Buffers when number is passed in input.","vulnerabilityUrl":"https://hackerone.com/reports/321692","cvss3Severity":"high","cvss3Score":"9.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}]</REMEDIATE> -->
|
True
|
express-session-1.13.0.tgz: 1 vulnerabilities (highest severity is: 9.1) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>express-session-1.13.0.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/base64-url/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/ws-sultan/CompleteFoundVulnerabilitiesMarkdown/commit/3ae42eb7b49ca7b63298116d58e642fbf7cc1d2d">3ae42eb7b49ca7b63298116d58e642fbf7cc1d2d</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [WS-2018-0111](https://hackerone.com/reports/321692) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.1 | base64-url-1.2.1.tgz | Transitive | 1.14.0 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> WS-2018-0111</summary>
### Vulnerable Library - <b>base64-url-1.2.1.tgz</b></p>
<p>Base64 encode, decode, escape and unescape for URL applications</p>
<p>Library home page: <a href="https://registry.npmjs.org/base64-url/-/base64-url-1.2.1.tgz">https://registry.npmjs.org/base64-url/-/base64-url-1.2.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/base64-url/package.json</p>
<p>
Dependency Hierarchy:
- express-session-1.13.0.tgz (Root Library)
- uid-safe-2.0.0.tgz
- :x: **base64-url-1.2.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ws-sultan/CompleteFoundVulnerabilitiesMarkdown/commit/3ae42eb7b49ca7b63298116d58e642fbf7cc1d2d">3ae42eb7b49ca7b63298116d58e642fbf7cc1d2d</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Versions of base64-url before 2.0.0 are vulnerable to out-of-bounds read as it allocates uninitialized Buffers when number is passed in input.
<p>Publish Date: 2018-05-16
<p>URL: <a href=https://hackerone.com/reports/321692>WS-2018-0111</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/660">https://nodesecurity.io/advisories/660</a></p>
<p>Release Date: 2018-01-27</p>
<p>Fix Resolution (base64-url): 2.0.0</p>
<p>Direct dependency fix Resolution (express-session): 1.14.0</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"express-session","packageVersion":"1.13.0","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"express-session:1.13.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.14.0","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"WS-2018-0111","vulnerabilityDetails":"Versions of base64-url before 2.0.0 are vulnerable to out-of-bounds read as it allocates uninitialized Buffers when number is passed in input.","vulnerabilityUrl":"https://hackerone.com/reports/321692","cvss3Severity":"high","cvss3Score":"9.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}]</REMEDIATE> -->
|
non_process
|
express session tgz vulnerabilities highest severity is vulnerable library express session tgz path to dependency file package json path to vulnerable library node modules url package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high url tgz transitive details ws vulnerable library url tgz encode decode escape and unescape for url applications library home page a href path to dependency file package json path to vulnerable library node modules url package json dependency hierarchy express session tgz root library uid safe tgz x url tgz vulnerable library found in head commit a href found in base branch main vulnerability details versions of url before are vulnerable to out of bounds read as it allocates uninitialized buffers when number is passed in input publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url direct dependency fix resolution express session step up your open source security game with whitesource istransitivedependency false dependencytree express session isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier ws vulnerabilitydetails versions of url before are vulnerable to out of bounds read as it allocates uninitialized buffers when number is passed in input vulnerabilityurl
| 0
|
497
| 2,941,328,853
|
IssuesEvent
|
2015-07-02 07:02:57
|
PHPOffice/PHPWord
|
https://api.github.com/repos/PHPOffice/PHPWord
|
reopened
|
[Solved] Impossible to open /results/Sample_07_TemplateCloneRow.docx (windows only)
|
Bug Report Open XML (Word2007) Template Processor
|
Hello,
I have a problem with templates, using a WAMP server.
I don't have any issue with the same process with a LAMP server.
It's a LOCAL application, NOT on the internet.
Word documents have been tested with LibreOffice and OpenOffice, same results.
I'm working with the Sample_07_TemplateCloneRow example.
With Linux :
It works perfectly,
I can open the file via http://...., but it is a "read only" version (that's fine, but NOT what I need)
I can open, read and modify the "/results/Sample_07_TemplateCloneRow.docx" file in its folder (direct access in the folder)
With Windows 7 (I tried with 2 different computers, same issue)
I can open the file via http://, it works but it's NOT what I need !
I can save the document in my "downloads" folder. After that, I can open the new document but it is NOT what I need either !
I CANNOT open the "c:/[...]/results/Sample_07_TemplateCloneRow.docx", the file is "corrupted" (access denied) but it IS this one I need to open !
After many, many, many tests, I found a way to open those files :
right-click on the file -> Properties -> Security -> Modify -> Add a user -> I added my Windows "username" (my OS is in French, I'm not quite sure about the English labels !)
and then, it works, I can open my "c:/[...]/results/Sample_07_TemplateCloneRow.docx"
It works, but it's really not an easy way for daily use.
Templates (07 & 23) are the only ones that cause trouble.
E.g., "/results/Sample_06_Footnote.docx" didn't need any manipulation before being opened and has no owner either.
I have no problem with other generated files with or without PHPWord, only with those using TemplateProcessor.
I do not understand what the problem is, and find it a little ironic to have rights/permissions issues with Windows...
Has someone a solution with this issue ?
How can I solve it ?
Without this problem, PHPWord is awesome !
Windows 7 64bits
Wamp Server 2.5
Apache 2.4.9 (Win 32)
php 5.5.12
PHPWord Master 0.12.0
|
1.0
|
[Solved] Impossible to open /results/Sample_07_TemplateCloneRow.docx (windows only) - Hello,
I have a problem with templates, using a WAMP server.
I don't have any issue with the same process with a LAMP server.
It's a LOCAL application, NOT on the internet.
Word documents have been tested with LibreOffice and OpenOffice, same results.
I'm working with the Sample_07_TemplateCloneRow example.
With Linux :
It works perfectly,
I can open the file via http://...., but it is a "read only" version (that's fine, but NOT what I need)
I can open, read and modify the "/results/Sample_07_TemplateCloneRow.docx" file in its folder (direct access in the folder)
With Windows 7 (I tried with 2 different computers, same issue)
I can open the file via http://, it works but it's NOT what I need !
I can save the document in my "downloads" folder. After that, I can open the new document but it is NOT what I need either !
I CANNOT open the "c:/[...]/results/Sample_07_TemplateCloneRow.docx", the file is "corrupted" (access denied) but it IS this one I need to open !
After many, many, many tests, I found a way to open those files :
right-click on the file -> Properties -> Security -> Modify -> Add a user -> I added my Windows "username" (my OS is in French, I'm not quite sure about the English labels !)
and then, it works, I can open my "c:/[...]/results/Sample_07_TemplateCloneRow.docx"
It works, but it's really not an easy way for daily use.
Templates (07 & 23) are the only ones that cause trouble.
E.g., "/results/Sample_06_Footnote.docx" didn't need any manipulation before being opened and has no owner either.
I have no problem with other generated files with or without PHPWord, only with those using TemplateProcessor.
I do not understand what the problem is, and find it a little ironic to have rights/permissions issues with Windows...
Has someone a solution with this issue ?
How can I solve it ?
Without this problem, PHPWord is awesome !
Windows 7 64bits
Wamp Server 2.5
Apache 2.4.9 (Win 32)
php 5.5.12
PHPWord Master 0.12.0
|
process
|
impossible to open results sample templateclonerow docx windows only hello i have a problem with templates using a wamp server i don t have any issue with the same process with a lamp server it s a local application not on the internet word documents have been tested with libreoffice and openoffice same results i m working with the sample templateclonerow example with linux it works perfectly i can open the file via but it is a read only version that s fine but not what i need i can open read and modify the results sample templateclonerow docx file in his folder direct access in the folder with windows i tried with different computers same issue i can open the file via it works but it s not what i need i can save the document in my downloads folder after that i can open the new document but it is not what i need neither i cannot open the c results sample templateclonerow docx the file is corrupted access denied but it is this one i need to open after many many many tests i found a way to open those files right click on the file properties security modify add a user i added my windows username my os is in french i m not quite sure with the english labels and then it works i can open my c results sample templateclonerow docx it works but it s really not an easy way for a daily use templates are the only ones who makes troubles ie results sample footnote docx didn t need any manipulation before beeing open and have no owner neither i have no problem with other generated files with or without phpword only with those using templateprocessor i do not understand what s the problem and think a little bit ironic to have rights permissions issues with windows has someone a solution with this issue how can i solve it without this problem phpword is awesome windows wamp server apache win php phpword master
| 1
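The manual workaround in the record above (Properties -> Security -> add the user) can be scripted. A hedged sketch using Windows' `icacls`, with the file path and account name purely illustrative:

```python
import subprocess

# Grant the interactive user read access to the generated document,
# mirroring the manual Security-tab workaround described above.
path = r"C:\wamp\www\results\Sample_07_TemplateCloneRow.docx"  # illustrative
subprocess.run(["icacls", path, "/grant", r"MYPC\myuser:R"], check=True)
```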
|
5,807
| 8,643,541,067
|
IssuesEvent
|
2018-11-25 18:55:20
|
gfrebello/qs-trip-planning-procedure
|
https://api.github.com/repos/gfrebello/qs-trip-planning-procedure
|
closed
|
Implement trip saving functionality for flights
|
Priority:Very High Process:Implement Requirement
|
At the end of the trip planning, the system must get all the information about the user's trip, and save it in the database. In other words, it must create a Trip entity relative to the user's trip, create flight reservations and reserve the seats the user selected for each reservation
|
1.0
|
Implement trip saving functionality for flights - At the end of the trip planning, the system must get all the information about the user's trip, and save it in the database. In other words, it must create a Trip entity relative to the user's trip, create flight reservations and reserve the seats the user selected for each reservation
|
process
|
implement trip saving functionality for flights at the end of the trip planning the system must get all the information about the user s trip and save it in the database in other words it must create a trip entity relative to the user s trip create flight reservations and reserve the seats the user selected for each reservation
| 1
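A minimal sketch of the entities the requirement above names; every class, field, and function name here is hypothetical and not taken from the project:

```python
from dataclasses import dataclass, field

@dataclass
class FlightReservation:
    flight_number: str
    seats: list[str]  # the seats the user selected for this reservation

@dataclass
class Trip:
    user_id: int
    reservations: list[FlightReservation] = field(default_factory=list)

def save_trip(trip: Trip) -> None:
    """Persist the Trip, its reservations, and the seat holds atomically."""
    # In the real system this would run inside one database transaction.
    ...
```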
|
197,340
| 14,919,388,693
|
IssuesEvent
|
2021-01-22 23:59:54
|
rclone/rclone
|
https://api.github.com/repos/rclone/rclone
|
closed
|
Error opening MS Office documents (or their gdoc versions) from the fuse mount of a gdrive remote
|
VFS / mount bug needs retest waiting for reply...
|
Hi,
I am on fedora 26, using both release 1.38 and last git revisions of rclone.
LibreOffice is version 5.3.6.1 (did not try v5.4.x yet, will need to wait fedora 27 for that).
When trying to open MS Office docs from a fuse mount of a gdrive remote, I get the following error message from LibreOffice: `General input/output error while accessing /home/sheepdestroyer/Vrac/interactiv/Gdrive-Mount/ICS_CNX_EXP-Meeting - 20170920-ORI.docx.`
If trying to open the gsheet version, I get the same error : `General input/output error while accessing /home/sheepdestroyer/Vrac/interactiv/Gdrive-Mount/ICS_CNX_EXP-Meeting - 20170920-ORI.docx.docx.`
I have the same problem with vsd files (which should not get converted anyway) : `General input/output error while accessing /home/sheepdestroyer/Vrac/interactiv/Gdrive-Mount/Procs/Copy of NET_L3.vsd.`
I can directly open .txt files ; but their gdoc versions get renamed .txt.docx and I get the same error trying to open them : `General input/output error while accessing /home/sheepdestroyer/Vrac/interactiv/Gdrive-Mount/Procs/arai2c.txt.docx.`
To be precise: I can open these documents if I first copy them from my fuse mount to a local folder.
|
1.0
|
Error opening MS Office documents (or their gdoc versions) from the fuse mount of a gdrive remote - Hi,
I am on fedora 26, using both release 1.38 and last git revisions of rclone.
LibreOffice is version 5.3.6.1 (did not try v5.4.x yet, will need to wait fedora 27 for that).
When trying to open MS Office docs from a fuse mount of a gdrive remote, I get the following error message from LibreOffice: `General input/output error while accessing /home/sheepdestroyer/Vrac/interactiv/Gdrive-Mount/ICS_CNX_EXP-Meeting - 20170920-ORI.docx.`
If trying to open the gsheet version, I get the same error : `General input/output error while accessing /home/sheepdestroyer/Vrac/interactiv/Gdrive-Mount/ICS_CNX_EXP-Meeting - 20170920-ORI.docx.docx.`
I have the same problem with vsd files (which should not get converted anyway) : `General input/output error while accessing /home/sheepdestroyer/Vrac/interactiv/Gdrive-Mount/Procs/Copy of NET_L3.vsd.`
I can directly open .txt files ; but their gdoc versions get renamed .txt.docx and I get the same error trying to open them : `General input/output error while accessing /home/sheepdestroyer/Vrac/interactiv/Gdrive-Mount/Procs/arai2c.txt.docx.`
To be precise: I can open these documents if I first copy them from my fuse mount to a local folder.
|
non_process
|
error opening ms office documents or their gdoc versions from the fuse mount of a gdrive remote hi i am on fedora using both release and last git revisions of rclone libreoffice is version did not try x yet will need to wait fedora for that when trying to open ms office docs from a fuse mount of a gdrive remote i get the followinf error message from libreoffice general input output error while accessing home sheepdestroyer vrac interactiv gdrive mount ics cnx exp meeting ori docx if trying to open the gsheet version i get the same error general input output error while accessing home sheepdestroyer vrac interactiv gdrive mount ics cnx exp meeting ori docx docx i have the same problem with vsd files which should not get converted anyway general input output error while accessing home sheepdestroyer vrac interactiv gdrive mount procs copy of net vsd i can directly open txt files but their gdoc versions get renamed txt docx and i get the same error trying to open them general input output error while accessing home sheepdestroyer vrac interactiv gdrive mount procs txt docx precision i can open these documents if i first make a copy of them from my fuse mount to local folder first
| 0
|
4,029
| 6,963,180,081
|
IssuesEvent
|
2017-12-08 16:25:51
|
pwittchen/prefser
|
https://api.github.com/repos/pwittchen/prefser
|
closed
|
Release 2.2.0-rx2
|
release process
|
**Initial release notes**:
- Allow to pass custom instance of Gson object - PR #123
- Updated project dependencies
- RxJava -> 2.1.7
- Gson -> 2.8.2
- Support Annotations -> 27.0.2
- AppCompat v7 -> 27.0.2
- Truth -> 0.36
- Mockito Core -> 2.13.0
- Updated Gradle to 3.0
**Things to do**:
- [x] bump library version
- [x] upload Archives to Maven Central Repository
- [x] close and release artifact on Nexus
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md` after Maven Sync
- [x] create new GitHub release
|
1.0
|
Release 2.2.0-rx2 - **Initial release notes**:
- Allow to pass custom instance of Gson object - PR #123
- Updated project dependencies
- RxJava -> 2.1.7
- Gson -> 2.8.2
- Support Annotations -> 27.0.2
- AppCompat v7 -> 27.0.2
- Truth -> 0.36
- Mockito Core -> 2.13.0
- Updated Gradle to 3.0
**Things to do**:
- [x] bump library version
- [x] upload Archives to Maven Central Repository
- [x] close and release artifact on Nexus
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md` after Maven Sync
- [x] create new GitHub release
|
process
|
release initial release notes allow to pass custom instance of gson object pr updated project dependencies rxjava gson support annotations appcompat truth mockito core updated gradle to things to do bump library version upload archives to maven central repository close and release artifact on nexus update changelog md after maven sync bump library version in readme md after maven sync create new github release
| 1
|
215,570
| 7,295,359,932
|
IssuesEvent
|
2018-02-26 06:23:28
|
ballerina-lang/ballerina
|
https://api.github.com/repos/ballerina-lang/ballerina
|
closed
|
Allow default values for ref-type struct fields
|
Priority/Highest Type/Improvement
|
Default values are supported only for value-type struct fields. Needs $subject as well
|
1.0
|
Allow default values for ref-type struct fields - Default values are supported only for value-type struct fields. Needs $subject as well
|
non_process
|
allow default values for ref type struct fields default values are supported only for value type struct fields needs subject as well
| 0
|
186,751
| 15,083,158,078
|
IssuesEvent
|
2021-02-05 15:28:56
|
gianlucadetommaso/volatile
|
https://api.github.com/repos/gianlucadetommaso/volatile
|
closed
|
Do you have plans to provide an interactive Jupyter notebook to dive into the model construction?
|
documentation
|
The official TensorFlow website provides an example of multilevel modeling:
https://www.tensorflow.org/probability/examples/Multilevel_Modeling_Primer
It interprets the model's components by visualizing them separately.
I think you should provide a similar walkthrough for your model,
letting users validate the model construction and adapt it to their own applications.
A guide to the feature selection and aggregation methods in your model,
with visualizations, would be even better.
|
1.0
|
Do you have plans to provide an interactive Jupyter notebook to dive into the model construction? - The official TensorFlow website provides an example of multilevel modeling:
https://www.tensorflow.org/probability/examples/Multilevel_Modeling_Primer
It interprets the model's components by visualizing them separately.
I think you should provide a similar walkthrough for your model,
letting users validate the model construction and adapt it to their own applications.
A guide to the feature selection and aggregation methods in your model,
with visualizations, would be even better.
|
non_process
|
do you have plan to provide a interact jupyter notebook to dive into the model construction the tensorflow official website provide a example about multilevel modeling it interpret its components by visualize them separately i think you should also provide a correspondence about your model let it to validate your model construction and change to adapt their own applications even if you have a guide about the feature selection and aggregate method in your model with visualize is more bravo
| 0
|
8,982
| 12,099,739,046
|
IssuesEvent
|
2020-04-20 12:43:03
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Introspection is recognizing a `DECIMAL(5, 3)` in SQLite as a String
|
bug/2-confirmed kind/bug process/candidate topic: introspection
|
```sql
create table exercises (
id integer primary key not null,
distance decimal(5, 3) not null
);
insert into exercises (distance) values (12.213);
```
When introspecting it will make `distance` a String
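For context, a minimal TypeScript sketch of the downstream effect (hypothetical client code; the model name `exercises` is assumed to come from the introspected table):
```ts
// Hypothetical sketch: with the mis-introspected schema, the generated
// client types `distance` as string, so numeric use needs a manual cast.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function main() {
  const exercise = await prisma.exercises.findFirst();
  // `distance` arrives as a string here instead of a number:
  const km = parseFloat(exercise!.distance) + 1; // workaround cast
  console.log(km);
}

main();
```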
|
1.0
|
Introspection is recognizing a `DECIMAL(5, 3)` in SQLite as a String - ```sql
create table exercises (
id integer primary key not null,
distance decimal(5, 3) not null
);
insert into exercises (distance) values (12.213);
```
When introspecting it will make `distance` a String
|
process
|
introspection is recognizing a decimal in sqlite as a string sql create table exercises id integer primary key not null distance decimal not null insert into exercises distance values when introspecting it will make distance a string
| 1
|
20,230
| 26,832,743,876
|
IssuesEvent
|
2023-02-02 17:06:37
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
AutoSync doesn't work with GitHub
|
automation/svc triaged cxp doc-bug process-automation/subsvc Pri2
|
AutoSync to GitHub fails when it is turned on using a PAT: 'An error occurred while updating the source control'.
Manual sync works.
When authenticating with a user account instead, autosync turns on and works.
The GitHub PAT is set with the 'Minimum PAT Permissions for GitHub' described in this article. Are there additional permissions required for autosync to work in GitHub with a PAT?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 83c90e64-b615-711f-a53d-fc76606e2ecd
* Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea
* Content: [Use source control integration in Azure Automation](https://learn.microsoft.com/en-us/azure/automation/source-control-integration)
* Content Source: [articles/automation/source-control-integration.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/source-control-integration.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
1.0
|
AutoSync doesn't work with GitHub -
AutoSync to GitHub fails when it is turned on using a PAT: 'An error occurred while updating the source control'.
Manual sync works.
When authenticating with a user account instead, autosync turns on and works.
The GitHub PAT is set with the 'Minimum PAT Permissions for GitHub' described in this article. Are there additional permissions required for autosync to work in GitHub with a PAT?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 83c90e64-b615-711f-a53d-fc76606e2ecd
* Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea
* Content: [Use source control integration in Azure Automation](https://learn.microsoft.com/en-us/azure/automation/source-control-integration)
* Content Source: [articles/automation/source-control-integration.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/source-control-integration.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
process
|
autosync doesn t work with github autosync to github fails when trying to be turned on using pat an error occurred while updating the source control manual sync works authenticating with user account then autosync will turn on and work github pat set with minimum pat permissions for github as from this article are there additional permissions required for autosync to work in github with pat document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login snehasudhirg microsoft alias sudhirsneha
| 1
|
18,224
| 24,284,855,830
|
IssuesEvent
|
2022-09-28 20:57:22
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/transform] Add support for toggling scalar value types
|
enhancement good first issue priority:p2 processor/transform
|
**Is your feature request related to a problem? Please describe.**
When collecting from Prometheus, a `value_type` is assigned, sometimes incorrectly. Using this processor, it should be possible to toggle a metric's value from double to int or the other way around.
**Describe the solution you'd like**
implementation of either the ability to use the set function to change it, or a new function to toggle it for a metric separate from the set function.
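As a rough illustration of the requested toggle semantics (the collector itself is written in Go; this TypeScript sketch only models the int/double conversion, and all names here are illustrative, not the processor's API):
```ts
// Sketch of the int <-> double toggle requested above; truncating on the
// double -> int direction is one possible (lossy) choice.
type ScalarValue =
  | { kind: "int"; value: number }
  | { kind: "double"; value: number };

function toggleValueType(v: ScalarValue): ScalarValue {
  return v.kind === "int"
    ? { kind: "double", value: v.value }
    : { kind: "int", value: Math.trunc(v.value) };
}
```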
|
1.0
|
[processor/transform] Add support for toggling scalar value types - **Is your feature request related to a problem? Please describe.**
When collecting from Prometheus, a `value_type` is assigned, sometimes incorrectly. Using this processor, it should be possible to toggle a metric's value from double to int or the other way around.
**Describe the solution you'd like**
implementation of either the ability to use the set function to change it, or a new function to toggle it for a metric separate from the set function.
|
process
|
add support for toggling scalar value types is your feature request related to a problem please describe when collecting from prometheus a value type is assigned and sometimes incorrectly using this processor it should be possible to toggle from double to int or the other way around in this processor describe the solution you d like implementation of either the ability to use the set function to change it or a new function to toggle it for a metric separate from the set function
| 1
|
204,174
| 23,218,405,335
|
IssuesEvent
|
2022-08-02 15:51:34
|
kcp-dev/kcp
|
https://api.github.com/repos/kcp-dev/kcp
|
closed
|
💣 – workload labels+finalizers are not fully qualified
|
kind/bug area/transparent-multi-cluster area/security bugzilla/severity-high
|
- label`state.internal.workload.kcp.dev/<workload-cluster-name>`
- finalizer `workload.kcp.dev/syncer-<workload-cluster-name>`
We should generally fully qualify these, and hash the result (see workspace initializer labels as example).
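A sketch of the hashing idea, assuming a SHA-256 prefix is acceptable (the concrete scheme used for the workspace initializer labels may differ):
```ts
import { createHash } from "crypto";

// Kubernetes label values are limited to 63 characters, so a long,
// fully qualified workload cluster name is reduced to a short stable hash
// rather than embedded verbatim in the label or finalizer.
function workloadStateLabelValue(workloadClusterName: string): string {
  return createHash("sha256")
    .update(workloadClusterName)
    .digest("hex")
    .slice(0, 8);
}

// e.g. label `state.workload.kcp.dev/<hash>` instead of the raw cluster name
```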
|
True
|
💣 – workload labels+finalizers are not fully qualified - - label`state.internal.workload.kcp.dev/<workload-cluster-name>`
- finalizer `workload.kcp.dev/syncer-<workload-cluster-name>`
We should generally fully qualify these, and hash the result (see workspace initializer labels as example).
|
non_process
|
💣 – workload labels finalizers are not fully qualified label state internal workload kcp dev finalizer workload kcp dev syncer we should generally fully qualify these and hash the result see workspace initializer labels as example
| 0
|
12,867
| 15,254,923,718
|
IssuesEvent
|
2021-02-20 14:03:36
|
ssibongee/freelec-springboot2-webservice
|
https://api.github.com/repos/ssibongee/freelec-springboot2-webservice
|
opened
|
Building screens with Mustache
|
in processing
|
### 1. Template engines
* A template engine is software that combines a template with data to produce an HTML document; template engines are broadly divided into server-side and client-side engines.
* JSP and Freemarker are server-side template engines, while Vue and React are client-side template engines.
* The difference lies in where the code runs: with a server-side template engine, the server builds a string in Java code, converts it to HTML, and delivers it to the browser; any JavaScript code is treated as a plain string along the way.
* JavaScript runs in the browser, and since the place it executes is the browser rather than the server, once it runs there it is beyond the server-side template engine's control.
* An SPA built with Vue or React renders the page in the browser; the code has already left the server, so the server only sends JSON or XML data and the client assembles it.
* Recently, JavaScript frameworks such as React and Vue also support server-side rendering.
### 2. Mustache
* Mustache is the simplest template engine and supports a great many languages.
* Mustache is simpler than other template engines; since logic code cannot be embedded, it cleanly separates the role of the View from the role of the server.
* Mustache has both JavaScript and Java implementations, so a single syntax covers both client and server templates (a minimal example follows this list).
* A template engine should stick to rendering the view; if it provides too many features, logic ends up split between the API, the template engine, and JavaScript, making maintenance harder.
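A minimal sketch of the logic-less style described above, assuming the standard `Mustache.render` API of the `mustache` npm package (the same template syntax works in the Java implementation):
```ts
import Mustache from "mustache";

// The template holds no logic beyond sections and variables;
// all data is prepared by the caller.
const template =
  "<h1>{{title}}</h1><ul>{{#posts}}<li>{{name}}</li>{{/posts}}</ul>";

const html = Mustache.render(template, {
  title: "Latest posts",
  posts: [{ name: "First" }, { name: "Second" }],
});

console.log(html);
// <h1>Latest posts</h1><ul><li>First</li><li>Second</li></ul>
```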
|
1.0
|
Building screens with Mustache - ### 1. Template engines
* A template engine is software that combines a template with data to produce an HTML document; template engines are broadly divided into server-side and client-side engines.
* JSP and Freemarker are server-side template engines, while Vue and React are client-side template engines.
* The difference lies in where the code runs: with a server-side template engine, the server builds a string in Java code, converts it to HTML, and delivers it to the browser; any JavaScript code is treated as a plain string along the way.
* JavaScript runs in the browser, and since the place it executes is the browser rather than the server, once it runs there it is beyond the server-side template engine's control.
* An SPA built with Vue or React renders the page in the browser; the code has already left the server, so the server only sends JSON or XML data and the client assembles it.
* Recently, JavaScript frameworks such as React and Vue also support server-side rendering.
### 2. Mustache
* Mustache is the simplest template engine and supports a great many languages.
* Mustache is simpler than other template engines; since logic code cannot be embedded, it cleanly separates the role of the View from the role of the server.
* Mustache has both JavaScript and Java implementations, so a single syntax covers both client and server templates.
* A template engine should stick to rendering the view; if it provides too many features, logic ends up split between the API, the template engine, and JavaScript, making maintenance harder.
|
process
|
building screens with mustache template engines a template engine is software that combines a template with data to produce an html document template engines are broadly divided into server side and client side engines jsp and freemarker are server side template engines while vue and react are client side template engines the difference lies in where the code runs with a server side template engine the server builds a string in java code converts it to html and delivers it to the browser any javascript code is treated as a plain string along the way javascript runs in the browser rather than the server so once it runs there it is beyond the server side template engine s control an spa built with vue or react renders the page in the browser the code has already left the server so the server only sends json or xml data and the client assembles it recently javascript frameworks such as react and vue also support server side rendering mustache mustache is the simplest template engine and supports a great many languages mustache is simpler than other template engines since logic code cannot be embedded it cleanly separates the role of the view from the role of the server mustache has both javascript and java implementations so a single syntax covers both client and server templates a template engine should stick to rendering the view if it provides too many features logic ends up split between the api the template engine and javascript making maintenance harder
| 1
|
75,661
| 15,439,661,814
|
IssuesEvent
|
2021-03-08 01:01:02
|
RG4421/skyux-sdk-builder
|
https://api.github.com/repos/RG4421/skyux-sdk-builder
|
opened
|
CVE-2020-28502 (High) detected in xmlhttprequest-ssl-1.5.5.tgz
|
security vulnerability
|
## CVE-2020-28502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: skyux-sdk-builder/package.json</p>
<p>Path to vulnerable library: skyux-sdk-builder/node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- karma-5.0.9.tgz (Root Library)
- socket.io-2.3.0.tgz
- socket.io-client-2.3.0.tgz
- engine.io-client-3.4.2.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package xmlhttprequest before 1.7.0; all versions of package xmlhttprequest-ssl. Provided requests are sent synchronously (async=False on xhr.open), malicious user input flowing into xhr.send could result in arbitrary code being injected and run.
<p>Publish Date: 2021-03-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28502>CVE-2020-28502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502</a></p>
<p>Release Date: 2021-03-05</p>
<p>Fix Resolution: 1.7.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"xmlhttprequest-ssl","packageVersion":"1.5.5","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"karma:5.0.9;socket.io:2.3.0;socket.io-client:2.3.0;engine.io-client:3.4.2;xmlhttprequest-ssl:1.5.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.7.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-28502","vulnerabilityDetails":"This affects the package xmlhttprequest before 1.7.0; all versions of package xmlhttprequest-ssl. Provided requests are sent synchronously (async\u003dFalse on xhr.open), malicious user input flowing into xhr.send could result in arbitrary code being injected and run.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28502","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-28502 (High) detected in xmlhttprequest-ssl-1.5.5.tgz - ## CVE-2020-28502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: skyux-sdk-builder/package.json</p>
<p>Path to vulnerable library: skyux-sdk-builder/node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- karma-5.0.9.tgz (Root Library)
- socket.io-2.3.0.tgz
- socket.io-client-2.3.0.tgz
- engine.io-client-3.4.2.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package xmlhttprequest before 1.7.0; all versions of package xmlhttprequest-ssl. Provided requests are sent synchronously (async=False on xhr.open), malicious user input flowing into xhr.send could result in arbitrary code being injected and run.
<p>Publish Date: 2021-03-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28502>CVE-2020-28502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502</a></p>
<p>Release Date: 2021-03-05</p>
<p>Fix Resolution: 1.7.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"xmlhttprequest-ssl","packageVersion":"1.5.5","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"karma:5.0.9;socket.io:2.3.0;socket.io-client:2.3.0;engine.io-client:3.4.2;xmlhttprequest-ssl:1.5.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.7.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-28502","vulnerabilityDetails":"This affects the package xmlhttprequest before 1.7.0; all versions of package xmlhttprequest-ssl. Provided requests are sent synchronously (async\u003dFalse on xhr.open), malicious user input flowing into xhr.send could result in arbitrary code being injected and run.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28502","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in xmlhttprequest ssl tgz cve high severity vulnerability vulnerable library xmlhttprequest ssl tgz xmlhttprequest for node library home page a href path to dependency file skyux sdk builder package json path to vulnerable library skyux sdk builder node modules xmlhttprequest ssl package json dependency hierarchy karma tgz root library socket io tgz socket io client tgz engine io client tgz x xmlhttprequest ssl tgz vulnerable library found in base branch master vulnerability details this affects the package xmlhttprequest before all versions of package xmlhttprequest ssl provided requests are sent synchronously async false on xhr open malicious user input flowing into xhr send could result in arbitrary code being injected and run publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree karma socket io socket io client engine io client xmlhttprequest ssl isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails this affects the package xmlhttprequest before all versions of package xmlhttprequest ssl provided requests are sent synchronously async on xhr open malicious user input flowing into xhr send could result in arbitrary code being injected and run vulnerabilityurl
| 0
|
14,092
| 16,982,354,414
|
IssuesEvent
|
2021-06-30 10:25:19
|
decidim/decidim
|
https://api.github.com/repos/decidim/decidim
|
closed
|
Map content block for Process Groups
|
contract: process-groups
|
Ref. PG02-3
**Is your feature request related to a problem? Please describe.**
As a visitor, I want to easily see in which territorial areas are every one of the different processes inside of a Process Group.
**Describe the solution you'd like**
There's already a module made by Platoniq that works with these kinds of maps and lets you draw polygons and other shapes:
https://github.com/Platoniq/decidim-module-navigation_maps
At the technical level, this issue would involve:
- [x] Making the changes in the Content Block as described in PG02
- [x] Making a PR to Platoniq/decidim-module-navigation_maps for supporting this new kind of Content Blocks
- [x] Making a PR to AjuntamentdeBarcelona/decidim-barcelona for adding this module to that installation
**Describe alternatives you've considered**
To re-implement this Content Block in Decidim but that would be too expensive and goes against the main ideas of Decidim architecture (modular)
We could also implement this on HTML but that would be a pain to implement/configure.
**Additional context**
As context, one of the Process Groups that we want to have is the one related with Pla de Barris, which are participatory processes on multiple Barcelona's neighborhoods. In [the Pla de Barris web](https://pladebarris.barcelona) they had this map:

Also, the other PG that we're planning to have, one from [Superilles](https://ajuntament.barcelona.cat/superilles/es/), they also have a map (n this case with all the Districts) for this project:

**Could this issue impact users' private data?**
No
**Acceptance criteria**
- [x] As a visitor I want to see a map where I can locate the different processes
- [x] As an administrator, I want to be able to define which regions are paired with which processes
|
1.0
|
Map content block for Process Groups - Ref. PG02-3
**Is your feature request related to a problem? Please describe.**
As a visitor, I want to easily see in which territorial areas are every one of the different processes inside of a Process Group.
**Describe the solution you'd like**
There's already a module made by Platoniq that works with these kinds of maps and lets you draw polygons and other shapes:
https://github.com/Platoniq/decidim-module-navigation_maps
At the technical level, this issue would involve:
- [x] Making the changes in the Content Block as described in PG02
- [x] Making a PR to Platoniq/decidim-module-navigation_maps for supporting this new kind of Content Blocks
- [x] Making a PR to AjuntamentdeBarcelona/decidim-barcelona for adding this module to that installation
**Describe alternatives you've considered**
To re-implement this Content Block in Decidim but that would be too expensive and goes against the main ideas of Decidim architecture (modular)
We could also implement this on HTML but that would be a pain to implement/configure.
**Additional context**
As context, one of the Process Groups that we want to have is the one related with Pla de Barris, which are participatory processes on multiple Barcelona's neighborhoods. In [the Pla de Barris web](https://pladebarris.barcelona) they had this map:

Also, the other PG that we're planning to have, one from [Superilles](https://ajuntament.barcelona.cat/superilles/es/), they also have a map (n this case with all the Districts) for this project:

**Could this issue impact users' private data?**
No
**Acceptance criteria**
- [x] As a visitor I want to see a map where I can locate the different processes
- [x] As an administrator, I want to be able to define which regions are paired with which processes
|
process
|
map content block for process groups ref is your feature request related to a problem please describe as a visitor i want to easily see in which territorial areas are every one of the different processes inside of a process group describe the solution you d like there s already a module made by platoniq that works with this kind of maps and lets you do polygons and other shapes at the technical level this issue would involve making the changes in the content block as described in making a pr to platoniq decidim module navigation maps for supporting this new kind of content blocks making a pr to ajuntamentdebarcelona decidim barcelona for adding this module to that installation describe alternatives you ve considered to re implement this content block in decidim but that would be too expensive and goes against the main ideas of decidim architecture modular we could also implement this on html but that would be a pain to implement configure additional context as context one of the process groups that we want to have is the one related with pla de barris which are participatory processes on multiple barcelona s neighborhoods in they had this map also the other pg that we re planning to have one from they also have a map n this case with all the districts for this project does this issue could impact on users private data no acceptance criteria as a visitor i want to see a map where i can locate the different processes as an administrator i want to be able to define which regions you partner with which processes
| 1
|
15,143
| 18,895,722,357
|
IssuesEvent
|
2021-11-15 17:39:43
|
prisma/language-tools
|
https://api.github.com/repos/prisma/language-tools
|
opened
|
Provider-aware auto completion is not applied to all auto completions
|
bug/1-repro-available kind/bug process/candidate topic: autocompletion team/migrations
|
https://github.com/prisma/language-tools/pull/937 made `Restrict` disappear from some auto completions, but not all.
More details: https://prisma-company.slack.com/archives/CKCQQGXJM/p1636997912022500
|
1.0
|
Provider-aware auto completion is not applied to all auto completions - https://github.com/prisma/language-tools/pull/937 made `Restrict` disappear from some auto completions, but not all.
More details: https://prisma-company.slack.com/archives/CKCQQGXJM/p1636997912022500
|
process
|
provider aware auto completion is no applied to all auto completions made restrict disappear from some auto completions but not all more details
| 1
|
4,376
| 7,260,516,281
|
IssuesEvent
|
2018-02-18 10:54:55
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE][processing] Add choice of simplification method to simplify
|
Automatic new feature Processing
|
Original commit: https://github.com/qgis/QGIS/commit/56b77db88c4b617f254a7fa1309f86ee7d3cedfa by nyalldawson
This change allows users to choose which method to use when running
the simplify geometries algorithm, with choices of the existing
distance based (Douglas Peucker) algorithm, area based (Visvalingam)
algorithm and snap-to-grid.
Visvalingam in particular usually results in more cartographically
pleasing simplification over the standard distance based methods.
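For illustration, a compact TypeScript sketch of the distance-based (Douglas-Peucker) method mentioned above; this is not the QGIS implementation, just the core recursion:
```ts
type Pt = { x: number; y: number };

// Perpendicular distance from p to the line through a and b.
function perpDist(p: Pt, a: Pt, b: Pt): number {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const len = Math.hypot(dx, dy);
  if (len === 0) return Math.hypot(p.x - a.x, p.y - a.y);
  return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

// Classic Douglas-Peucker: keep the farthest point if it exceeds the
// tolerance, then recurse on both halves of the polyline.
function simplifyDP(points: Pt[], tolerance: number): Pt[] {
  if (points.length < 3) return points;
  const last = points.length - 1;
  let maxDist = 0;
  let index = 0;
  for (let i = 1; i < last; i++) {
    const d = perpDist(points[i], points[0], points[last]);
    if (d > maxDist) {
      maxDist = d;
      index = i;
    }
  }
  if (maxDist <= tolerance) return [points[0], points[last]];
  const left = simplifyDP(points.slice(0, index + 1), tolerance);
  const right = simplifyDP(points.slice(index), tolerance);
  return left.slice(0, -1).concat(right);
}
```
The Visvalingam variant instead ranks interior points by the area of the triangle they form with their neighbors and removes the smallest first, which is why it tends to look better cartographically.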
|
1.0
|
[FEATURE][processing] Add choice of simplification method to simplify - Original commit: https://github.com/qgis/QGIS/commit/56b77db88c4b617f254a7fa1309f86ee7d3cedfa by nyalldawson
This change allows users to choose which method to use when running
the simplify geometries algorithm, with choices of the existing
distance based (Douglas Peucker) algorithm, area based (Visvalingam)
algorithm and snap-to-grid.
Visvalingam in particular usually results in more cartographically
pleasing simplification over the standard distance based methods.
|
process
|
add choice of simplification method to simplify original commit by nyalldawson this change allows users to choose which method to use when running the simplify geometries algorithm with choices of the existing distance based douglas peucker algorithm area based visvalingam algorithm and snap to grid visvaligam in particular usually results in more cartographically pleasing simplification over the standard distance based methods
| 1
|
15,031
| 18,755,072,557
|
IssuesEvent
|
2021-11-05 09:41:16
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Crash on multiline defaults introspection on MSSQL
|
bug/1-repro-available kind/bug process/candidate topic: introspection topic: error reporting topic: sql server team/migrations
|
<!-- If required, please update the title to be clear and descriptive -->
Error: [libs/sql-schema-describer/src/mssql.rs:315:30] called `Result::unwrap()` on an `Err` value: "Couldn't parse default value: `\r\n/****** Object: Default [dbo].[Default_0] Script Date: 04/02/2009 17:47:36 ******/\r\ncreate default [dbo].[Default_P] as 'P'\r\n\r\n\r\n`"
Command: `prisma db pull`
Version: `3.3.0`
Binary Version: `33838b0f78f1fe9052cf9a00e9761c9dc097a63c`
Report: https://prisma-errors.netlify.app/report/13560
OS: `x64 linux 4.19.128-microsoft-standard`
JS Stacktrace:
```
Error: [libs/sql-schema-describer/src/mssql.rs:315:30] called `Result::unwrap()` on an `Err` value: "Couldn't parse default value: `\r\n/****** Object: Default [dbo].[Default_0] Script Date: 04/02/2009 17:47:36 ******/\r\ncreate default [dbo].[Default_P] as 'P'\r\n\r\n\r\n`"
at ChildProcess.<anonymous> (/server/node_modules/prisma/build/index.js:45792:30)
at ChildProcess.emit (events.js:400:28)
at Process.ChildProcess._handle.onexit (internal/child_process.js:282:12)
```
Rust Stacktrace:
```
0: user_facing_errors::Error::new_in_panic_hook
1: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
2: std::panicking::rust_panic_with_hook
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panicking.rs:626:17
3: std::panicking::begin_panic_handler::{{closure}}
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panicking.rs:519:13
4: std::sys_common::backtrace::__rust_end_short_backtrace
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/sys_common/backtrace.rs:141:18
5: rust_begin_unwind
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panicking.rs:515:5
6: core::panicking::panic_fmt
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/core/src/panicking.rs:92:14
7: core::result::unwrap_failed
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/core/src/result.rs:1599:5
8: <sql_schema_describer::mssql::SqlSchemaDescriber as sql_schema_describer::SqlSchemaDescriberBackend>::describe::{{closure}}::{{closure}}
9: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
10: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
14: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
15: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
16: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
17: introspection_engine::main::{{closure}}
18: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
19: introspection_engine::main
20: std::sys_common::backtrace::__rust_begin_short_backtrace
21: std::rt::lang_start::{{closure}}
22: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/core/src/ops/function.rs:259:13
std::panicking::try::do_call
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panicking.rs:401:40
std::panicking::try
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panicking.rs:365:19
std::panic::catch_unwind
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panic.rs:434:14
std::rt::lang_start_internal::{{closure}}
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/rt.rs:45:48
std::panicking::try::do_call
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panicking.rs:401:40
std::panicking::try
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panicking.rs:365:19
std::panic::catch_unwind
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panic.rs:434:14
std::rt::lang_start_internal
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/rt.rs:45:20
23: std::rt::lang_start
24: __libc_start_main
25: <unknown>
```
|
1.0
|
Crash on multiline defaults introspection on MSSQL - <!-- If required, please update the title to be clear and descriptive -->
Error: [libs/sql-schema-describer/src/mssql.rs:315:30] called `Result::unwrap()` on an `Err` value: "Couldn't parse default value: `\r\n/****** Object: Default [dbo].[Default_0] Script Date: 04/02/2009 17:47:36 ******/\r\ncreate default [dbo].[Default_P] as 'P'\r\n\r\n\r\n`"
Command: `prisma db pull`
Version: `3.3.0`
Binary Version: `33838b0f78f1fe9052cf9a00e9761c9dc097a63c`
Report: https://prisma-errors.netlify.app/report/13560
OS: `x64 linux 4.19.128-microsoft-standard`
JS Stacktrace:
```
Error: [libs/sql-schema-describer/src/mssql.rs:315:30] called `Result::unwrap()` on an `Err` value: "Couldn't parse default value: `\r\n/****** Object: Default [dbo].[Default_0] Script Date: 04/02/2009 17:47:36 ******/\r\ncreate default [dbo].[Default_P] as 'P'\r\n\r\n\r\n`"
at ChildProcess.<anonymous> (/server/node_modules/prisma/build/index.js:45792:30)
at ChildProcess.emit (events.js:400:28)
at Process.ChildProcess._handle.onexit (internal/child_process.js:282:12)
```
Rust Stacktrace:
```
0: user_facing_errors::Error::new_in_panic_hook
1: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
2: std::panicking::rust_panic_with_hook
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panicking.rs:626:17
3: std::panicking::begin_panic_handler::{{closure}}
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panicking.rs:519:13
4: std::sys_common::backtrace::__rust_end_short_backtrace
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/sys_common/backtrace.rs:141:18
5: rust_begin_unwind
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panicking.rs:515:5
6: core::panicking::panic_fmt
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/core/src/panicking.rs:92:14
7: core::result::unwrap_failed
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/core/src/result.rs:1599:5
8: <sql_schema_describer::mssql::SqlSchemaDescriber as sql_schema_describer::SqlSchemaDescriberBackend>::describe::{{closure}}::{{closure}}
9: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
10: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
14: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
15: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
16: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
17: introspection_engine::main::{{closure}}
18: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
19: introspection_engine::main
20: std::sys_common::backtrace::__rust_begin_short_backtrace
21: std::rt::lang_start::{{closure}}
22: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/core/src/ops/function.rs:259:13
std::panicking::try::do_call
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panicking.rs:401:40
std::panicking::try
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panicking.rs:365:19
std::panic::catch_unwind
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panic.rs:434:14
std::rt::lang_start_internal::{{closure}}
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/rt.rs:45:48
std::panicking::try::do_call
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panicking.rs:401:40
std::panicking::try
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panicking.rs:365:19
std::panic::catch_unwind
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panic.rs:434:14
std::rt::lang_start_internal
at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/rt.rs:45:20
23: std::rt::lang_start
24: __libc_start_main
25: <unknown>
```
|
process
|
crash on multiline defaults introspection on mssql error called result unwrap on an err value couldn t parse default value r n object default script date r ncreate default as p r n r n r n command prisma db pull version binary version report os linux microsoft standard js stacktrace error called result unwrap on an err value couldn t parse default value r n object default script date r ncreate default as p r n r n r n at childprocess server node modules prisma build index js at childprocess emit events js at process childprocess handle onexit internal child process js rust stacktrace user facing errors error new in panic hook user facing errors panic hook set panic hook closure std panicking rust panic with hook at rustc library std src panicking rs std panicking begin panic handler closure at rustc library std src panicking rs std sys common backtrace rust end short backtrace at rustc library std src sys common backtrace rs rust begin unwind at rustc library std src panicking rs core panicking panic fmt at rustc library core src panicking rs core result unwrap failed at rustc library core src result rs describe closure closure as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll introspection engine main closure as core future future future poll introspection engine main std sys common backtrace rust begin short backtrace std rt lang start closure core ops function impls for f call once at rustc library core src ops function rs std panicking try do call at rustc library std src panicking rs std panicking try at rustc library std src panicking rs std panic catch unwind at rustc library std src panic rs std rt lang start internal closure at rustc library std src rt rs std panicking try do call at rustc library std src panicking rs std panicking try at rustc library std src panicking rs std panic catch unwind at rustc library std src panic rs std rt lang start internal at rustc library std src rt rs std rt lang start libc start main
| 1
|
9,909
| 12,949,934,259
|
IssuesEvent
|
2020-07-19 11:11:53
|
brucemiller/LaTeXML
|
https://api.github.com/repos/brucemiller/LaTeXML
|
closed
|
XHTML output fails to load due to self-closing script tags
|
bug postprocessing
|
If you specify output format XHTML via the `--format` parameter, *and* specify the javascript source via `--javascript`, you end up with an XHTML file that fails to load due to a self-closing script tag.
The following example commands use the sample file attached, using LaTeXML version 0.8.4 installed on OSX via homebrew.
This fails:
`latexmlc math.tex --dest=math.html --format=xhtml --javascript=LaTeXML-maybeMathJax.js`
The reason is this script tag in the output:
`<script src="LaTeXML-maybeMathJax.js" type="text/javascript"/>`
(see [https://stackoverflow.com/a/69984](https://stackoverflow.com/a/69984) for discussion of non-empty self-closing tags in xhtml)
This, however, succeeds:
`latexmlc math.tex --dest=math.xhtml --javascript=LaTeXML-maybeMathJax.js`
The script tag is closed correctly in the output:
`<script src="LaTeXML-maybeMathJax.js" type="text/javascript"></script>`
The same behaviour was observed when calling `latexml` then `latexmlpost`.
[math.tex.zip](https://github.com/brucemiller/LaTeXML/files/4934195/math.tex.zip)
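A small sketch of the serializer rule involved, assuming the usual HTML void-element list (illustrative only, not LaTeXML's actual code):
```ts
// Only HTML "void" elements may be written self-closing; everything else
// (including <script>) needs an explicit end tag to parse correctly.
const VOID_ELEMENTS = new Set([
  "area", "base", "br", "col", "embed", "hr",
  "img", "input", "link", "meta", "source", "track", "wbr",
]);

function serializeEmptyElement(tag: string, attrs: string): string {
  return VOID_ELEMENTS.has(tag)
    ? `<${tag}${attrs}/>`
    : `<${tag}${attrs}></${tag}>`;
}

// serializeEmptyElement("script", ' src="a.js"')
//   -> '<script src="a.js"></script>'
```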
|
1.0
|
XHTML output fails to load due to self-closing script tags - If you specify output format XHTML via the `--format` parameter, *and* specify the javascript source via `--javascript`, you end up with an XHTML file that fails to load due to a self-closing script tag.
The following example commands use the sample file attached, using LaTeXML version 0.8.4 installed on OSX via homebrew.
This fails:
`latexmlc math.tex --dest=math.html --format=xhtml --javascript=LaTeXML-maybeMathJax.js`
The reason is this script tag in the output:
`<script src="LaTeXML-maybeMathJax.js" type="text/javascript"/>`
(see [https://stackoverflow.com/a/69984](https://stackoverflow.com/a/69984) for discussion of non-empty self-closing tags in xhtml)
This, however, succeeds:
`latexmlc math.tex --dest=math.xhtml --javascript=LaTeXML-maybeMathJax.js`
The script tag is closed correctly in the output:
`<script src="LaTeXML-maybeMathJax.js" type="text/javascript"></script>`
The same behaviour was observed when calling `latexml` then `latexmlpost`.
[math.tex.zip](https://github.com/brucemiller/LaTeXML/files/4934195/math.tex.zip)
|
process
|
xhtml output fails to load due to self closing script tags if you specify output format xhtml via the format parameter and specify the javascript source via javascript you end up with an xhtml file that fails to load due to a self closing script tag the following example commands use the sample file attached using latexml version installed on osx via homebrew this fails latexmlc math tex dest math html format xhtml javascript latexml maybemathjax js the reason is this script tag in the output see for discussion of non empty self closing tags in xhtml this however succeeds latexmlc math tex dest math xhtml javascript latexml maybemathjax js the script tag is closed correctly in the output the same behaviour was observed when calling latexml then latexmlpost
| 1
|
169,729
| 6,415,676,502
|
IssuesEvent
|
2017-08-08 13:20:28
|
FStarLang/FStar
|
https://api.github.com/repos/FStarLang/FStar
|
opened
|
--using_facts_from is not properly reset
|
bug Priority 1 usability
|
While tracking a recent regression in hacl-star, I found that a `--using_facts_from` option that was set in a dependency wasn't properly reset and caused some subsequent queries to fail:
```
module M
type t = | T: n:int -> t
#reset-options "--using_facts_from Prims" // Works if this line is commented out
#reset-options "--z3refresh"
let x : v:t{T?.n v == 0} = T 0
```
|
1.0
|
--using_facts_from is not properly reset - While tracking a recent regression in hacl-star, I found that a `--using_facts_from` option that was set in a dependency wasn't properly reset and caused some subsequent queries to fail:
```
module M
type t = | T: n:int -> t
#reset-options "--using_facts_from Prims" // Works if this line is commented out
#reset-options "--z3refresh"
let x : v:t{T?.n v == 0} = T 0
```
|
non_process
|
using facts from is not properly reset while tracking a recent regression in hacl star i found that a using facts from option that was set in a dependency wasn t properly reset and caused some subsequent queries to fail module m type t t n int t reset options using facts from prims works if this line is commented out reset options let x v t t n v t
| 0
|
15,040
| 18,761,896,636
|
IssuesEvent
|
2021-11-05 17:28:10
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
Fix some labels (Request in QGIS)
|
Processing Alg Mesh 3.22
|
### Request for documentation
From pull request QGIS/qgis#45549
Author: @DelazJ
QGIS version: 3.22
**Fix some labels**
### PR Description:
in mesh calculator dialog and join by location summary algorithm
### Commits tagged with [need-docs] or [FEATURE]
"\n\n\n\n Correct name for mesh calculator output" "\n\n\n\n Use same label for input layer\n\nas in the \"join attributes by location\" alg"
|
1.0
|
Fix some labels (Request in QGIS) - ### Request for documentation
From pull request QGIS/qgis#45549
Author: @DelazJ
QGIS version: 3.22
**Fix some labels**
### PR Description:
in mesh calculator dialog and join by location summary algorithm
### Commits tagged with [need-docs] or [FEATURE]
"\n\n\n\n Correct name for mesh calculator output" "\n\n\n\n Use same label for input layer\n\nas in the \"join attributes by location\" alg"
|
process
|
fix some labels request in qgis request for documentation from pull request qgis qgis author delazj qgis version fix some labels pr description in mesh calculator dialog and join by location summary algorithm commits tagged with or n n n n correct name for mesh calculator output n n n n use same label for input layer n nas in the join attributes by location alg
| 1
|
8,748
| 11,872,791,948
|
IssuesEvent
|
2020-03-26 16:19:42
|
jyn514/rcc
|
https://api.github.com/repos/jyn514/rcc
|
closed
|
Balanced parentheses in macro arguments are not handled
|
bug preprocessor
|
### Expected behavior
Nested balanced parentheses are handled as part of a macro argument
```console
> cc -E - <<END
#define foo(x, y) { x, y }
foo(5 (6), 7)
END
# 1 "<stdin>"
# 1 "<built-in>"
# 1 "<command-line>"
# 31 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 32 "<command-line>" 2
# 1 "<stdin>"
{ 5 (6), 7 }
```
### Code
```c
#define foo(x, y) { x, y }
foo(5 (6), 7)
```
```console
<stdin>:2:4 error: invalid macro: wrong number of arguments: expected 1, got 2
foo(5 (6), 7)
^^^^^^
1 warning and 1 error generated
```
### Standardese
> The replaced sequence of preprocessing tokens is terminated by the matching ) preprocessing token, skipping intervening matched pairs of left and right parenthesis preprocessing tokens.
> — [§6.10.3 ¶10](http://port70.net/~nsz/c/c11/n1570.html#6.10.3p10)
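A minimal sketch of that rule (a toy scanner, not rcc's actual code): commas separate macro arguments only at parenthesis depth zero.
```ts
// Toy version of the scan: commas split arguments only at paren depth 0,
// so nested pairs like "(6)" stay inside a single argument.
function splitMacroArgs(argText: string): string[] {
  const args: string[] = [];
  let depth = 0;
  let current = "";
  for (const ch of argText) {
    if (ch === "(") depth++;
    else if (ch === ")") depth--;
    if (ch === "," && depth === 0) {
      args.push(current.trim());
      current = "";
    } else {
      current += ch;
    }
  }
  args.push(current.trim());
  return args;
}

// splitMacroArgs("5 (6), 7") -> ["5 (6)", "7"]  (two arguments, as expected)
```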
|
1.0
|
Balanced parentheses in macro arguments are not handled - ### Expected behavior
Nested balanced parentheses are handled as part of a macro argument
```console
> cc -E - <<END
#define foo(x, y) { x, y }
foo(5 (6), 7)
END
# 1 "<stdin>"
# 1 "<built-in>"
# 1 "<command-line>"
# 31 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 32 "<command-line>" 2
# 1 "<stdin>"
{ 5 (6), 7 }
```
### Code
```c
#define foo(x, y) { x, y }
foo(5 (6), 7)
```
```console
<stdin>:2:4 error: invalid macro: wrong number of arguments: expected 1, got 2
foo(5 (6), 7)
^^^^^^
1 warning and 1 error generated
```
### Standardese
> The replaced sequence of preprocessing tokens is terminated by the matching ) preprocessing token, skipping intervening matched pairs of left and right parenthesis preprocessing tokens.
> — [§6.10.3 ¶10](http://port70.net/~nsz/c/c11/n1570.html#6.10.3p10)
|
process
|
balanced parentheses in macro arguments are not handled expected behavior nested balanced parentheses are handled as part of a macro argument console cc e end define foo x y x y foo end usr include stdc predef h code c define foo x y x y foo console error invalid macro wrong number of arguments expected got foo warning and error generated standardese the replaced sequence of preprocessing tokens is terminated by the matching preprocessing token skipping intervening matched pairs of left and right parenthesis preprocessing tokens mdash
| 1
|
22,137
| 30,683,338,969
|
IssuesEvent
|
2023-07-26 10:36:21
|
ppy/osu-web
|
https://api.github.com/repos/ppy/osu-web
|
closed
|
Adjusting the qualified queue
|
area:beatmap-processing
|
This was also discussed during the community meeting, so if there's something from that not mentioned here please add it as a reply.
Currently the qualified queue for the osu! game mode is making maps sit in qualified for around 2 weeks.
**Problem**
There are multiple problems with this:
- Maps at over 6 days can only hold their place in the queue for up to 6 days. This means maps beyond this point that are disqualified could end up in qualified for 3 or 4 weeks instead of just 1 in total.
- - This discourages people from making minor changes when maps pass the 6 day mark. Most maps would be disqualified before then anyways, as the norm is to check them when they're first qualified, or within the first week just like maps qualified when the queue isn't overflowing.
- The queue is designed to help balance out incoming ranked maps. However, when it gets this bloated you would have to have a long dead period for any balancing to occur, which leads to the queue being more likely to grow than shrink when it hits this point.
- - It was at 9/10 days late December and earlier this month, but is now at 13/14 days for instance.
**Solutions**
- Raise the cap for osu!. Other modes are fine at 10 for now and rarely have that many qualified maps at a time.
- Implement a system that is naturally more flexible, instead of having to raise the hard cap each year as the community grows. Make it automatic so we can stop repeating this conversation every time
- - "like maybe increase the ranked maps per day by 1 when it reaches like 50-60, by 2 when it reaches 70, something like that" - Dada's idea
- - - Could adjust the numbers a bit, but I generally agree this is the best option. This would allow the ranked flow to stay mostly balanced without having a hard cap that leads to the Qualified queue increasing excessively in size. (A rough sketch of this stepped cap follows below.)
- Could additionally let FA maps bypass the daily ranked cap - mentioned as an idea in the meeting
- - From what I can see, this would be good for the FA maps but rarely anyone else. This would also be weird for maps getting qualified prior to FA release, such as showcase maps. Would they get the full queue (2+ weeks), potentially delaying players getting to experience the new ranked map, or would they bypass the queue, spoiling the fact that it's FA?
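A rough TypeScript sketch of the stepped cap floated above; the thresholds are placeholders taken from the quote, not agreed values:
```ts
// Hypothetical: the daily ranked cap grows with the qualified queue size.
function dailyRankedCap(queueSize: number, baseCap: number): number {
  if (queueSize >= 70) return baseCap + 2;
  if (queueSize >= 50) return baseCap + 1;
  return baseCap;
}

// dailyRankedCap(55, 8) -> 9; dailyRankedCap(75, 8) -> 10
```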
_________
Thank you and I hope this issue can be addressed soon. If anyone has other ideas as well please do mention them in a comment.
|
1.0
|
Adjusting the qualified queue - This was also discussed during the community meeting, so if there's something from that not mentioned here please add it as a reply.
Currently the qualified queue for the osu! game mode is making maps sit in qualified for around 2 weeks.
**Problem**
There are multiple problems with this:
- Maps at over 6 days can only hold their place in the queue for up to 6 days. This means maps beyond this point that are disqualified could end up in qualified for 3 or 4 weeks instead of just 1 in total.
- - This discourages people from making minor changes when maps pass the 6 day mark. Most maps would be disqualified before then anyways, as the norm is to check them when they're first qualified, or within the first week just like maps qualified when the queue isn't overflowing.
- The queue is designed to help balance out incoming ranked maps. However, when it gets this bloated you would have to have a long dead period for any balancing to occur, which leads to the queue being more likely to grow than shrink when it hits this point.
- - It was at 9/10 days late December and earlier this month, but is now at 13/14 days for instance.
**Solutions**
- Raise the cap for osu!. Other modes are fine at 10 for now and rarely have that many qualified maps at a time.
- Implement a system that is naturally more flexible, instead of having to raise the hard cap each year as the community grows. Make it automatic so we can stop repeating this conversation every time
- - "like maybe increase the ranked maps per day by 1 when it reaches like 50-60, by 2 when it reaches 70, something like that" - Dada's idea
- - - Could adjust the numbers a bit, but I generally agree this is the best option. This would allow the ranked flow to stay mostly balanced without having a hard cap that leads to the Qualified queue increasing excessively in size.
- Could additionally let FA maps bypass the daily ranked cap - mentioned as an idea in the meeting
- - From what I can see, this would be good for the FA maps but rarely anyone else. This would also be weird for maps getting qualified prior to FA release, such as showcase maps. Would they get the full queue (2+ weeks), potentially delaying players getting to experience the new ranked map, or would they bypass the queue, spoiling the fact that it's FA?
_________
Thank you and I hope this issue can be addressed soon. If anyone has other ideas as well please do mention them in a comment.
|
process
|
adjusting the qualified queue this was also discussed during the community meeting so if there s something from that not mentioned here please add it as a reply currently the qualified queue for the osu game mode is making maps sit in qualified for around weeks problem there is multiple problems with this maps at over days can only hold their place in the queue for up to days this means maps beyond this point that are disqualified could end up in qualified for or weeks instead of just in total this discourages people from making minor changes when maps pass the day mark most maps would be disqualified before then anyways as the norm is to check them when they re first qualified or within the first week just like maps qualified when the queue isn t overflowing the queue is designed to help balance out incoming ranked maps however when it gets this bloated you would have to have a long dead period for any balancing to occur which leads to the queue being more likely to grow than shrink when it hits this point it was at days late december and earlier this month but is now at days for instance solutions raise the cap for osu other modes are fine at for now and rarely have that many qualified maps at a time implement a system that is naturally more flexible instead of having to raise the hard cap each year as the community grows make it automatic so we can stop repeating this conversation every time like maybe increase the ranked maps per day by when it reaches like by when it reaches something like that dada s idea could adjust the numbers a bit but i generally agree this is the best option this would allow the ranked flow to stay mostly balanced without having a hard cap that leads to the qualified queue increasing excessively in size could additionally let fa maps bypass the daily ranked cap mentioned as an idea in the meeting from what i can see this would be good for the fa maps but rarely anyone else this would also be weird for maps getting qualified prior to fa release such as showcase maps would they get the full queue weeks potentially delaying players getting to experience the new ranked map or would they bypass the queue spoiling the fact that it s fa thank you and i hope this issue can be addressed soon if anyone has other ideas as well please do mention them in a comment
| 1
|
68,021
| 21,415,942,214
|
IssuesEvent
|
2022-04-22 10:50:22
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Margin between deleted messages in Bubble Layout
|
T-Defect
|
### Steps to reproduce
1. Go to app.element.io
2. Be sure that the chats are in Bubble Layout
3. in a 1-1 room, delete a few messages
4. This happens
Message bubbles are perfectly sandwiched on one another which should be avoided.

### Outcome
#### What did you expect?
Sufficient space between message bubbles
#### What happened instead?

### Operating system
Windows 10
### Browser information
Edge
### URL for webapp
app.element.io
### Application version
Element version: 1.10.10 Olm version: 3.2.8
### Homeserver
Matrix
### Will you send logs?
No
|
1.0
|
Margin between deleted messages in Bubble Layout - ### Steps to reproduce
1. Go to app.element.io
2. Be sure that the chats are in Bubble Layout
3. in a 1-1 room, delete a few messages
4. This happens
Message bubbles are perfectly sandwiched on one another which should be avoided.

### Outcome
#### What did you expect?
Sufficient space between message bubbles
#### What happened instead?

### Operating system
Windows 10
### Browser information
Edge
### URL for webapp
app.element.io
### Application version
Element version: 1.10.10 Olm version: 3.2.8
### Homeserver
Matrix
### Will you send logs?
No
|
non_process
|
margin between deleted messages in bubble layout steps to reproduce go to app element io be sure that the chats are in bubble layout in a room delete a few messages this happens message bubbles are perfectly sandwiched on one another which should be avoided outcome what did you expect sufficient space between message bubbles what happened instead operating system windows browser information edge url for webapp app element io application version element version olm version homeserver matrix will you send logs no
| 0
|
21,530
| 29,819,840,744
|
IssuesEvent
|
2023-06-17 00:17:34
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[MLv2] Implement/expose `suggestedJoinCondition` function
|
.Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
When building a join against a table with a known FK->PK relationship (the source table has an FK that points to the PK of the table to be joined), in current MLv1 behavior the condition is automatically populated with those columns. Examples:
###### Reviews -> Products
<img width="923" alt="image" src="https://github.com/metabase/metabase/assets/1455846/8bd6d324-4313-420d-b6b1-e86d74cf936a">
Note that this only seems to work when the Source Table (or a previous stage) has an FK to the PK of the table. No such default condition is applied when flipping the tables around, for example:
###### Products -> Reviews
<img width="953" alt="image" src="https://github.com/metabase/metabase/assets/1455846/264cafad-74a6-4977-804a-2336a76056e5">
Or if a previously-joined Table has the FK, rather than the source Table:
###### Products -> Orders -> People
<img width="929" alt="image" src="https://github.com/metabase/metabase/assets/1455846/ff6016dc-7c5b-4c56-84a2-15bf8f68f0fb">
### Signature
I think the appropriate signature for this function should be something like
```ts
export function suggestedJoinCondition(query: Query, stageNumber: number, tableToJoin: TableMetadata): Clause | null { ... }
```
and should return an MBQL clause representing the suggested join condition if one exists, otherwise `nil`/`null`/etc. if no such suggested condition exists.
Note that while we could make this a little smarter and handle PK -> FK as well, I think for now we should just try to replicate the old behavior.
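As a rough sketch of the lookup this function might perform, the following TypeScript mirrors the old FK -> PK behavior. The metadata shapes (`FieldMetadata`, `fkTargetFieldId`, `pkFieldId`) are simplified assumptions for illustration, not the actual MLv2 types, and the `query`/`stageNumber` plumbing is elided:
```ts
// Simplified stand-ins for the real metadata types (assumptions).
interface FieldMetadata {
  id: number;
  fkTargetFieldId?: number; // set when this field is an FK to another field
}
interface TableMetadata {
  fields: FieldMetadata[];
  pkFieldId: number;
}

// Find an FK in the source table pointing at the PK of the table to join,
// replicating the old MLv1 behavior (FK -> PK only, not PK -> FK).
function suggestedJoinFieldPair(
  sourceTable: TableMetadata,
  tableToJoin: TableMetadata,
): [number, number] | null {
  const fk = sourceTable.fields.find(
    (field) => field.fkTargetFieldId === tableToJoin.pkFieldId,
  );
  // Returns the pair of field ids to equate, or null when no FK -> PK exists;
  // the real function would wrap this pair in an MBQL `=` clause.
  return fk ? [fk.id, tableToJoin.pkFieldId] : null;
}
```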
|
1.0
|
[MLv2] Implement/expose `suggestedJoinCondition` function - When building a join against a table with a known FK->PK relationship (the source table has an FK that points to the PK of the table to be joined), in current MLv1 behavior the condition is automatically populated with those columns. Examples:
###### Reviews -> Products
<img width="923" alt="image" src="https://github.com/metabase/metabase/assets/1455846/8bd6d324-4313-420d-b6b1-e86d74cf936a">
Note that this only seems to work when the Source Table (or a previous stage) has an FK to the PK of the table. No such default condition is applied when flipping the tables around, for example:
###### Products -> Reviews
<img width="953" alt="image" src="https://github.com/metabase/metabase/assets/1455846/264cafad-74a6-4977-804a-2336a76056e5">
Or if a previously-joined Table has the FK, rather than the source Table:
###### Products -> Orders -> People
<img width="929" alt="image" src="https://github.com/metabase/metabase/assets/1455846/ff6016dc-7c5b-4c56-84a2-15bf8f68f0fb">
### Signature
I think the appropriate signature for this function should be something like
```ts
export function suggestedJoinCondition(query: Query, stageNumber: number, tableToJoin: TableMetadata): Clause | null { ... }
```
and should return an MBQL clause representing the suggested join condition if one exists, otherwise `nil`/`null`/etc. if no such suggested condition exists.
Note that while we could make this a little smarter and handle PK -> FK as well, I think for now we should just try to replicate the old behavior.
|
process
|
implement expose suggestedjoincondition function when building a join against a table with a known fk pk relationship the source table has an fk that points to the pk of the table to be joined in current behavior the condition is automatically populated with those columns examples reviews products img width alt image src note that this only seems to work with the source table or previous stage has an fk to the pk of the table no such default condition is applied when flipping the tables around for example products reviews img width alt image src or if a previously joined table has the fk rather than the source table products orders people img width alt image src signature i think the appropriate signature for this function should be something like ts export function suggestedjoincondition query query stagenumber integer tabletojoin tablemetadata clause and should return an mbql clause representing the suggested join condition if one exists otherwise nil null etc if no such suggested condition exists note that while we could make this a little smarter and handle pk fk as well i think for now we should just try to replicate the old behavior
| 1
|
186,549
| 21,944,190,909
|
IssuesEvent
|
2022-05-23 21:40:45
|
CMSgov/cms-carts-seds
|
https://api.github.com/repos/CMSgov/cms-carts-seds
|
closed
|
SHF - cms-carts-seds - main - MEDIUM - Instance i-0cd9e25dff7e97a2f is vulnerable to CVE-2020-0404
|
security-hub main
|
**************************************************************
__This issue was generated from Security Hub data and is managed through automation.__
Please do not edit the title or body of this issue, or remove the security-hub tag. All other edits/comments are welcome.
Finding Id: inspector/us-east-1/519095364708/a85cd1107853190f6e27410b806effd8d93770a1
**************************************************************
## Type of Issue:
- [x] Security Hub Finding
## Title:
Instance i-0cd9e25dff7e97a2f is vulnerable to CVE-2020-0404
## Id:
inspector/us-east-1/519095364708/a85cd1107853190f6e27410b806effd8d93770a1
(You may use this ID to lookup this finding's details in Security Hub)
## Description
In uvc_scan_chain_forward of uvc_driver.c, there is a possible linked list corruption due to an unusual root cause. This could lead to local escalation of privilege in the kernel with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android kernelAndroid ID: A-111893654References: Upstream kernel
## Remediation
undefined
## AC:
- The security hub finding is resolved or suppressed, indicated by a Workflow Status of Resolved or Suppressed.
|
True
|
SHF - cms-carts-seds - main - MEDIUM - Instance i-0cd9e25dff7e97a2f is vulnerable to CVE-2020-0404 - **************************************************************
__This issue was generated from Security Hub data and is managed through automation.__
Please do not edit the title or body of this issue, or remove the security-hub tag. All other edits/comments are welcome.
Finding Id: inspector/us-east-1/519095364708/a85cd1107853190f6e27410b806effd8d93770a1
**************************************************************
## Type of Issue:
- [x] Security Hub Finding
## Title:
Instance i-0cd9e25dff7e97a2f is vulnerable to CVE-2020-0404
## Id:
inspector/us-east-1/519095364708/a85cd1107853190f6e27410b806effd8d93770a1
(You may use this ID to lookup this finding's details in Security Hub)
## Description
In uvc_scan_chain_forward of uvc_driver.c, there is a possible linked list corruption due to an unusual root cause. This could lead to local escalation of privilege in the kernel with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android kernelAndroid ID: A-111893654References: Upstream kernel
## Remediation
undefined
## AC:
- The security hub finding is resolved or suppressed, indicated by a Workflow Status of Resolved or Suppressed.
|
non_process
|
shf cms carts seds main medium instance i is vulnerable to cve this issue was generated from security hub data and is managed through automation please do not edit the title or body of this issue or remove the security hub tag all other edits comments are welcome finding id inspector us east type of issue security hub finding title instance i is vulnerable to cve id inspector us east you may use this id to lookup this finding s details in security hub description in uvc scan chain forward of uvc driver c there is a possible linked list corruption due to an unusual root cause this could lead to local escalation of privilege in the kernel with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android kernelandroid id a upstream kernel remediation undefined ac the security hub finding is resolved or suppressed indicated by a workflow status of resolved or suppressed
| 0
|
244,240
| 18,751,679,130
|
IssuesEvent
|
2021-11-05 03:21:59
|
statnet/ergm
|
https://api.github.com/repos/statnet/ergm
|
closed
|
Apply the @templateVar name approach currently used for ergmProposal to other termalikes where possible.
|
Component: Documentation Language: Roxygen
|
This should be done in:
- [x] `ergm`
- [x] `ergm.count`
- [x] `ergm.rank`
- [x] `tergm`
- [x] `ergm.multi`
Some conflicts may arise in the `Sum()`/`sum()` terms.
|
1.0
|
Apply the @templateVar name approach currently used for ergmProposal to other termalikes where possible. - This should be done in:
- [x] `ergm`
- [x] `ergm.count`
- [x] `ergm.rank`
- [x] `tergm`
- [x] `ergm.multi`
Some conflicts may arise in the `Sum()`/`sum()` terms.
|
non_process
|
apply the templatevar name approach currently used for ergmproposal to other termalikes where possible this should be done in ergm ergm count ergm rank tergm ergm multi some conflicts may arise in the sum sum terms
| 0
|
440,249
| 30,737,367,451
|
IssuesEvent
|
2023-07-28 08:44:09
|
rafaelsetragni/awesome_notifications
|
https://api.github.com/repos/rafaelsetragni/awesome_notifications
|
closed
|
scheduling notifications on Android without USE_EXACT_ALARM permission
|
documentation enhancement closed by bot in discussion
|
Is it possible to schedule a notification at an inexact time so that you don't have to add either the android.permission.SCHEDULE_EXACT_ALARM or android.permission.USE_EXACT_ALARM permission to your AndroidManifest.xml file?
https://github.com/rafaelsetragni/awesome_notifications#-scheduling-a-notification discusses needing this permission for precise notification times, but I did not find details on how to schedule a notification for an imprecise time.
Currently, when I use AwesomeNotifications().createNotification specifying
`schedule: NotificationCalendar.fromDate(
preciseAlarm: false, date: DateTime(now.year, now.month, now.day, now.hour, now.minute+1))`
I get an error saying my project "needs to hold android.permission.SCHEDULE_EXACT_ALARM or android.permission.USE_EXACT_ALARM to set exact alarms."
|
1.0
|
scheduling notifications on Android without USE_EXACT_ALARM permission - Is it possible to schedule a notification at an inexact time so that you don't have to add either the android.permission.SCHEDULE_EXACT_ALARM or android.permission.USE_EXACT_ALARM permission to your AndroidManifest.xml file?
https://github.com/rafaelsetragni/awesome_notifications#-scheduling-a-notification discusses needing this permission for precise notification times, but I did not find details on how to schedule a notification for an imprecise time.
Currently, when I use AwesomeNotifications().createNotification specifying
`schedule: NotificationCalendar.fromDate(
preciseAlarm: false, date: DateTime(now.year, now.month, now.day, now.hour, now.minute+1))`
I get an error saying my project "needs to hold android.permission.SCHEDULE_EXACT_ALARM or android.permission.USE_EXACT_ALARM to set exact alarms."
|
non_process
|
scheduling notifications on android without use exact alarm permission is it possible to schedule a notification at an inexact time so that you don t have to add either the android permission schedule exact alarm or android permission use exact alarm permission to your androidmanifest xml file discusses needing this permission for precise notification times but i did not find details on how to schedule a notification for an imprecise time currently when i use awesomenotifications createnotification specifying schedule notificationcalendar fromdate precisealarm false date datetime now year now month now day now hour now minute i get an error saying my project needs to hold android permission schedule exact alarm or android permission use exact alarm to set exact alarms
| 0
|
7,642
| 3,105,938,561
|
IssuesEvent
|
2015-09-01 00:06:28
|
whoshuu/cpr
|
https://api.github.com/repos/whoshuu/cpr
|
opened
|
Implement versioning for github pages documentation
|
Documentation
|
This involves enforcing a new directory structure, which Jekyll is made aware of, for holding older `.md` files used to generate older versions of the documentation.
|
1.0
|
Implement versioning for github pages documentation - This involves enforcing a new directory structure, which Jekyll is made aware of, for holding older `.md` files used to generate older versions of the documentation.
|
non_process
|
implement versioning for github pages documentation this involves enforcing a new directory structure that jekyll is made aware of for holding older md s that would be used to generate older versions of the documentation
| 0
|
150,684
| 11,980,924,979
|
IssuesEvent
|
2020-04-07 10:10:27
|
knative/serving
|
https://api.github.com/repos/knative/serving
|
closed
|
e2e test with --https option does not pass
|
area/networking area/test-and-release kind/bug
|
/area networking
/area test-and-release
## What version of Knative?
v0.13.0-75-g7b1931a9b
## Expected Behavior
e2e test with the `--https` option should pass all e2e tests.
## Actual Behavior
Please refer to https://github.com/knative/serving/pull/7252. The current e2e test script is missing the `--https` option, so e2e does not run with it. (The test grid https://testgrid.knative.dev/serving#https is green, but it does not actually run with the https option.)
## Steps to Reproduce the Problem
For example, run following test.
```
$ go test -tags=e2e -count=1 -v ./test/e2e/... -run "TestWebSocketBlueGreenRoute" --https
```
It failed on local as well.
<details>
<summary> result of `TestWebSocketBlueGreenRoute` </summary>
```
$ go test -tags=e2e -count=1 -v ./test/e2e/... -run "TestWebSocketBlueGreenRoute" --https
2020/03/15 14:46:23 Using '1584251183441200619' to seed the random number generator
=== RUN TestWebSocketBlueGreenRoute
=== PAUSE TestWebSocketBlueGreenRoute
=== CONT TestWebSocketBlueGreenRoute
--- FAIL: TestWebSocketBlueGreenRoute (221.13s)
websocket_test.go:199: Creating a new Service in runLatest
service.go:129: Creating a new Service. service web-socket-blue-green-route-regmyfwa
crd.go:35: resource {<nil> <nil> <*>{&TypeMeta{Kind:,APIVersion:,} &ObjectMeta{Name:web-socket-blue-green-route-regmyfwa,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[]OwnerReference{},Finalizers:[],ClusterName:,ManagedFields:[]ManagedFieldsEntry{},} {0 <nil> <nil> <nil> <nil> {0 <nil> <nil> <*>&ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[]OwnerReference{},Finalizers:[],ClusterName:,ManagedFields:[]ManagedFieldsEntry{},}} {0 <nil>}} {{0 <nil>} {<nil> <nil> <nil>} { }}} <nil>}
service.go:144: Waiting for Service to transition to Ready. service web-socket-blue-green-route-regmyfwa
service.go:149: Checking to ensure Service Status is populated for Ready service
service.go:178: Getting latest objects Created by Service
service.go:181: Successfully created Service web-socket-blue-green-route-regmyfwa
websocket_test.go:213: Updating the Service to use a different suffix
crd.go:35: resource {<nil> <nil> <*>{&TypeMeta{Kind:,APIVersion:,} &ObjectMeta{Name:web-socket-blue-green-route-regmyfwa,GenerateName:,Namespace:serving-tests,SelfLink:/apis/serving.knative.dev/v1alpha1/namespaces/serving-tests/services/web-socket-blue-green-route-regmyfwa,UID:3ce1f529-d8de-49d7-a3e3-fa7ce865f646,ResourceVersion:21157,Generation:1,CreationTimestamp:2020-03-15 14:46:23 +0900 JST,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{serving.knative.dev/creator: admin,serving.knative.dev/lastModifier: admin,},OwnerReferences:[]OwnerReference{},Finalizers:[],ClusterName:,ManagedFields:[]ManagedFieldsEntry{},} {0 <nil> <nil> <nil> <nil> {0 <nil> <nil> <*>&ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[]OwnerReference{},Finalizers:[],ClusterName:,ManagedFields:[]ManagedFieldsEntry{},}} {0 [{ { <*>true <*>100 <nil>}}]}} {{1 [{ConfigurationsReady True {2020-03-15 14:46:31 +0900 JST} } {Ready True {2020-03-15 14:46:32 +0900 JST} } {RoutesReady True {2020-03-15 14:46:32 +0900 JST} }]} {<*>http://web-socket-blue-green-route-regmyfwa.serving-tests.13.228.195.154.xip.io <*>{{<*>http://web-socket-blue-green-route-regmyfwa.serving-tests.svc.cluster.local} } [{ { web-socket-blue-green-route-regmyfwa-f5fwl <*>true <*>100 <nil>}}]} {web-socket-blue-green-route-regmyfwa-f5fwl web-socket-blue-green-route-regmyfwa-f5fwl}}} <nil>}
websocket_test.go:224: Since the Service was updated a new Revision will be created and the Service will be updated
websocket_test.go:230: Updating RouteSpec
websocket_test.go:249: Wait for the service domains to be ready
websocket.go:64: Connecting using websocket: url=ws://afdd6c2d0184c4b31854b6904b61124b-13031614.ap-southeast-1.elb.amazonaws.com/, host=green-web-socket-blue-green-route-regmyfwa.serving-tests.13.228.195.154.xip.io
websocket.go:73: Connection failed: read tcp 10.208.89.76:60826->13.228.195.154:80: i/o timeout
websocket.go:64: Connecting using websocket: url=ws://afdd6c2d0184c4b31854b6904b61124b-13031614.ap-southeast-1.elb.amazonaws.com/, host=green-web-socket-blue-green-route-regmyfwa.serving-tests.13.228.195.154.xip.io
websocket.go:73: Connection failed: read tcp 10.208.89.76:60832->13.228.195.154:80: i/o timeout
websocket.go:64: Connecting using websocket: url=ws://afdd6c2d0184c4b31854b6904b61124b-13031614.ap-southeast-1.elb.amazonaws.com/, host=green-web-socket-blue-green-route-regmyfwa.serving-tests.13.228.195.154.xip.io
websocket.go:73: Connection failed: read tcp 10.208.89.76:44978->52.77.9.96:80: i/o timeout
websocket.go:64: Connecting using websocket: url=ws://afdd6c2d0184c4b31854b6904b61124b-13031614.ap-southeast-1.elb.amazonaws.com/, host=green-web-socket-blue-green-route-regmyfwa.serving-tests.13.228.195.154.xip.io
websocket.go:73: Connection failed: read tcp 10.208.89.76:60864->13.228.195.154:80: i/o timeout
websocket_test.go:275: Error initializing WS connection: timed out waiting for the condition
FAIL
FAIL knative.dev/serving/test/e2e 221.147s
2020/03/15 14:46:23 Using '1584251183197022122' to seed the random number generator
testing: warning: no tests to run
PASS
ok knative.dev/serving/test/e2e/autotls 0.021s [no tests to run]
? knative.dev/serving/test/e2e/autotls/config [no test files]
? knative.dev/serving/test/e2e/autotls/config/disablenscert [no test files]
? knative.dev/serving/test/e2e/autotls/config/dnscleanup [no test files]
? knative.dev/serving/test/e2e/autotls/config/dnssetup [no test files]
2020/03/15 14:46:23 Using '1584251183444471891' to seed the random number generator
testing: warning: no tests to run
PASS
ok knative.dev/serving/test/e2e/istio 0.015s [no tests to run]
FAIL
```
</details>
|
1.0
|
e2e test with --https option does not pass - /area networking
/area test-and-release
## What version of Knative?
v0.13.0-75-g7b1931a9b
## Expected Behavior
e2e test with the `--https` option should pass all e2e tests.
## Actual Behavior
Please refer to https://github.com/knative/serving/pull/7252. The current e2e test script is missing the `--https` option, so e2e does not run with it. (The test grid https://testgrid.knative.dev/serving#https is green, but it does not actually run with the https option.)
## Steps to Reproduce the Problem
For example, run following test.
```
$ go test -tags=e2e -count=1 -v ./test/e2e/... -run "TestWebSocketBlueGreenRoute" --https
```
It failed on local as well.
<details>
<summary> result of `TestWebSocketBlueGreenRoute` </summary>
```
$ go test -tags=e2e -count=1 -v ./test/e2e/... -run "TestWebSocketBlueGreenRoute" --https
2020/03/15 14:46:23 Using '1584251183441200619' to seed the random number generator
=== RUN TestWebSocketBlueGreenRoute
=== PAUSE TestWebSocketBlueGreenRoute
=== CONT TestWebSocketBlueGreenRoute
--- FAIL: TestWebSocketBlueGreenRoute (221.13s)
websocket_test.go:199: Creating a new Service in runLatest
service.go:129: Creating a new Service. service web-socket-blue-green-route-regmyfwa
crd.go:35: resource {<nil> <nil> <*>{&TypeMeta{Kind:,APIVersion:,} &ObjectMeta{Name:web-socket-blue-green-route-regmyfwa,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[]OwnerReference{},Finalizers:[],ClusterName:,ManagedFields:[]ManagedFieldsEntry{},} {0 <nil> <nil> <nil> <nil> {0 <nil> <nil> <*>&ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[]OwnerReference{},Finalizers:[],ClusterName:,ManagedFields:[]ManagedFieldsEntry{},}} {0 <nil>}} {{0 <nil>} {<nil> <nil> <nil>} { }}} <nil>}
service.go:144: Waiting for Service to transition to Ready. service web-socket-blue-green-route-regmyfwa
service.go:149: Checking to ensure Service Status is populated for Ready service
service.go:178: Getting latest objects Created by Service
service.go:181: Successfully created Service web-socket-blue-green-route-regmyfwa
websocket_test.go:213: Updating the Service to use a different suffix
crd.go:35: resource {<nil> <nil> <*>{&TypeMeta{Kind:,APIVersion:,} &ObjectMeta{Name:web-socket-blue-green-route-regmyfwa,GenerateName:,Namespace:serving-tests,SelfLink:/apis/serving.knative.dev/v1alpha1/namespaces/serving-tests/services/web-socket-blue-green-route-regmyfwa,UID:3ce1f529-d8de-49d7-a3e3-fa7ce865f646,ResourceVersion:21157,Generation:1,CreationTimestamp:2020-03-15 14:46:23 +0900 JST,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{serving.knative.dev/creator: admin,serving.knative.dev/lastModifier: admin,},OwnerReferences:[]OwnerReference{},Finalizers:[],ClusterName:,ManagedFields:[]ManagedFieldsEntry{},} {0 <nil> <nil> <nil> <nil> {0 <nil> <nil> <*>&ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[]OwnerReference{},Finalizers:[],ClusterName:,ManagedFields:[]ManagedFieldsEntry{},}} {0 [{ { <*>true <*>100 <nil>}}]}} {{1 [{ConfigurationsReady True {2020-03-15 14:46:31 +0900 JST} } {Ready True {2020-03-15 14:46:32 +0900 JST} } {RoutesReady True {2020-03-15 14:46:32 +0900 JST} }]} {<*>http://web-socket-blue-green-route-regmyfwa.serving-tests.13.228.195.154.xip.io <*>{{<*>http://web-socket-blue-green-route-regmyfwa.serving-tests.svc.cluster.local} } [{ { web-socket-blue-green-route-regmyfwa-f5fwl <*>true <*>100 <nil>}}]} {web-socket-blue-green-route-regmyfwa-f5fwl web-socket-blue-green-route-regmyfwa-f5fwl}}} <nil>}
websocket_test.go:224: Since the Service was updated a new Revision will be created and the Service will be updated
websocket_test.go:230: Updating RouteSpec
websocket_test.go:249: Wait for the service domains to be ready
websocket.go:64: Connecting using websocket: url=ws://afdd6c2d0184c4b31854b6904b61124b-13031614.ap-southeast-1.elb.amazonaws.com/, host=green-web-socket-blue-green-route-regmyfwa.serving-tests.13.228.195.154.xip.io
websocket.go:73: Connection failed: read tcp 10.208.89.76:60826->13.228.195.154:80: i/o timeout
websocket.go:64: Connecting using websocket: url=ws://afdd6c2d0184c4b31854b6904b61124b-13031614.ap-southeast-1.elb.amazonaws.com/, host=green-web-socket-blue-green-route-regmyfwa.serving-tests.13.228.195.154.xip.io
websocket.go:73: Connection failed: read tcp 10.208.89.76:60832->13.228.195.154:80: i/o timeout
websocket.go:64: Connecting using websocket: url=ws://afdd6c2d0184c4b31854b6904b61124b-13031614.ap-southeast-1.elb.amazonaws.com/, host=green-web-socket-blue-green-route-regmyfwa.serving-tests.13.228.195.154.xip.io
websocket.go:73: Connection failed: read tcp 10.208.89.76:44978->52.77.9.96:80: i/o timeout
websocket.go:64: Connecting using websocket: url=ws://afdd6c2d0184c4b31854b6904b61124b-13031614.ap-southeast-1.elb.amazonaws.com/, host=green-web-socket-blue-green-route-regmyfwa.serving-tests.13.228.195.154.xip.io
websocket.go:73: Connection failed: read tcp 10.208.89.76:60864->13.228.195.154:80: i/o timeout
websocket_test.go:275: Error initializing WS connection: timed out waiting for the condition
FAIL
FAIL knative.dev/serving/test/e2e 221.147s
2020/03/15 14:46:23 Using '1584251183197022122' to seed the random number generator
testing: warning: no tests to run
PASS
ok knative.dev/serving/test/e2e/autotls 0.021s [no tests to run]
? knative.dev/serving/test/e2e/autotls/config [no test files]
? knative.dev/serving/test/e2e/autotls/config/disablenscert [no test files]
? knative.dev/serving/test/e2e/autotls/config/dnscleanup [no test files]
? knative.dev/serving/test/e2e/autotls/config/dnssetup [no test files]
2020/03/15 14:46:23 Using '1584251183444471891' to seed the random number generator
testing: warning: no tests to run
PASS
ok knative.dev/serving/test/e2e/istio 0.015s [no tests to run]
FAIL
```
</details>
|
non_process
|
test with https option does not pass area networking area test and release what version of knative expected behavior test with https option should pass all test actual behavior please refer to current test s script missed https option so does not run with the option the test grid is green but it does run with https option steps to reproduce the problem for example run following test go test tags count v test run testwebsocketbluegreenroute https it failed on local as well result of testwebsocketbluegreenroute go test tags count v test run testwebsocketbluegreenroute https using to seed the random number generator run testwebsocketbluegreenroute pause testwebsocketbluegreenroute cont testwebsocketbluegreenroute fail testwebsocketbluegreenroute websocket test go creating a new service in runlatest service go creating a new service service web socket blue green route regmyfwa crd go resource typemeta kind apiversion objectmeta name web socket blue green route regmyfwa generatename namespace selflink uid resourceversion generation creationtimestamp utc deletiontimestamp deletiongraceperiodseconds nil labels map string annotations map string ownerreferences ownerreference finalizers clustername managedfields managedfieldsentry objectmeta name generatename namespace selflink uid resourceversion generation creationtimestamp utc deletiontimestamp deletiongraceperiodseconds nil labels map string annotations map string ownerreferences ownerreference finalizers clustername managedfields managedfieldsentry service go waiting for service to transition to ready service web socket blue green route regmyfwa service go checking to ensure service status is populated for ready service service go getting latest objects created by service service go successfully created service web socket blue green route regmyfwa websocket test go updating the service to use a different suffix crd go resource typemeta kind apiversion objectmeta name web socket blue green route regmyfwa generatename namespace serving tests selflink apis serving knative dev namespaces serving tests services web socket blue green route regmyfwa uid resourceversion generation creationtimestamp jst deletiontimestamp deletiongraceperiodseconds nil labels map string annotations map string serving knative dev creator admin serving knative dev lastmodifier admin ownerreferences ownerreference finalizers clustername managedfields managedfieldsentry objectmeta name generatename namespace selflink uid resourceversion generation creationtimestamp utc deletiontimestamp deletiongraceperiodseconds nil labels map string annotations map string ownerreferences ownerreference finalizers clustername managedfields managedfieldsentry web socket blue green route regmyfwa web socket blue green route regmyfwa websocket test go since the service was updated a new revision will be created and the service will be updated websocket test go updating routespec websocket test go wait for the service domains to be ready websocket go connecting using websocket url ws ap southeast elb amazonaws com host green web socket blue green route regmyfwa serving tests xip io websocket go connection failed read tcp i o timeout websocket go connecting using websocket url ws ap southeast elb amazonaws com host green web socket blue green route regmyfwa serving tests xip io websocket go connection failed read tcp i o timeout websocket go connecting using websocket url ws ap southeast elb amazonaws com host green web socket blue green route regmyfwa serving tests xip io websocket go 
connection failed read tcp i o timeout websocket go connecting using websocket url ws ap southeast elb amazonaws com host green web socket blue green route regmyfwa serving tests xip io websocket go connection failed read tcp i o timeout websocket test go error initializing ws connection timed out waiting for the condition fail fail knative dev serving test using to seed the random number generator testing warning no tests to run pass ok knative dev serving test autotls knative dev serving test autotls config knative dev serving test autotls config disablenscert knative dev serving test autotls config dnscleanup knative dev serving test autotls config dnssetup using to seed the random number generator testing warning no tests to run pass ok knative dev serving test istio fail
| 0
|
6,814
| 9,957,545,608
|
IssuesEvent
|
2019-07-05 17:15:49
|
cropmapteam/Scotland-crop-map
|
https://api.github.com/repos/cropmapteam/Scotland-crop-map
|
closed
|
Plan analytical pipeline & packages
|
Machine Learning dataset process
|
Plan the analytical pipeline in order to decide which steps can be done in which package (R/Python/QGIS) and ensure compatibility
|
1.0
|
Plan analytical pipeline & packages - Plan the analytical pipeline in order to decide which steps can be done in which package (R/Python/QGIS) and ensure compatibility
|
process
|
plan analytical pipeline packages plan the analytical pipeline in order to decide which steps can be done in which package r python qgis and ensure compatibility
| 1
|
8,910
| 12,014,986,874
|
IssuesEvent
|
2020-04-10 12:56:39
|
bisq-network/bisq
|
https://api.github.com/repos/bisq-network/bisq
|
closed
|
Bisq creates an invalid timelock payout transaction in certain circumstances
|
a:bug in:trade-process is:critical
|
### Description
Bisq creates an invalid timelock payout transaction in certain circumstances
#### Version
v1.2.4
### Steps to reproduce
Still investigating.
### Expected behaviour
Payout TX is valid and can be broadcast to the Bitcoin network anytime after the lock_time.
### Actual behaviour
The payout TX is rejected by the Bitcoin network with a "bad-txns-in-belowout" error, which means the sum of the outputs is greater than the sum of the inputs.
#### Device or machine
Linux
#### Additional info
A user requested support for a closed trade which should have been donated to the Bisq donation address, but was never donated. After manually dumping the payout TX, I found it was invalid because, as shown below, the outputs exceed the inputs. Perhaps an input is missing from the calculation.
```
"inputs": [
{
...
"output_value": 7753800,
...
}
],
"outputs": [
{
"addresses": [
"3EtUWqsGThPtjwUczw27YCo6EWvQdaPUyp"
],
...
"value": 8750000
}
],
```
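For clarity, the consensus rule being violated can be expressed as a simple invariant. This is an illustrative TypeScript sketch with made-up types (real Bisq code works with bitcoinj transactions), using the values from the dump above:
```ts
// Hypothetical shapes for illustration only; amounts are in satoshis.
interface TxInput { outputValue: number }
interface TxOutput { value: number }

// "bad-txns-in-belowout": the sum of the outputs must not exceed the sum of
// the inputs being spent (the difference, if any, is the miner fee).
function isPayoutBalanced(inputs: TxInput[], outputs: TxOutput[]): boolean {
  const inSum = inputs.reduce((sum, input) => sum + input.outputValue, 0);
  const outSum = outputs.reduce((sum, output) => sum + output.value, 0);
  return outSum <= inSum;
}

// The transaction from this report: 8,750,000 out vs 7,753,800 in -> invalid.
console.log(
  isPayoutBalanced([{ outputValue: 7_753_800 }], [{ value: 8_750_000 }]),
); // false
```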
|
1.0
|
Bisq creates an invalid timelock payout transaction in certain circumstances - ### Description
Bisq creates an invalid timelock payout transaction in certain circumstances
#### Version
v1.2.4
### Steps to reproduce
Still investigating.
### Expected behaviour
Payout TX is valid and can be broadcast to the Bitcoin network anytime after the lock_time.
### Actual behaviour
The payout TX is rejected by the Bitcoin network with a "bad-txns-in-belowout" error, which means the sum of the outputs is greater than the sum of the inputs.
#### Device or machine
Linux
#### Additional info
A user requested support for a closed trade which should have been donated to the Bisq donation address, but was never donated. After manually dumping the payout TX, I found it was invalid because, as shown below, the outputs exceed the inputs. Perhaps an input is missing from the calculation.
```
"inputs": [
{
...
"output_value": 7753800,
...
}
],
"outputs": [
{
"addresses": [
"3EtUWqsGThPtjwUczw27YCo6EWvQdaPUyp"
],
...
"value": 8750000
}
],
```
|
process
|
bisq creates an invalid timelock payout transaction in certain circumstances description bisq creates an invalid timelock payout transaction in certain circumstances version steps to reproduce still investigating expected behaviour payout tx is valid and can be broadcast to the bitcoin network anytime after the lock time actual behaviour payout tx is rejected by the bitcoin network for bad txns in belowout error which means the sum of the outputs is greater than the sum of the inputs device or machine linux additional info a user requested support for a closed trade which should have been donated to the bisq donation address but was never donated after manually dumping the payout tx i found it was invalid due to the following outputs inputs perhaps an input is missing from the calculation inputs output value outputs addresses value
| 1
|
84,608
| 7,928,729,769
|
IssuesEvent
|
2018-07-06 12:48:34
|
ArkEcosystem/core
|
https://api.github.com/repos/ArkEcosystem/core
|
opened
|
Unify tests
|
development tests
|
Currently we don't have a unified way to test all the packages: some of them are using `core-test-utils` or have received more love than others.
- [ ] Unify tests (share more tools between packages and use a standard way of mocking, expecting, etc.)
|
1.0
|
Unify tests - Currently we don't have a unified way to test all the packages: some of them are using `core-test-utils` or have received more love than others.
- [ ] Unify tests (share more tools between packages and use a standard way of mocking, expecting, etc.)
|
non_process
|
unify tests currently we don t have a unified way to test all the packages some of them are using core test utils or have received more love than others unify tests share more tools between packages and use a standard way of mocking expecting etc
| 0
|
879
| 3,343,481,877
|
IssuesEvent
|
2015-11-15 14:57:56
|
luc-github/Repetier-Firmware-0.92
|
https://api.github.com/repos/luc-github/Repetier-Firmware-0.92
|
opened
|
Sync 0.92.6 version
|
enhancement Waiting to be processed
|
0.92.6 is ready and now it is time to sync (starting this 16th of Nov); I hope to get it done quickly.
I have created a branch that will use travis-ci to compile all models of DAVINCI using Arduino 1.6.5 and the DUE module 1.6.4 to have a global view - the first commits will likely fail, but it will help to track the sync.
FYI, the testing/adjustment for Arduino 1.6.6 and DUE 1.6.6 will come after - no need to ask in advance, as the first target is to make 0.92.6 work in an environment which is known to be working = IDE 1.6.5/DUE 1.6.4.
Several bug fixes (better native USB support, autolevel, etc.) and enhancements (watchdog, multi-language at runtime, better support for the jam sensor).
Will try to fully integrate the "wizard mode" and ensure it works with the Davinci; before, it only worked with an encoder and overlapped already-existing Davinci features like load/unload and out-of-filament sensor detection.
|
1.0
|
Sync 0.92.6 version - 0.92.6 is ready and now it is time to sync (starting this 16th of Nov); I hope to get it done quickly.
I have created a branch that will use travis-ci to compile all models of DAVINCI using Arduino 1.6.5 and the DUE module 1.6.4 to have a global view - the first commits will likely fail, but it will help to track the sync.
FYI, the testing/adjustment for Arduino 1.6.6 and DUE 1.6.6 will come after - no need to ask in advance, as the first target is to make 0.92.6 work in an environment which is known to be working = IDE 1.6.5/DUE 1.6.4.
Several bug fixes (better native USB support, autolevel, etc.) and enhancements (watchdog, multi-language at runtime, better support for the jam sensor).
Will try to fully integrate the "wizard mode" and ensure it works with the Davinci; before, it only worked with an encoder and overlapped already-existing Davinci features like load/unload and out-of-filament sensor detection.
|
process
|
sync version is ready and now it is time to sync starting this of nov hope to done it quickly i have created a branch that will use travis ci to compile all models of davinci using arduino and due module to have a global view first commits will likely be failed but it will help to track the sync fyi the test adjustment for arduino and due will come after no need to ask in advance as first target is to make working in an environment which is known to be working ide due several bug fixes better native usb support autolevel etc and enhancement watchdog multi language at runtime better support for jam sensor will try to integrate fully the wizard mode and ensure it will work with davinci as before it was only working with encoder and overlaped davinci already existing feature like load unload and out of filament sensor detection
| 1
|
4,584
| 7,428,382,049
|
IssuesEvent
|
2018-03-24 00:56:33
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Document TTL?
|
cxp in-process product-question search triaged
|
Is it possible to define a TTL for documents in an index so that old data is automatically purged out of the index? I am trying to limit the data indexed in an index by age; is there any other possible approach if my data is ever increasing and I just need to keep an index of only the last week's or month's data?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d60271e5-861b-f889-62c1-b6c1bfe4cec7
* Version Independent ID: 83488bc0-05f4-393a-005f-8c484c8de688
* Content: [Data import in Azure Search](https://docs.microsoft.com/en-us/azure/search/search-what-is-data-import)
* Content Source: [articles/search/search-what-is-data-import.md](https://github.com/Microsoft/azure-docs/blob/master/articles/search/search-what-is-data-import.md)
* Service: **search**
* GitHub Login: @ashmaka
* Microsoft Alias: **ashmaka**
|
1.0
|
Document TTL? - Is it possible to define a TTL for documents in an index so that old data is automatically purged out of the index? I am trying to limit the data indexed in an index by age; is there any other possible approach if my data is ever increasing and I just need to keep an index of only the last week's or month's data?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d60271e5-861b-f889-62c1-b6c1bfe4cec7
* Version Independent ID: 83488bc0-05f4-393a-005f-8c484c8de688
* Content: [Data import in Azure Search](https://docs.microsoft.com/en-us/azure/search/search-what-is-data-import)
* Content Source: [articles/search/search-what-is-data-import.md](https://github.com/Microsoft/azure-docs/blob/master/articles/search/search-what-is-data-import.md)
* Service: **search**
* GitHub Login: @ashmaka
* Microsoft Alias: **ashmaka**
|
process
|
document ttl is it possible to define ttl for documents in an index so that old data is automatically purged out index i am trying to limit the data which is indexed in an index by age is there any other approach possible if my data is ever increasing and i just need to keep index of last week s or month s data only document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service search github login ashmaka microsoft alias ashmaka
| 1
|
18,814
| 24,713,686,293
|
IssuesEvent
|
2022-10-20 04:26:22
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
reopened
|
Check Validity tool does not free memory after use
|
Feedback stale Processing Bug
|
### What is the bug or the crash?
The "Check validity" tool does not free the memory it temporarily uses.
The memory stays in use even if a new project is loaded.
### Steps to reproduce the issue
1. Load a reasonably big vector file, ~50MB should make a good test case
1. Open the "Check validity" tool
1. Set all outputs to "Skip Output"
1. Choose the QGIS algorithm
1. Check the memory usage
1. Run the tool
1. Check the memory usage
1. Close the tool
1. Check the memory usage
You will notice that the memory usage has grown after running the tool and does not decrease after it is closed. In my tests it grew by ~50MB per run, and each run increased it further.
1. Close the project altogether
1. Check the memory usage
The memory usage will not have decreased.
### Versions
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /></head><body>
QGIS version | 3.27.0-Master | QGIS code revision | 2e622be6f9c
-- | -- | -- | --
Qt version | 5.15.5
Python version | 3.10.5
Compiled against GDAL/OGR | 3.6.0dev-030ff40cf8-dirty | Running against GDAL/OGR | 3.6.0dev-f033e1fa1e-dirty
PROJ version | 9.0.1
EPSG Registry database version | v10.064 (2022-05-19)
GEOS version | 3.11.0-CAPI-1.17.0
SQLite version | 3.39.1
PDAL version | 2.4.0
PostgreSQL client version | unknown
SpatiaLite version | 5.0.1
QWT version | 6.2.0
QScintilla2 version | 2.13.3
OS version | Arch Linux
</body></html>
and also
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /></head><body>
QGIS version | 3.26.1-Buenos Aires | QGIS code branch | Release 3.26
-- | -- | -- | --
Qt version | 5.15.5
Python version | 3.10.5
Compiled against GDAL/OGR | 3.5.1 | Running against GDAL/OGR | 3.6.0dev-f033e1fa1e-dirty
PROJ version | 9.0.1
EPSG Registry database version | v10.064 (2022-05-19)
GEOS version | 3.11.0-CAPI-1.17.0
SQLite version | 3.39.1
Compiled against PDAL | 2.4.2 | Running against PDAL | 2.4.0
PostgreSQL client version | unknown
SpatiaLite version | 5.0.1
QWT version | 6.2.0
QScintilla2 version | 2.13.3
OS version | Arch Linux
</body></html>
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
You can do the same with the GEOS algorithm. This seems to increase memory usage by about 1MB per run in my tests; considerably less, but it still does not seem to get freed. I did not look at this closely; it might be a red herring, and only the QGIS algorithm might be affected.
|
1.0
|
Check Validity tool does not free memory after use - ### What is the bug or the crash?
The "Check validity" tool does not free the memory it temporarily uses.
The memory stays in use even if a new project is loaded.
### Steps to reproduce the issue
1. Load a reasonably big vector file, ~50MB should make a good test case
1. Open the "Check validity" tool
1. Set all outputs to "Skip Output"
1. Choose the QGIS algorithm
1. Check the memory usage
1. Run the tool
1. Check the memory usage
1. Close the tool
1. Check the memory usage
You will notice that the memory usage has grown after running the tool and does not decrease after it is closed. In my tests it grew by ~50MB per run, and each run increased it further.
1. Close the project altogether
1. Check the memory usage
The memory usage will not have decreased.
### Versions
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /></head><body>
QGIS version | 3.27.0-Master | QGIS code revision | 2e622be6f9c
-- | -- | -- | --
Qt version | 5.15.5
Python version | 3.10.5
Compiled against GDAL/OGR | 3.6.0dev-030ff40cf8-dirty | Running against GDAL/OGR | 3.6.0dev-f033e1fa1e-dirty
PROJ version | 9.0.1
EPSG Registry database version | v10.064 (2022-05-19)
GEOS version | 3.11.0-CAPI-1.17.0
SQLite version | 3.39.1
PDAL version | 2.4.0
PostgreSQL client version | unknown
SpatiaLite version | 5.0.1
QWT version | 6.2.0
QScintilla2 version | 2.13.3
OS version | Arch Linux
</body></html>
and also
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /></head><body>
QGIS version | 3.26.1-Buenos Aires | QGIS code branch | Release 3.26
-- | -- | -- | --
Qt version | 5.15.5
Python version | 3.10.5
Compiled against GDAL/OGR | 3.5.1 | Running against GDAL/OGR | 3.6.0dev-f033e1fa1e-dirty
PROJ version | 9.0.1
EPSG Registry database version | v10.064 (2022-05-19)
GEOS version | 3.11.0-CAPI-1.17.0
SQLite version | 3.39.1
Compiled against PDAL | 2.4.2 | Running against PDAL | 2.4.0
PostgreSQL client version | unknown
SpatiaLite version | 5.0.1
QWT version | 6.2.0
QScintilla2 version | 2.13.3
OS version | Arch Linux
</body></html>
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
You can do the same with the GEOS algorithm. This seems to increase memory usage by about 1MB per run in my tests; considerably less, but it still does not seem to get freed. I did not look at this closely; it might be a red herring, and only the QGIS algorithm might be affected.
|
process
|
check validity tool does not free memory after use what is the bug or the crash the check validity tool does not free the memory it temporarily uses it stays in use even if a new project is loaded steps to reproduce the issue load a reasonably big vector file should make a good test case open the check validity tool set all outputs to skip output choose the qgis algorithm check the memory usage run the tool check the memory usage close the tool check the memory usage you will notice the memory usage having grown after running the tool and not decreasing after it is closed in my tests it grew per run each run increased it close the project alltogether check the memory usage the memory usage will not have decreased versions doctype html public dtd html en qgis version master qgis code revision qt version python version compiled against gdal ogr dirty running against gdal ogr dirty proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version unknown spatialite version qwt version version os version arch linux and also doctype html public dtd html en qgis version buenos aires qgis code branch release qt version python version compiled against gdal ogr running against gdal ogr dirty proj version epsg registry database version geos version capi sqlite version compiled against pdal running against pdal postgresql client version unknown spatialite version qwt version version os version arch linux supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context you can do the same with the geos algorithm this seems to increase memory usage by about per run in my tests considerably less but it still does not seem to get freed i did not look at this well it might be a red herring and only the qgis algorithm might be affected
| 1
|
5,979
| 8,797,338,109
|
IssuesEvent
|
2018-12-23 18:21:15
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Different behavior of exit listener of spawned process
|
child_process question
|
* **Version**: v8.2.0 and newer
* **Platform**: Linux 4.2.0-27-generic #32~14.04.1-Ubuntu x86_64 GNU/Linux
* **Subsystem**: child_process
A process started by the `child_process` module can start its own child process. For example, `tar` can start a child process when we use it in a specific directory. And if we choose a nonexistent directory as the archive destination, it can fail in different ways:
```
$ tar -C /some/directory -cvzf /path/to/archive.tgz fileToArchive
tar (child): /path/to/archive.tgz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
example.js
$ echo $?
141
# and sometimes (1 from 10 or more runs) it can receive exit code 2 and finish correctly
$ tar -C /some/directory -cvzf /path/to/archive.tgz fileToArchive
fileToArchive
tar (child): /path/to/archive.tgz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
$ echo $?
2
```
The problem here is that on NodeJS <8.2.0 it always exits with code 2 because the child process emits this exit code.
For example this script `example.js`:
```js
const spawn = require('child_process').spawn;
const Steppy = require('twostep').Steppy;
const pathUtils = require('path');
const assert = require('assert');
const exec = function(cmd, args) {
return new Promise((resolve, reject) => {
const cmdSpawn = spawn(
cmd,
args,
{stdio: ['pipe', 'pipe', 'pipe']}
);
let stdoutData = '';
cmdSpawn.stdout.on('data', (data) => {
stdoutData += data;
});
let stderrData = '';
cmdSpawn.stderr.on('data', (data) => {
stderrData += data;
});
cmdSpawn.on('exit', (exitCode, signal) => {
console.log(exitCode, signal)
resolve({
stdoutData,
stderrData,
exitCode
});
});
cmdSpawn.on('error', (err) => {
reject(new Error(err));
});
});
};
exec('tar', [
'-C',
__dirname,
'-cvzf',
pathUtils.join(__dirname, '/unexisted/path/archive.tgz'),
'example.js'
])
.then((result) => {
console.log(result);
assert.equal(result.exitCode, 2);
})
.catch((err) => {
console.error(err.stack || err);
});
```
will always receive exit code 2 on NodeJS 8.1.4, but produces this result on 8.2.0:
```
$ node example.js
null 'SIGPIPE'
{ stdoutData: 'example.js\n',
stderrData: 'tar (child): /home/robingood/work/gitlab/parking/price-calculator/scripts/unexisted/path/archive.tgz: Cannot open: No such file or directory\ntar (child): Error is not recoverable: exiting now\n',
exitCode: null }
AssertionError [ERR_ASSERTION]: null == 2
at exec.then (/home/robingood/work/gitlab/parking/price-calculator/scripts/example.js:50:10)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:189:7)
```
Why does this happen? Is it correct behavior? What is the right workaround for this case: handling the signal in the `exit` listener?
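For reference, here is a minimal sketch of the workaround hinted at in the last question: treating the `signal` argument explicitly whenever `exitCode` is `null`. This is one possible approach, not an official recommendation:
```ts
import { spawn } from 'child_process';

// Reusing the failing command from above; the path is intentionally invalid.
const child = spawn('tar', ['-cvzf', '/unexisted/path/archive.tgz', 'example.js']);

child.on('exit', (exitCode, signal) => {
  if (signal !== null) {
    // On >=8.2.0 the child may be reported as signal-terminated (e.g. SIGPIPE),
    // in which case exitCode is null.
    console.error(`child terminated by signal ${signal}`);
  } else {
    console.log(`child exited with code ${exitCode}`);
  }
});
```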
|
1.0
|
Different behavior of exit listener of spawned process - * **Version**: v8.2.0 and newer
* **Platform**: Linux 4.2.0-27-generic #32~14.04.1-Ubuntu x86_64 GNU/Linux
* **Subsystem**: child_process
A process started by the `child_process` module can start its own child process. For example, `tar` can start a child process when we use it in a specific directory. And if we choose a nonexistent directory as the archive destination, it can fail in different ways:
```
$ tar -C /some/directory -cvzf /path/to/archive.tgz fileToArchive
tar (child): /path/to/archive.tgz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
example.js
$ echo $?
141
# and sometimes (1 from 10 or more runs) it can receive exit code 2 and finish correctly
$ tar -C /some/directory -cvzf /path/to/archive.tgz fileToArchive
fileToArchive
tar (child): /path/to/archive.tgz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
$ echo $?
2
```
The problem here is that on NodeJS <8.2.0 it always exits with code 2 because the child process emits this exit code.
For example this script `example.js`:
```js
const spawn = require('child_process').spawn;
const Steppy = require('twostep').Steppy;
const pathUtils = require('path');
const assert = require('assert');
const exec = function(cmd, args) {
return new Promise((resolve, reject) => {
const cmdSpawn = spawn(
cmd,
args,
{stdio: ['pipe', 'pipe', 'pipe']}
);
let stdoutData = '';
cmdSpawn.stdout.on('data', (data) => {
stdoutData += data;
});
let stderrData = '';
cmdSpawn.stderr.on('data', (data) => {
stderrData += data;
});
cmdSpawn.on('exit', (exitCode, signal) => {
console.log(exitCode, signal)
resolve({
stdoutData,
stderrData,
exitCode
});
});
cmdSpawn.on('error', (err) => {
reject(new Error(err));
});
});
};
exec('tar', [
'-C',
__dirname,
'-cvzf',
pathUtils.join(__dirname, '/unexisted/path/archive.tgz'),
'example.js'
])
.then((result) => {
console.log(result);
assert.equal(result.exitCode, 2);
})
.catch((err) => {
console.error(err.stack || err);
});
```
will always receive exit code 2 on NodeJS 8.1.4, but produces this result on 8.2.0:
```
$ node example.js
null 'SIGPIPE'
{ stdoutData: 'example.js\n',
stderrData: 'tar (child): /home/robingood/work/gitlab/parking/price-calculator/scripts/unexisted/path/archive.tgz: Cannot open: No such file or directory\ntar (child): Error is not recoverable: exiting now\n',
exitCode: null }
AssertionError [ERR_ASSERTION]: null == 2
at exec.then (/home/robingood/work/gitlab/parking/price-calculator/scripts/example.js:50:10)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:189:7)
```
Why does this happen? Is it correct behavior? What is the right workaround for this case: handling the signal in the `exit` listener?
|
process
|
different behavior of exit listener of spawned process version and newer platform linux generic ubuntu gnu linux subsystem child process process started by child process module can start its own child process for example tar can start child process if we use it in concrete directory and if we choose unexisting directory for archive destination it can fail in different way tar c some directory cvzf path to archive tgz filetoarchive tar child path to archive tgz cannot open no such file or directory tar child error is not recoverable exiting now example js echo and sometimes from or more runs it can receive exit code and finish correctly tar c some directory cvzf path to archive tgz filetoarchive filetoarchive tar child path to archive tgz cannot open no such file or directory tar child error is not recoverable exiting now tar child returned status tar error is not recoverable exiting now echo the problem here is that on nodejs it s always exit with code because child process emit this exit code for example this script example js js const spawn require child process spawn const steppy require twostep steppy const pathutils require path const assert require assert const exec function cmd args return new promise resolve reject cmdspawn spawn cmd args stdio let stdoutdata cmdspawn stdout on data data stdoutdata data let stderrdata cmdspawn stderr on data data stderrdata data cmdspawn on exit exitcode signal console log exitcode signal resolve stdoutdata stderrdata exitcode cmdspawn on error err promise reject new error err exec tar c dirname cvzf pathutils join dirname unexisted path archive tgz example js then result console log result assert equal result exitcode catch err console error err stack err will receive exit code always on nodejs and this result on node example js null sigpipe stdoutdata example js n stderrdata tar child home robingood work gitlab parking price calculator scripts unexisted path archive tgz cannot open no such file or directory ntar child error is not recoverable exiting now n exitcode null assertionerror null at exec then home robingood work gitlab parking price calculator scripts example js at at process tickcallback internal process next tick js why it happens is it correct behavior what is right workaround for this case handle signal in exit listener
| 1
|
16,362
| 21,047,547,732
|
IssuesEvent
|
2022-03-31 17:28:08
|
pycaret/pycaret
|
https://api.github.com/repos/pycaret/pycaret
|
closed
|
[BUG] isocalendar().week does not work
|
bug installation preprocessing
|
**Describe the bug**
FutureWarning: Series.dt.weekofyear and Series.dt.week have been deprecated. Please use Series.dt.isocalendar().week instead.
**To Reproduce**
My code works --> dados['semana'] = dados['Data'].dt.week
Error after change --> dados['semana'] = dados['Data'].dt.isocalendar().week
**Expected behavior**
No errors
**Additional context**
I have checked; the error appears after s = setup(....)
Variables are typed, but the changed variable has an empty type.
After pressing ENTER, it shows the error:
AttributeError: 'float' object has no attribute 'shape'
**Versions**
<details>
<!--
Please run the following code snippet and paste the output here:
import pycaret
pycaret.__version__
-->
3.00
</details>
<!-- Thanks for contributing! -->
|
1.0
|
[BUG] isocalendar().week does not work - **Describe the bug**
FutureWarning: Series.dt.weekofyear and Series.dt.week have been deprecated. Please use Series.dt.isocalendar().week instead.
**To Reproduce**
My code works --> dados['semana'] = dados['Data'].dt.week
Error after change --> dados['semana'] = dados['Data'].dt.isocalendar().week
**Expected behavior**
No errors
**Additional context**
I have checked; the error appears after s = setup(....)
Variables are typed, but the changed variable has an empty type.
After pressing ENTER, it shows the error:
AttributeError: 'float' object has no attribute 'shape'
**Versions**
<details>
<!--
Please run the following code snippet and paste the output here:
import pycaret
pycaret.__version__
-->
3.00
</details>
<!-- Thanks for contributing! -->
|
process
|
isocalendar week not works describe the bug futurewarning series dt weekofyear and series dt week have been deprecated please use series dt isocalendar week instead to reproduce my code works dados dados dt week error after change dados dados dt isocalendar week expected behavior no errors additional context i have checked error appear after s setup variables are typed but variable changed is empty type after type enter show error attributeerror float object has no attribute shape versions please run the following code snippet and paste the output here import pycaret pycaret version
| 1
|
19,401
| 25,542,834,390
|
IssuesEvent
|
2022-11-29 16:27:42
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
child_process.spawn() loses exit code when using powershell
|
child_process doc
|
### Version
v18.12.1
### Platform
Microsoft Windows NT 10.0.22621.0 x64
### Subsystem
_No response_
### What steps will reproduce the bug?
```javascript
const cp = require('child_process');
const proc = cp.spawn('node -e "process.exit(2)"', { shell: 'powershell' });
proc.on('close', (code, signal) => { console.log('exit code:', code); });
```
### How often does it reproduce? Is there a required condition?
_No response_
### What is the expected behavior?
exit code: 2
### What do you see instead?
exit code: 1
### Additional information
I know this is caused by `powershell -c` behavior and is hard to fix.
If it won't be fixed, it should be mentioned in the doc.
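Until then, a workaround sketch, assuming PowerShell's `$LASTEXITCODE` variable (which holds the native command's exit code) is available in the wrapped command:
```js
const cp = require('child_process');

// Exit the PowerShell wrapper with the native command's code, so the
// generic failure code 1 from `powershell -c` is not observed instead.
const proc = cp.spawn('node -e "process.exit(2)"; exit $LASTEXITCODE', {
  shell: 'powershell',
});
proc.on('close', (code, signal) => {
  console.log('exit code:', code); // expected: 2
});
```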
|
1.0
|
child_process.spawn() loses exit code when using powershell - ### Version
v18.12.1
### Platform
Microsoft Windows NT 10.0.22621.0 x64
### Subsystem
_No response_
### What steps will reproduce the bug?
```javascript
const cp = require('child_process');
const proc = cp.spawn('node -e "process.exit(2)"', { shell: 'powershell' });
proc.on('close', (code, signal) => { console.log('exit code:', code); });
```
### How often does it reproduce? Is there a required condition?
_No response_
### What is the expected behavior?
exit code: 2
### What do you see instead?
exit code: 1
### Additional information
I know this is caused by `powershell -c` behavior and is hard to fix.
If it won't be fixed, it should be mentioned in the doc.
|
process
|
child process spawn losses exit code when using powershell version platform microsoft windows nt subsystem no response what steps will reproduce the bug javascript const cp require child process const proc cp spawn node e process exit shell powershell proc on close code signal console log exit code code how often does it reproduce is there a required condition no response what is the expected behavior exit code what do you see instead exit code additional information i know this is caused by powershell c behavior and is hard to fix if it won t be fixed it should be mentioned in the doc
| 1
|
289,968
| 8,880,980,849
|
IssuesEvent
|
2019-01-14 08:41:53
|
techforbetter/myPickle
|
https://api.github.com/repos/techforbetter/myPickle
|
opened
|
Unapproved profiles should be hidden from Find Support
|
bug priority-2
|
Currently you can still see unapproved profiles on the Find Support page
|
1.0
|
Unapproved profiles should be hidden from Find Support - Currently you can still see unapproved profiles on the Find Support page
|
non_process
|
unapproved profiles should be hidden from find support currently you can still see unapproved profiles on the find support page
| 0
|
327,471
| 28,065,189,549
|
IssuesEvent
|
2023-03-29 14:58:42
|
web-platform-tests/interop
|
https://api.github.com/repos/web-platform-tests/interop
|
closed
|
[css-color-4] Big number serialization problem
|
test-change-proposal agenda+
|
### Test List
wpt.fyi/css/css-color/parsing/color-computed-color-function.html
Line:
test_computed_value("color", "color(display-p3 184 1.00001 2347329746587)", "color(display-p3 184 1.00001 2347329700000)", "[Display P3 color with component > 1 should not clamp]");
### Rationale
It turned out there is an issue with CSS number serialization being completely different from browser to browser, with none of them being spec compliant: https://drafts.csswg.org/cssom/#serializing-css-values.
So, for the sake of Interop 2023 progress, I propose we move this test case to a separate file like this:
https://chromium-review.googlesource.com/c/chromium/src/+/4303721.
And solve the problem of serialization separately.
|
1.0
|
[css-color-4] Big number serialization problem - ### Test List
wpt.fyi/css/css-color/parsing/color-computed-color-function.html
Line:
test_computed_value("color", "color(display-p3 184 1.00001 2347329746587)", "color(display-p3 184 1.00001 2347329700000)", "[Display P3 color with component > 1 should not clamp]");
### Rationale
It turned out there is an issue with CSS number serialization being completely different from browser to browser, with none of them being spec compliant: https://drafts.csswg.org/cssom/#serializing-css-values.
So, for the sake of Interop 2023 progress, I propose we move this test case to a separate file like this:
https://chromium-review.googlesource.com/c/chromium/src/+/4303721.
And solve the problem of serialization separately.
|
non_process
|
big number serialization problem test list wpt fyi css css color parsing color computed color function html line test computed value color color display color display rationale turned out there is an issue with css numbers serialization being completely different from browser to browser with none of them being correct spec compliant so for the sake of interopt progress i propose we move this test case to a separate file like this and solve the problem of serialization separately
| 0
|
8,975
| 12,091,549,780
|
IssuesEvent
|
2020-04-19 12:08:12
|
prisma/prisma-client-js
|
https://api.github.com/repos/prisma/prisma-client-js
|
reopened
|
Passing empty array to connect leads to a NullConstraintVioliation
|
bug/2-confirmed kind/bug process/candidate
|
<!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
When you try to pass an empty array to connect, it throws a null constraint violation instead of simply not connecting anything
## How to reproduce
Steps to reproduce the behavior:
1. Use the following schema:
```prisma
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
generator client {
provider = "prisma-client-js"
}
model User {
id String @id @default(cuid())
name String
tag Tag[] @relation(references:[id])
}
model Tag {
id String @id @default(cuid())
name String
users User[]@relation(references:[id])
}
```
2. Run the following code:
```js
const p = await prisma.user.create({
data: {
name: "something",
tag: {
connect: [],
},
},
});
```
## Expected behaviour
It should not throw the following error:

## Environment & setup
```
@prisma/cli : 2.0.0-beta.2
Current platform : darwin
Query Engine : query-engine 76857c35ba1e1764dd5473656ecbbb2f739e1822 (at /Users/harshit/.nvm/versions/node/v12.15.0/lib/node_modules/@prisma/cli/query-engine-darwin)
Migration Engine : migration-engine-cli 76857c35ba1e1764dd5473656ecbbb2f739e1822 (at /Users/harshit/.nvm/versions/node/v12.15.0/lib/node_modules/@prisma/cli/migration-engine-darwin)
Introspection Engine : introspection-core 76857c35ba1e1764dd5473656ecbbb2f739e1822 (at /Users/harshit/.nvm/versions/node/v12.15.0/lib/node_modules/@prisma/cli/introspection-engine-darwin)
```
## Additional Information
Following query was sent to the engine:
```graphql
mutation {
createOneUser(data: {
name: "something"
tag: {
connect: []
}
}) {
id
name
}
}
```
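Until the engine handles the empty array, a possible application-side workaround (a sketch, not an officially documented pattern) is to omit the relation block entirely when there is nothing to connect:
```js
// Hypothetical ids collected elsewhere; the list may legitimately be empty.
const tagIds = [];

(async () => {
  const p = await prisma.user.create({
    data: {
      name: 'something',
      // Include the `tag` block only when there is something to connect.
      ...(tagIds.length > 0
        ? { tag: { connect: tagIds.map((id) => ({ id })) } }
        : {}),
    },
  });
  console.log(p);
})();
```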
|
1.0
|
Passing empty array to connect leads to a NullConstraintVioliation - <!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
When you try to pass an empty array to connect, it throws a null constraint violation instead of simply not connecting anything
## How to reproduce
Steps to reproduce the behavior:
1. Use the following schema:
```prisma
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
generator client {
provider = "prisma-client-js"
}
model User {
id String @id @default(cuid())
name String
tag Tag[] @relation(references:[id])
}
model Tag {
id String @id @default(cuid())
name String
users User[]@relation(references:[id])
}
```
2. Run the following code:
```js
const p = await prisma.user.create({
data: {
name: "something",
tag: {
connect: [],
},
},
});
```
## Expected behaviour
It should not throw the following error:

## Environment & setup
```
@prisma/cli : 2.0.0-beta.2
Current platform : darwin
Query Engine : query-engine 76857c35ba1e1764dd5473656ecbbb2f739e1822 (at /Users/harshit/.nvm/versions/node/v12.15.0/lib/node_modules/@prisma/cli/query-engine-darwin)
Migration Engine : migration-engine-cli 76857c35ba1e1764dd5473656ecbbb2f739e1822 (at /Users/harshit/.nvm/versions/node/v12.15.0/lib/node_modules/@prisma/cli/migration-engine-darwin)
Introspection Engine : introspection-core 76857c35ba1e1764dd5473656ecbbb2f739e1822 (at /Users/harshit/.nvm/versions/node/v12.15.0/lib/node_modules/@prisma/cli/introspection-engine-darwin)
```
## Additional Information
Following query was sent to the engine:
```graphql
mutation {
createOneUser(data: {
name: "something"
tag: {
connect: []
}
}) {
id
name
}
}
```
|
process
|
passing empty array to connect leads to a nullconstraintvioliation thanks for helping us improve prisma 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by setting the debug environment variable and enabling additional logging output in prisma client learn more about writing proper bug reports here bug description when you try to pass an empty array to connect it throws a null constraint violation instead of doing not trying to connect anything how to reproduce steps to reproduce the behavior use the following schema prisma datasource db provider postgresql url env database url generator client provider prisma client js model user id string id default cuid name string tag tag relation references model tag id string id default cuid name string users user relation references run the following code js const p await prisma user create data name something tag connect expected behaviour it should not throw the following error environment setup prisma cli beta current platform darwin query engine query engine at users harshit nvm versions node lib node modules prisma cli query engine darwin migration engine migration engine cli at users harshit nvm versions node lib node modules prisma cli migration engine darwin introspection engine introspection core at users harshit nvm versions node lib node modules prisma cli introspection engine darwin additional information following query was sent to the engine graphql mutation createoneuser data name something tag connect id name
| 1
|
7,233
| 10,383,581,361
|
IssuesEvent
|
2019-09-10 09:57:33
|
RIOT-OS/RIOT
|
https://api.github.com/repos/RIOT-OS/RIOT
|
closed
|
Peripheral drivers
|
Community: help wanted Process: API change State: WIP State: stale
|
General issue on periph drivers #4758
Remodeling of the periph/i2c.h interface and subsequent adaption/rewrite of all existing implementations
- Related issues
- [ ] #6577
Cleanup and unification of low-level timer interfaces (timer, rtt, rtc)
- Related issues
- [ ] #7332
Introduction of spi_slave interface
- Related issues
- [ ]
Introduction of i2c_slave interface
- Related issues
- [ ]
|
1.0
|
Peripheral drivers - General issue on periph drivers #4758
Remodeling of the periph/i2c.h interface and subsequent adaption/rewrite of all existing implementations
- Related issues
- [ ] #6577
Cleanup and unification of low-level timer interfaces (timer, rtt, rtc)
- Related issues
- [ ] #7332
Introduction of spi_slave interface
- Related issues
- [ ]
Introduction of i2c_slave interface
- Related issues
- [ ]
|
process
|
peripheral drivers general issue on periph drivers remodeling of the periph h interface and subsequent adaption rewrite of all existing implementations related issues cleanup and unification of low level timer interfaces timer rtt rtc related issues introduction of spi slave interface related issues introduction of slave interface related issues
| 1
|
815,554
| 30,561,414,102
|
IssuesEvent
|
2023-07-20 14:49:53
|
projectdiscovery/useragent
|
https://api.github.com/repos/projectdiscovery/useragent
|
closed
|
Add new source & category
|
Priority: Medium Status: Blocked Type: Enhancement Type: Question
|
# Description
```[tasklist]
- [ ] Add [whatismybrowser.com](https://developers.whatismybrowser.com/useragents/explore/) as source
- [ ] Scrape User Agents based on their categories
- [ ] Add relevant categories
- [ ] Add `-latest` flag to fetch latest User Agents.
- [ ] Add common/popular flags ( -chrome etc )
- [ ] Add using as library example / build test
```
UserAgent data on `whatismybrowser.com` is the most recent & structured; implementing this will prevent bad data
|
1.0
|
Add new source & category - # Description
```[tasklist]
- [ ] Add [whatismybrowser.com](https://developers.whatismybrowser.com/useragents/explore/) as source
- [ ] Scrape User Agents based on their categories
- [ ] Add relevant categories
- [ ] Add `-latest` flag to fetch latest User Agents.
- [ ] Add common/popular flags ( -chrome etc )
- [ ] Add using as library example / build test
```
UserAgent data on `whatismybrowser.com` is the most recent & structured; implementing this will prevent bad data
|
non_process
|
add new source category description add as source scrape user agents based on their categories add relevant categories add latest flag to fetch latest user agents add common popular flags chrome etc add using as library example build test useragent data on whatismybrowser com is most recent structured implementing this will prevent bad data
| 0
|
4,168
| 7,107,918,993
|
IssuesEvent
|
2018-01-16 21:45:53
|
18F/product-guide
|
https://api.github.com/repos/18F/product-guide
|
closed
|
UPDATE SECTION (Budgeting) - Prepping project budgets
|
process change question
|
Add additional tools and tips for how to prepare a project budget. Some trends that might be helpful to add to the PM guide:
1. Budgets should be done assuming the GS-15 rate of $205/hour
2. Budgets can be done with ranges (low end and high end), with knowledge that the final scope will be determined as the project rolls along.
3. Budgets should be done with a goal of making sure that the partner doesn't run out of money AND that not too much is left over if towards the end of the fiscal year.
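For illustration only, these trends reduce to simple arithmetic; the staffing numbers in this sketch are hypothetical and not from the guide:
```js
// Hypothetical 12-week engagement with two people, budgeted at the
// GS-15 rate, with a low/high range for scope uncertainty.
const RATE = 205;           // $/hour, GS-15 rate from trend 1
const HOURS_PER_WEEK = 32;  // if "full time" means 80% of a 40-hour week
const weeks = 12;
const people = 2;

const highEnd = RATE * HOURS_PER_WEEK * weeks * people; // $157,440
const lowEnd = Math.round(highEnd * 0.8);               // $125,952 at 80% scope

console.log(`budget range: $${lowEnd.toLocaleString()} to $${highEnd.toLocaleString()}`);
```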
**Question**
Does "full time" = 80% each week (32 hours), 100% (40 hours), or something else?
Does 1/2 time = 40% or 50%?
[slack convo for background](https://18f.slack.com/archives/product/p1454345050000290)
|
1.0
|
UPDATE SECTION (Budgeting) - Prepping project budgets - Add additional tools and tips for how to prepare a project budget. Some trends that might be helpful to add to the PM guide:
1. Budgets should be done assuming the GS-15 rate of $205/hour
2. Budgets can be done with ranges (low end and high end), with knowledge that the final scope will be determined as the project rolls along.
3. Budgets should be done with a goal of making sure that the partner doesn't run out of money AND that not too much is left over if towards the end of the fiscal year.
**Question**
Does "full time" = 80% each week (32 hours), 100% (40 hours), or something else?
Does 1/2 time = 40% or 50%?
[slack convo for background](https://18f.slack.com/archives/product/p1454345050000290)
|
process
|
update section budgeting prepping project budgets add additional tools and tip for how to prepare a project budget some trends that might be helpful to add to the pm guide budgets should be done assuming the gs rate of hour budgets can be done with ranges low end and high end with knowledge that the final scope will be determined as the project rolls along budgets should be done with a goal of making sure that the partner doesn t run out of money and that not too much is left over if towards the end of the fiscal year question does full time each week hours hours or something else does time or
| 1
|
8,447
| 11,614,854,086
|
IssuesEvent
|
2020-02-26 13:18:44
|
eugene-sukhodolskiy/harakter-html
|
https://api.github.com/repos/eugene-sukhodolskiy/harakter-html
|
closed
|
Make base theme elements
|
dev html/css inprocess
|
- [x] Make headings
- [x] Make block sub heading
- [x] Make base buttons
- [x] Make base forms
|
1.0
|
Make base theme elements - - [x] Make headings
- [x] Make block sub heading
- [x] Make base buttons
- [x] Make base forms
|
process
|
make base theme elements make headings make block sub heading make base buttons make base forms
| 1
|
362,555
| 25,381,216,625
|
IssuesEvent
|
2022-11-21 17:42:02
|
scylladb/scylladb
|
https://api.github.com/repos/scylladb/scylladb
|
closed
|
Docs: add troubleshooting doc for missing .mount issue
|
Documentation
|
@syuu1228 opened a valuable PR in the deprecated _scylla-docs_ repository: https://github.com/scylladb/scylla-docs/pull/4152
- Add the content to the ScyllaDB docs in this repo.
- Fix the syntax to ensure the documentation renders correctly.
|
1.0
|
Docs: add troubleshooting doc for missing .mount issue - @syuu1228 opened a valuable PR in the deprecated _scylla-docs_ repository: https://github.com/scylladb/scylla-docs/pull/4152
- Add the content to the ScyllaDB docs in this repo.
- Fix the syntax to ensure the documentation renders correctly.
|
non_process
|
docs add troubleshooting doc for missing mount issue opened a valuable pr in the deprecated scylla docs repository add the content to the scylladb docs in this repo fix the syntax to ensure the documentation renders correctly
| 0
|
14,866
| 18,275,427,651
|
IssuesEvent
|
2021-10-04 18:13:44
|
2i2c-org/team-compass
|
https://api.github.com/repos/2i2c-org/team-compass
|
opened
|
Add a regular asynchronous team check-in
|
type: enhancement :label: team-process impact: medium
|
### Description
Now that we've merged to two-week sprint cycles as part of https://github.com/2i2c-org/team-compass/issues/182, it might be helpful for us to have a lightweight way for us to check-in with one another throughout the week. In a team meeting, there was interest in trying out an asynchronous team check-in process. It would have the following goals:
- Be an opportunity to ask for feedback or input
- Help team members provide accountability and transparency to one another
- Identify opportunities to coordinate or hand-off work
- Identify whether there are unexpected challenges we didn't anticipate when scoping work
- Provide ourselves an easy way to look back and see what we've been up to this week/month/etc
- Help us focus on just one or two things at a time, and not spread ourselves across too many daily projects
In addition, it'd have these constraints:
- Must be very lightweight - responding shouldn't require any "thought" and should only take 30-60 seconds
- Should be asynchronous, and not require people to respond at awkward times
- Should be followed by all team members (and should be light-weight enough that it's not a chore to maintain the practice)
### Value / benefit
Having a practice like this will make it easier for us to signal to one another what we're up to throughout the week, since we have less time to speak with one another face-to-face in our planning. Having a lightweight check-in process will help us accomplish the goals in the description above.
### Implementation details
In our meeting we had discussed something like a check-in every couple of days, but after thinking about it a bit, I think it might be the easiest if we just try to adopt something that is **daily** on weekdays. I think having it each day will make it more likely to just become a habit, and will also force us to keep the process lightweight enough that it does not feel like a chore to carry it out.
So I propose that we do the following:
- For the next 2 months, we try the following process and revisit in December 2021.
- Use the [Geekbot](https://geekbot.com/) for a daily team report.
- This will send a DM to you at **5pm in your time zone** with a few questions.
- The questions will be:
- How are you feeling today? ✨
- What did you work on today? 💪
- What are you up to next? 🏃
- Would you like help with anything in particular? 🙏
- When you answer them via the DM, they'll get posted to the `#team-updates` channel
That's it!
In 2 months, I think that we should re-discuss this and answer the following questions:
- Does this practice accomplish the goals listed above?
- Is a daily stand-up too frequent, or should we scale it back a bit (e.g., MWF)?
- Is this a replacement for our weekly team sync, or should they both live in harmony?
### Tasks to complete
- [ ] Iterate on the plan above if others have feedback
- [ ] No objections to the plan above
- [ ] Try out this practice for 2 months
- [ ] Discuss in December 2021
- [ ] Decide what to do next
### Updates
_No response_
|
1.0
|
Add a regular asynchronous team check-in - ### Description
Now that we've merged to two-week sprint cycles as part of https://github.com/2i2c-org/team-compass/issues/182, it might be helpful for us to have a lightweight way for us to check-in with one another throughout the week. In a team meeting, there was interest in trying out an asynchronous team check-in process. It would have the following goals:
- Be an opportunity to ask for feedback or input
- Help team members provide accountability and transparency to one another
- Identify opportunities to coordinate or hand-off work
- Identify whether there are unexpected challenges we didn't anticipate when scoping work
- Provide ourselves an easy way to look back and see what we've been up to this week/month/etc
- Help us focus on just one or two things at a time, and not spread ourselves across too many daily projects
In addition, it'd have these constraints:
- Must be very lightweight - responding shouldn't require any "thought" and should only take 30-60 seconds
- Should be asynchronous, and not require people to respond at awkward times
- Should be followed by all team members (and should be light-weight enough that it's not a chore to maintain the practice)
### Value / benefit
Having a practice like this will make it easier for us to signal to one another what we're up to throughout the week, since we have less time to speak with one another face-to-face in our planning. Having a lightweight check-in process will help us accomplish the goals in the description above.
### Implementation details
In our meeting we had discussed something like a check-in every couple of days, but after thinking about it a bit, I think it might be the easiest if we just try to adopt something that is **daily** on weekdays. I think having it each day will make it more likely to just become a habit, and will also force us to keep the process lightweight enough that it does not feel like a chore to carry it out.
So I propose that we do the following:
- For the next 2 months, we try the following process and revisit in December 2021.
- Use the [Geekbot](https://geekbot.com/) for a daily team report.
- This will send a DM to you at **5pm in your time zone** with a few questions.
- The questions will be:
- How are you feeling today? ✨
- What did you work on today? 💪
- What are you up to next? 🏃
- Would you like help with anything in particular? 🙏
- When you answer them via the DM, they'll get posted to the `#team-updates` channel
That's it!
In 2 months, I think that we should re-discuss this and answer the following questions:
- Does this practice accomplish the goals listed above?
- Is a daily stand-up too frequent, or should we scale it back a bit (e.g., MWF)?
- Is this a replacement for our weekly team sync, or should they both live in harmony?
### Tasks to complete
- [ ] Iterate on the plan above if others have feedback
- [ ] No objections to the plan above
- [ ] Try out this practice for 2 months
- [ ] Discuss in December 2021
- [ ] Decide what to do next
### Updates
_No response_
|
process
|
add a regular asynchronous team check in description now that we ve merged to two week sprint cycles as part of it might be helpful for us to have a lightweight way for us to check in with one another throughout the week in a team meeting there was interest in trying out an asynchronous team check in process it would have the following goals be an opportunity to ask for feedback or input help team members provide accountability and transparency to one another identify opportunities to coordinate or hand off work identify whether there are unexpected challenges we didn t anticipate when scoping work provide ourselves an easy way to look back and see what we ve been up to this week month etc help us focus on just one or two things at a time and not spread ourselves across too many daily projects in addition it d have these constraints must be very lightweight responding shouldn t require any thought and should only take seconds should be asynchronous and not require people to respond at awkward times should be followed by all team members and should be light weight enough that it s not a chore to maintain the practice value benefit having a practice like this will make it easier for us to signal to one another what we re up to throughout the week since we have less time to speak with one another face to face in our planning having a lightweight check in process will help us accomplish the goals in the description above implementation details in our meeting we had discussed something like a check in every couple of days but after thinking about it a bit i think it might be the easiest if we just try to adopt something that is daily on weekdays i think having it each day will make it more likely to just become a habit and will also force us to keep the process lightweight enough that it does not feel like a chore to carry it out so i propose that we do the following for the next months we try the following process and revisit in december use the for a daily team report this will send a dm to you at in your time zone with a few questions the questions will be how are you feeling today ✨ what did you work on today 💪 what are you up to next 🏃 would you like help with anything in particular 🙏 when you answer them via the dm they ll get posted to the team updates channel that s it in months i think that we should re discuss this and answer the following questions does this practice accomplish the goals listed above is a daily stand up too frequent or should we scale it back a bit e g mwf is this a replacement for our weekly team sync or should they both live in harmony tasks to complete iterate on the plan above if others have feedback no objections to the plan above try out this practice for months discuss in december decide what to do next updates no response
| 1
|
5,277
| 8,066,704,579
|
IssuesEvent
|
2018-08-04 19:07:07
|
decidim/decidim
|
https://api.github.com/repos/decidim/decidim
|
closed
|
Writing dates manually doesn't work in New Process Step form
|
space: processes type: bug
|
<!--
1. Please check if an issue already exists so there are no duplicates
2. Fill out the whole template so we have a good overview on the issue
3. Do not remove any section of the template. If something is not applicable leave it empty but leave it in the Issue
4. Please follow the template, otherwise we'll have to ask you to update it
-->
# This is a (Bug Report / Feature Proposal)
#### :tophat: Description
For bug reports:
* What went wrong?
Typing dates manually instead of picking them with the datepicker will usually result in them not being saved. As a side effect, it will also bypass any error message.

* What did you expect should have happened?
If I can type in this field and I use the exact same formatting as the one used by the datepicker, I don't understand why they wouldn't be saved.
* What was the config you used?
* What stacktrace or error message from your provider did you see?
For feature proposals:
* What is the use case that should be solved. The more detail you describe this in the easier it is to understand for us.
* If there is additional config how would it look
#### :pushpin: Related issues
* #12345
#### :clipboard: Additional Data
* ***Decidim deployment where you found the issue***:
* ***Browser & version***:
Chrome latest
* ***Screenshot***:
* ***Error messages***:
* ***URL to reproduce the error***:
localhost:3000/admin/participatory_processes/new
|
1.0
|
Writing dates manually doesn't work in New Process Step form - <!--
1. Please check if an issue already exists so there are no duplicates
2. Fill out the whole template so we have a good overview on the issue
3. Do not remove any section of the template. If something is not applicable leave it empty but leave it in the Issue
4. Please follow the template, otherwise we'll have to ask you to update it
-->
# This is a (Bug Report / Feature Proposal)
#### :tophat: Description
For bug reports:
* What went wrong?
Typing dates manually instead of picking them with the datepicker will usually result in them not being saved. As a side effect, it will also bypass any error message.

* What did you expect should have happened?
If I can type in this field and I use the exact same formatting as the one used by the datepicker, I don't understand why they wouldn't be saved.
* What was the config you used?
* What stacktrace or error message from your provider did you see?
For feature proposals:
* What is the use case that should be solved. The more detail you describe this in the easier it is to understand for us.
* If there is additional config how would it look
#### :pushpin: Related issues
* #12345
#### :clipboard: Additional Data
* ***Decidim deployment where you found the issue***:
* ***Browser & version***:
Chrome latest
* ***Screenshot***:
* ***Error messages***:
* ***URL to reproduce the error***:
localhost:3000/admin/participatory_processes/new
|
process
|
writing dates manually doesn t work in new process step form please check if an issue already exists so there are no duplicates fill out the whole template so we have a good overview on the issue do not remove any section of the template if something is not applicable leave it empty but leave it in the issue please follow the template otherwise we ll have to ask you to update it this is a bug report feature proposal tophat description for bug reports what went wrong typing dates manually instead of picking them with the datepicker will usually result in them not being saved as a side effect it will also bypass any error message what did you expect should have happened if i can type in this field and i use the exact same formatting as the one used by the datepicker i don t understand why they wouldn t be saved what was the config you used what stacktrace or error message from your provider did you see for feature proposals what is the use case that should be solved the more detail you describe this in the easier it is to understand for us if there is additional config how would it look pushpin related issues clipboard additional data decidim deployment where you found the issue browser version chrome latest screenshot error messages url to reproduce the error localhost admin participatory processes new
| 1
|
319,515
| 9,745,306,330
|
IssuesEvent
|
2019-06-03 09:18:45
|
alphagov/govuk-frontend
|
https://api.github.com/repos/alphagov/govuk-frontend
|
closed
|
html validation errors
|
Effort: hours Priority: low
|
In the HTML5 validator we get the following errors:
```
Error: Attribute src not allowed on element image at this point.
From line 73, column 13; to line 73, column 117
<image src="/assets/images/govuk-logotype-crown.png" class="govuk-header__logotype-crown-fallback-image"></imag
Error: Element image is missing required attribute height.
From line 73, column 13; to line 73, column 117
<image src="/assets/images/govuk-logotype-crown.png" class="govuk-header__logotype-crown-fallback-image"></imag
Error: Element image is missing required attribute width.
From line 73, column 13; to line 73, column 117
<image src="/assets/images/govuk-logotype-crown.png" class="govuk-header__logotype-crown-fallback-image"></imag
```
|
1.0
|
html validation errors - In the HTML5 validator we get the following errors:
```
Error: Attribute src not allowed on element image at this point.
From line 73, column 13; to line 73, column 117
<image src="/assets/images/govuk-logotype-crown.png" class="govuk-header__logotype-crown-fallback-image"></imag
Error: Element image is missing required attribute height.
From line 73, column 13; to line 73, column 117
<image src="/assets/images/govuk-logotype-crown.png" class="govuk-header__logotype-crown-fallback-image"></imag
Error: Element image is missing required attribute width.
From line 73, column 13; to line 73, column 117
<image src="/assets/images/govuk-logotype-crown.png" class="govuk-header__logotype-crown-fallback-image"></imag
```
|
non_process
|
html validation errors in validator we get the following errors error attribute src not allowed on element image at this point from line column to line column imag error element image is missing required attribute height from line column to line column imag error element image is missing required attribute width from line column to line column imag
| 0
|
76,357
| 9,424,735,500
|
IssuesEvent
|
2019-04-11 14:40:51
|
infor-design/design-system
|
https://api.github.com/repos/infor-design/design-system
|
closed
|
Secondary button disabled color matches active color
|
for: design type: bug :bug:
|
The disabled color should be a different color than the active.
https://github.com/infor-design/design-system/blob/master/design-tokens/props/button.json#L43-L52
|
1.0
|
Secondary button disabled color matches active color - The disabled color should be a different color than the active.
https://github.com/infor-design/design-system/blob/master/design-tokens/props/button.json#L43-L52
|
non_process
|
secondary button disabled color matches active color the disabled color should be a different color than the active
| 0
|
3,399
| 6,518,590,806
|
IssuesEvent
|
2017-08-28 08:47:41
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
opened
|
Reference Resolver not resolving library-qualified identifiers correctly
|
bug parse-tree-processing
|
Omitting the `Global` qualifier leads to mixed results:
```vb
Sub test()
'Host is Excel, Word is referenced
Debug.Print Application.Name 'Application correctly resolves to Excel.Global.Application (property get accessor:Application)
Debug.Print Excel.Application.Name 'Application INCORRECTLY resolves to Excel.IXmlSchema.Application (property get accessor:Object)
Debug.Print Word.Application.Name 'Application INCORRECTLY resolves to Word.Email.Application (property get accessor:Application)
Debug.Print Excel.Global.Application.Name 'Application correctly resolves to Excel.Global.Application (property get accessor:Application)
Debug.Print Word.Global.Application.Name 'Application correctly resolves to Word.Global.Application (property get accessor:Application)
End Sub
```
In Outlook, the results are different again. The global property `Application` should resolve as a Property Get that returns an Application. Unlike Excel/Word, Outlook doesn't have a `Global` class, but members of the `Application` class are elevated as global members. In this case `Application.Application` is the global member.

```vb
Debug.Print Application.Name 'Application INCORRECTLY resolves to "Outlook.Application (Class)"
Debug.Print Outlook.Application.Name 'Application INCORRECTLY resolves to "Outlook.Application (Class)"
Debug.Print Application.Application.Name 'The 2nd Application almost correctly resolves to "Outlook.Application.Application (property get accessor:_Application)" - The return type should be `Application` not `_Application`?
```
|
1.0
|
Reference Resolver not resolving library-qualified identifiers correctly - Omitting the `Global` qualifier leads to mixed results:
```vb
Sub test()
'Host is Excel, Word is referenced
Debug.Print Application.Name 'Application correctly resolves to Excel.Global.Application (property get accessor:Application)
Debug.Print Excel.Application.Name 'Application INCORRECTLY resolves to Excel.IXmlSchema.Application (property get accessor:Object)
Debug.Print Word.Application.Name 'Application INCORRECTLY resolves to Word.Email.Application (property get accessor:Application)
Debug.Print Excel.Global.Application.Name 'Application correctly resolves to Excel.Global.Application (property get accessor:Application)
Debug.Print Word.Global.Application.Name 'Application correctly resolves to Word.Global.Application (property get accessor:Application)
End Sub
```
In Outlook, the results are different again. The global property `Application` should resolve as a Property Get that returns an Application. Unlike Excel/Word, Outlook doesn't have a `Global` class, but members of the `Application` class are elevated as global members. In this case `Application.Application` is the global member.

```vb
Debug.Print Application.Name 'Application INCORRECTLY resolves to "Outlook.Application (Class)"
Debug.Print Outlook.Application.Name 'Application INCORRECTLY resolves to "Outlook.Application (Class)"
Debug.Print Application.Application.Name 'The 2nd Application almost correctly resolves to "Outlook.Application.Application (property get accessor:_Application)" - The return type should be `Application` not `_Application`?
```
|
process
|
reference resolver not resolving library qualified identifiers correctly omitting the global qualifier leads to mixed results vb sub test host is excel word is referenced debug print application name application correctly resolves to excel global application property get accessor application debug print excel application name application incorrectly resolves to excel ixmlschema application property get accessor object debug print word application name application incorrectly resolves to word email application property get accessor application debug print excel global application name application correctly resolves to excel global application property get accessor application debug print word global application name application correctly resolves to word global application property get accessor application end sub in outlook the results are different again the global property application should resolve as a property get that returns an application unlike excel word outlook doesn t have a global class but members of the application class are elevated as global members in this case application application is the global member vb debug print application name application incorrectly resolves to outlook application class debug print outlook application name application incorrectly resolves to outlook application class debug print application application name the application almost correctly resolves to outlook application application property get accessor application the return type should be application not application
| 1
|
181,166
| 6,656,855,103
|
IssuesEvent
|
2017-09-29 22:44:45
|
ArkEcosystem/ark-desktop
|
https://api.github.com/repos/ArkEcosystem/ark-desktop
|
closed
|
Localisation: Import Account Dialog
|
Priority: Medium Type: Bug
|
Upon clicking on "Import Account" while a language other than English is chosen, the dialog for passphrase entry contains English text instead of a translation; the same goes for the "QR CODE" link, both emphasized in red in the attached screencap.

|
1.0
|
Localisation: Import Account Dialog - Upon clicking on "Import Account" while a language other than English is chosen, the dialog for passphrase entry contains English text instead of a translation; the same goes for the "QR CODE" link, both emphasized in red in the attached screencap.

|
non_process
|
localisation import account dialog upon clicking on import account while a language other than english is chosen the dialog for passphrase entry contains english text instead of a translation the same goes for qr code link both emphasized in red in the attached screencap
| 0
|
14,450
| 17,532,590,266
|
IssuesEvent
|
2021-08-12 00:37:12
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
Black image on B&W converted images after 3.6.0 update
|
bug: invalid scope: image processing
|
Images that were converted to B&W in the color calibration module on version 3.4 using the Ilford HP5+ preset turn black in the darkroom view when opened in version 3.6.
The thumbnail stays visible until I go back to the Lighttable view, then it turns black too.
The problem is solved by re-applying the HP5+ preset.
Exported JPGs are also a black rectangle.
Darktable version 3.6.0 running on Windows 10
Follows an screenshot:

|
1.0
|
Black image on B&W converted images after 3.6.0 update - Images that was converted to B&W in color calibration module on version 3.4 using Ilford HP5+ preset got black in the darkroom view when opened in version 3.6.
Thumbnail keeps visible until I went back to Ligtable view, then got black too.
The problem its solved re-applying the HP5+ preset.
Exported JPGs are also a black rectangle.
Darktable version 3.6.0 running on Windows 10
Follows an screenshot:

|
process
|
black image on b w converted images after update images that was converted to b w in color calibration module on version using ilford preset got black in the darkroom view when opened in version thumbnail keeps visible until i went back to ligtable view then got black too the problem its solved re applying the preset exported jpgs are also a black rectangle darktable version running on windows follows an screenshot
| 1
|
11,321
| 14,138,978,490
|
IssuesEvent
|
2020-11-10 09:12:57
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Cleanup old non-$ methods
|
kind/improvement process/candidate team/typescript
|
We still have a few methods on Prisma Client, like `.transaction` from the pre-$ days.
We deprecated these methods already a few sprints ago, so it's time to clean them up.
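For reference, the user-facing change looks roughly like this (a sketch; `$transaction` is the current $-prefixed name):
```js
// Inside an async function with an instantiated PrismaClient:

// Deprecated pre-$ style, slated for removal:
// await prisma.transaction([prisma.user.create({ data: { name: 'a' } })]);

// Current $-prefixed style:
await prisma.$transaction([
  prisma.user.create({ data: { name: 'a' } }),
]);
```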
|
1.0
|
Cleanup old non-$ methods - We still have a few methods on Prisma Client, like `.transaction` from the pre-$ days.
We deprecated these methods already a few sprints ago, so it's time to clean them up.
|
process
|
cleanup old non methods we still have a few methods on prisma client like transaction from the pre days we deprecated these methods already a few sprints ago so it s time to clean them up
| 1
|
55,440
| 30,752,773,862
|
IssuesEvent
|
2023-07-28 21:04:20
|
apache/echarts
|
https://api.github.com/repos/apache/echarts
|
closed
|
X-range plot very slow and jam with more than 10,000 data
|
en performance stale topic: custom
|
### Version
5.1.2
### Steps to reproduce
https://echarts.apache.org/examples/zh/editor.html?c=custom-profile
I am developing an application like the example above, but my data amount is very large. I read data from JSON; there are more than 10,000,
even 100,000 items. It's very slow when rendering the bars, and if I enable zooming, the page crashes when zooming in.
You can just test the above example: set the dataCount to 10,000; it's really slow and even jams.
### What is expected?
It shouldn't be so slow.
### What is actually happening?
It's very slow.
<!-- This issue is generated by echarts-issue-helper. DO NOT REMOVE -->
<!-- This issue is in English. DO NOT REMOVE -->
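One mitigation worth trying, sketched below, assumes the custom series honors ECharts' progressive rendering options and that `renderItem` and `data` come from the linked example; the thresholds are illustrative:
```js
// Option fragment: draw the bars in chunks per frame and let dataZoom
// weak-filter the data instead of re-rendering everything at once.
option = {
  dataZoom: [{ type: 'slider', filterMode: 'weakFilter' }],
  series: [{
    type: 'custom',
    renderItem: renderItem,        // assumed: renderItem from the example
    progressive: 2000,             // items drawn per frame
    progressiveThreshold: 3000,    // chunked rendering above this count
    encode: { x: [1, 2], y: 0 },
    data: data,                    // assumed: the generated dataset
  }],
};
```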
|
True
|
X-range plot very slow and jam with more than 10,000 data - ### Version
5.1.2
### Steps to reproduce
https://echarts.apache.org/examples/zh/editor.html?c=custom-profile
I am developing an application like the example above, but my data amount is very large. I read data from JSON; there are more than 10,000,
even 100,000 items. It's very slow when rendering the bars, and if I enable zooming, the page crashes when zooming in.
You can just test the above example: set the dataCount to 10,000; it's really slow and even jams.
### What is expected?
It shouldn't be so slow.
### What is actually happening?
It's very slow.
<!-- This issue is generated by echarts-issue-helper. DO NOT REMOVE -->
<!-- This issue is in English. DO NOT REMOVE -->
|
non_process
|
x range plot very slow and jam with more than data version steps to reproduce i am developing a application like the example above but my data amount is very large i read data from json there are more than even items it s very slow when rendering the bars and if i enable zoom in the page will crash when zoom in you can just test the above example set the datacount to it s really slow and even jam what is expected it shouldn t be so slow what is actually happening it s very slow
| 0
|
11,390
| 14,225,395,705
|
IssuesEvent
|
2020-11-17 21:10:51
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
QGIS 3.16 crashes when using Cell statistics tool "Ignore NoData"
|
Bug Processing
|
Hi,
I’m trying to use the new tool in QGIS 3.16 “Cell statistics”. However, whenever I choose to tick the box “ignore NoData values” and run the tool, QGIS crashes. I have tried it with different multiband rasters in different formats (NC, tiff, GeoTIFF) but whenever I choose to ignore NoData values the program crashes. I have tried to run the tool on a clean working space and with my usual project, but nothing seems to work.
To reproduce this bug:
- Have a multiband raster (my current projection is in EPSG 4326: WGS 84)
- Open Cell statistics
- Input and reference layer the same raster
- Statistics: Mean
- Check the box “Ignore NoData values”
- Run the tool
→ QGIS Crashes
QGIS version
QGIS version | 3.16.0-Hannover | QGIS code revision | 43b64b13f3
-- | -- | -- | --
Compiled against Qt | 5.11.2 | Running against Qt | 5.11.2
Compiled against GDAL/OGR | 3.1.4 | Running against GDAL/OGR | 3.1.4
Compiled against GEOS | 3.8.1-CAPI-1.13.3 | Running against GEOS | 3.8.1-CAPI-1.13.3
Compiled against SQLite | 3.29.0 | Running against SQLite | 3.29.0
PostgreSQL Client Version | 11.5 | SpatiaLite Version | 4.3.0
QWT Version | 6.1.3 | QScintilla2 Version | 2.10.8
Compiled against PROJ | 6.3.2 | Running against PROJ | Rel. 6.3.2, May 1st, 2020
OS Version | Windows 10 (10.0)
Active python plugins | LocatePoints; mmqgis; NNJoin; PointConnector; quick_map_services; rastertimeseriesmanager; refFunctions; db_manager; MetaSearch; processing
|
1.0
|
QGIS 3.16 crashes when using Cell statistics tool "Ignore NoData" - Hi,
I’m trying to use the new tool in QGIS 3.16 “Cell statistics”. However, whenever I choose to tick the box “ignore NoData values” and run the tool, QGIS crashes. I have tried it with different multiband rasters in different formats (NC, tiff, GeoTIFF) but whenever I choose to ignore NoData values the program crashes. I have tried to run the tool on a clean working space and with my usual project, but nothing seems to work.
To reproduce this bug:
- Have a multiband raster (my current projection is in EPSG 4326: WGS 84)
- Open Cell statistics
- Input and reference layer the same raster
- Statistics: Mean
- Check the box “Ignore NoData values”
- Run the tool
→ QGIS Crashes
QGIS version
QGIS version | 3.16.0-Hannover | QGIS code revision | 43b64b13f3
-- | -- | -- | --
Compiled against Qt | 5.11.2 | Running against Qt | 5.11.2
Compiled against GDAL/OGR | 3.1.4 | Running against GDAL/OGR | 3.1.4
Compiled against GEOS | 3.8.1-CAPI-1.13.3 | Running against GEOS | 3.8.1-CAPI-1.13.3
Compiled against SQLite | 3.29.0 | Running against SQLite | 3.29.0
PostgreSQL Client Version | 11.5 | SpatiaLite Version | 4.3.0
QWT Version | 6.1.3 | QScintilla2 Version | 2.10.8
Compiled against PROJ | 6.3.2 | Running against PROJ | Rel. 6.3.2, May 1st, 2020
OS Version | Windows 10 (10.0)
Active python plugins | LocatePoints; mmqgis; NNJoin; PointConnector; quick_map_services; rastertimeseriesmanager; refFunctions; db_manager; MetaSearch; processing
|
process
|
qgis crashes when using cell statistics tool ignore nodata hi i’m trying to use the new tool in qgis “cell statistics” however whenever i choose to tick the box “ignore nodata values” and run the tool qgis crashes i have tried it with different multiband rasters in different formats nc tiff geotiff but whenever i choose to ignore nodata values the program crashes i have tried to run the tool on a clean working space and with my usual project but nothing seems to work to reproduce this bug have a multiband raster my current projection is in epsg wgs open cell statistics input and reference layer the same raster statistics mean check the box “ignore nodata values” run the tool → qgis crashes qgis version qgis version hannover qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel may os version windows active python plugins locatepoints mmqgis nnjoin pointconnector quick map services rastertimeseriesmanager reffunctions db manager metasearch processing
| 1
|
3,156
| 6,206,007,791
|
IssuesEvent
|
2017-07-06 17:26:05
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
opened
|
test: define and test validity of core-dumps
|
post-mortem process test
|
<!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: *
* **Platform**: *
* **Subsystem**: test,process
<!-- Enter your issue details below this comment. -->
The ad-hoc situation is that `node` has incorporated `V8`'s `--abort-on-uncaught-exception` as a core feature (Ref: https://github.com/nodejs/node/pull/14013).
There seems to be a consensus forming to make this an explicit design decision and document it (Ref: https://github.com/nodejs/node/pull/13931).
Currently the test suite only tests for process exit state (i.e. `code` and `signal`) but does not do _any_ testing on the validity of generated code-dump.
I suggest we define the minimal requirements needed from the core-dump to be valid and find a way to automatically assert those.
/cc @nodejs/post-mortem @nodejs/testing @nodejs/release @nodejs/build
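As a starting point, a test could assert at least the abort signal and the existence of a dump file; this is a sketch with assumed platform behavior (it presumes `ulimit -c unlimited` and a default `core_pattern` that writes `core` into the working directory):
```js
'use strict';
const { spawnSync } = require('child_process');
const fs = require('fs');
const path = require('path');

// Run node so it aborts, then check the termination signal and look
// for a core file in a scratch working directory.
const tmpdir = fs.mkdtempSync('core-dump-test-');
const r = spawnSync(process.execPath,
  ['--abort-on-uncaught-exception', '-e', 'throw new Error("boom")'],
  { cwd: tmpdir });

console.assert(r.signal === 'SIGABRT',
  `expected SIGABRT, got ${r.signal}`);
console.assert(fs.existsSync(path.join(tmpdir, 'core')),
  'expected a core file to be generated');
```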
|
1.0
|
test: define and test validity of core-dumps - <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: *
* **Platform**: *
* **Subsystem**: test,process
<!-- Enter your issue details below this comment. -->
The ad-hoc situation is that `node` has incorporated `V8`'s `--abort-on-uncaught-exception` as a core feature (Ref: https://github.com/nodejs/node/pull/14013).
There seems to be a consensus forming to make this an explicit design decision and document it (Ref: https://github.com/nodejs/node/pull/13931).
Currently the test suite only tests for process exit state (i.e. `code` and `signal`) but does not do _any_ testing on the validity of generated code-dump.
I suggest we define the minimal requirements needed from the core-dump to be valid and find a way to automatically assert those.
/cc @nodejs/post-mortem @nodejs/testing @nodejs/release @nodejs/build
|
process
|
test define and test validity of core dumps thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform subsystem test process the ad hoc situation is that node has incorporated s abort on uncaught exception as a core feature ref there seems to be a consensus forming to make this an explicit design decision and document it ref currently the test suite only tests for process exit state i e code and signal but does not do any testing on the validity of generated code dump i suggest we define the minimal requirements needed from the core dump to be valid and find a way to automatically assert those cc nodejs post mortem nodejs testing nodejs release nodejs build
| 1
|
20,318
| 26,960,492,771
|
IssuesEvent
|
2023-02-08 17:47:52
|
googleapis/java-iam
|
https://api.github.com/repos/googleapis/java-iam
|
reopened
|
Dependency Dashboard
|
type: process api: iam priority: p4
|
This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Repository problems
These problems occurred while renovating this repository.
- WARN: RepoCacheS3.getCacheFolder() - appending missing trailing slash to pathname
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->[build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.4.2](../pull/595)
- [ ] <!-- recreate-branch=renovate/com.google.cloud-google-iam-policy-parent-1.x -->[chore(deps): update dependency com.google.cloud:google-iam-policy-parent to v1.8.0](../pull/602)
- [ ] <!-- recreate-branch=renovate/com.google.cloud-google-cloud-shared-dependencies-3.x -->[deps: update dependency com.google.cloud:google-cloud-shared-dependencies to v3.2.0](../pull/599)
## Detected dependencies
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/approve-readme.yaml</summary>
- `actions/github-script v6`
</details>
<details><summary>.github/workflows/auto-release.yaml</summary>
- `actions/github-script v6`
</details>
<details><summary>.github/workflows/ci.yaml</summary>
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
</details>
</blockquote>
</details>
<details><summary>maven</summary>
<blockquote>
<details><summary>google-iam-policy/pom.xml</summary>
- `com.google.cloud:google-iam-policy-parent 1.7.1-SNAPSHOT`
- `junit:junit 4.13.2`
</details>
<details><summary>pom.xml</summary>
- `com.google.cloud:google-cloud-shared-config 1.5.5`
- `com.google.cloud:google-cloud-shared-dependencies 3.1.1`
- `junit:junit 4.13.2`
- `org.apache.maven.plugins:maven-project-info-reports-plugin 3.4.1`
- `org.apache.maven.plugins:maven-javadoc-plugin 3.4.1`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Repository problems
These problems occurred while renovating this repository.
- WARN: RepoCacheS3.getCacheFolder() - appending missing trailing slash to pathname
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->[build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.4.2](../pull/595)
- [ ] <!-- recreate-branch=renovate/com.google.cloud-google-iam-policy-parent-1.x -->[chore(deps): update dependency com.google.cloud:google-iam-policy-parent to v1.8.0](../pull/602)
- [ ] <!-- recreate-branch=renovate/com.google.cloud-google-cloud-shared-dependencies-3.x -->[deps: update dependency com.google.cloud:google-cloud-shared-dependencies to v3.2.0](../pull/599)
## Detected dependencies
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/approve-readme.yaml</summary>
- `actions/github-script v6`
</details>
<details><summary>.github/workflows/auto-release.yaml</summary>
- `actions/github-script v6`
</details>
<details><summary>.github/workflows/ci.yaml</summary>
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
</details>
</blockquote>
</details>
<details><summary>maven</summary>
<blockquote>
<details><summary>google-iam-policy/pom.xml</summary>
- `com.google.cloud:google-iam-policy-parent 1.7.1-SNAPSHOT`
- `junit:junit 4.13.2`
</details>
<details><summary>pom.xml</summary>
- `com.google.cloud:google-cloud-shared-config 1.5.5`
- `com.google.cloud:google-cloud-shared-dependencies 3.1.1`
- `junit:junit 4.13.2`
- `org.apache.maven.plugins:maven-project-info-reports-plugin 3.4.1`
- `org.apache.maven.plugins:maven-javadoc-plugin 3.4.1`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue lists renovate updates and detected dependencies read the docs to learn more repository problems these problems occurred while renovating this repository warn getcachefolder appending missing trailing slash to pathname ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull pull pull detected dependencies github actions github workflows approve readme yaml actions github script github workflows auto release yaml actions github script github workflows ci yaml actions checkout actions setup java actions checkout actions setup java actions checkout actions setup java actions checkout actions setup java actions checkout actions setup java maven google iam policy pom xml com google cloud google iam policy parent snapshot junit junit pom xml com google cloud google cloud shared config com google cloud google cloud shared dependencies junit junit org apache maven plugins maven project info reports plugin org apache maven plugins maven javadoc plugin check this box to trigger a request for renovate to run again on this repository
| 1
|
5,024
| 7,845,806,884
|
IssuesEvent
|
2018-06-19 13:56:41
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
closed
|
[Develop] HPRM configs can no longer be generated
|
process_wontfix type_bug
|
```
gather_fragment_cache.js:103 Uncaught TypeError: Cannot read property 'fragment_cache' of undefined
at gather_fragment_cache.js:103
at Function.Pc (knockout-3.4.0.js:51)
at Function.Qc (knockout-3.4.0.js:51)
at Function.aa (knockout-3.4.0.js:50)
at Function.sa (knockout-3.4.0.js:52)
at Function.X (knockout-3.4.0.js:36)
at Function.uc (knockout-3.4.0.js:50)
at Object.Y (knockout-3.4.0.js:8)
at Object.oc (knockout-3.4.0.js:39)
```
|
1.0
|
[Develop] HPRM configs can no longer be generated - ```
gather_fragment_cache.js:103 Uncaught TypeError: Cannot read property 'fragment_cache' of undefined
at gather_fragment_cache.js:103
at Function.Pc (knockout-3.4.0.js:51)
at Function.Qc (knockout-3.4.0.js:51)
at Function.aa (knockout-3.4.0.js:50)
at Function.sa (knockout-3.4.0.js:52)
at Function.X (knockout-3.4.0.js:36)
at Function.uc (knockout-3.4.0.js:50)
at Object.Y (knockout-3.4.0.js:8)
at Object.oc (knockout-3.4.0.js:39)
```
|
process
|
hprm configs can no longer be generated gather fragment cache js uncaught typeerror cannot read property fragment cache of undefined at gather fragment cache js at function pc knockout js at function qc knockout js at function aa knockout js at function sa knockout js at function x knockout js at function uc knockout js at object y knockout js at object oc knockout js
| 1
|
9,940
| 30,791,216,033
|
IssuesEvent
|
2023-07-31 16:18:00
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Update examples to not use MSOnline module
|
automation/svc triaged assigned-to-author doc-enhancement Pri2
|
The Azure Automation example for O365 should be updated to use AzureAD module or GraphAPI instead of the MSOnline service since customers are encouraged to not use it and it doesn't support Certificate Authentication.
https://docs.microsoft.com/en-us/powershell/module/msonline/
> Note: this is the older MSOnline V1 PowerShell module for Azure Active Directory. Customers are encouraged to use the newer Azure Active Directory V2 PowerShell module instead of this module.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: df8fdc8c-e174-0a4c-44f7-0de94bc31ed4
* Version Independent ID: e1bd9924-b106-950b-4408-c62cfe4fd784
* Content: [Manage Office 365 services using Azure Automation](https://docs.microsoft.com/en-us/azure/automation/manage-office-365)
* Content Source: [articles/automation/manage-office-365.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/manage-office-365.md)
* Service: **automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
1.0
|
Update examples to not use MSOnline module - The Azure Automation example for O365 should be updated to use AzureAD module or GraphAPI instead of the MSOnline service since customers are encouraged to not use it and it doesn't support Certificate Authentication.
https://docs.microsoft.com/en-us/powershell/module/msonline/
> Note: this is the older MSOnline V1 PowerShell module for Azure Active Directory. Customers are encouraged to use the newer Azure Active Directory V2 PowerShell module instead of this module.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: df8fdc8c-e174-0a4c-44f7-0de94bc31ed4
* Version Independent ID: e1bd9924-b106-950b-4408-c62cfe4fd784
* Content: [Manage Office 365 services using Azure Automation](https://docs.microsoft.com/en-us/azure/automation/manage-office-365)
* Content Source: [articles/automation/manage-office-365.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/manage-office-365.md)
* Service: **automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
non_process
|
update examples to not use msonline module the azure automation example for should be updated to use azuread module or graphapi instead of the msonline service since customers are encouraged to not use it and it doesn t support certificate authentication note this is the older msonline powershell module for azure active directory customers are encouraged to use the newer azure active directory powershell module instead of this module document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login mgoedtel microsoft alias magoedte
| 0
|
183,598
| 14,946,892,721
|
IssuesEvent
|
2021-01-26 07:43:03
|
dapr/components-contrib
|
https://api.github.com/repos/dapr/components-contrib
|
closed
|
Update the Secrets Component documentation
|
area/runtime/secret documentation good first issue kind/bug
|
1) Get Secrets Interface documentation needs updating. This PR https://github.com/dapr/components-contrib/issues/309 needs to be applied to this document
https://github.com/dapr/components-contrib/tree/master/secretstores.
```
type SecretStore interface {
	// Init authenticates with the actual secret store and performs other init operation
	Init(metadata Metadata) error
	// GetSecret retrieves a secret using a key and returns a map of decrypted string/string values
	GetSecret(req GetSecretRequest) (GetSecretResponse, error)
	// BulkGetSecrets retrieves all secrets in the store and returns a map of decrypted string/string values
	BulkGetSecret(req BulkGetSecretRequest) (GetSecretResponse, error)
}
```
2) The list of supported secret stores is incomplete and needs updating. Need to add the local stores to this list. It would also be good to provide links to the documentation for each of these stores, such as the docs for AWS Secret Manager, in this table. See below
Currently supported secret stores are:
Kubernetes
Hashicorp Vault
Azure KeyVault
AWS Secret manager
GCP Cloud KMS
GCP Secret Manager
|
1.0
|
Update the Secrets Component documentation - 1) Get Secrets Interface documentation needs updating. This PR https://github.com/dapr/components-contrib/issues/309 needs to be applied to this document
https://github.com/dapr/components-contrib/tree/master/secretstores.
```
type SecretStore interface {
	// Init authenticates with the actual secret store and performs other init operation
	Init(metadata Metadata) error
	// GetSecret retrieves a secret using a key and returns a map of decrypted string/string values
	GetSecret(req GetSecretRequest) (GetSecretResponse, error)
	// BulkGetSecrets retrieves all secrets in the store and returns a map of decrypted string/string values
	BulkGetSecret(req BulkGetSecretRequest) (GetSecretResponse, error)
}
```
2) The list of supported secret stores is incomplete and needs updating. Need to add the local stores to this list. It would also be good to provide links to the documentation for each of these stores, such as the docs for AWS Secret Manager, in this table. See below
Currently supported secret stores are:
Kubernetes
Hashicorp Vault
Azure KeyVault
AWS Secret manager
GCP Cloud KMS
GCP Secret Manager
|
non_process
|
update the secrets component documentation get secrets interface documentation needs updating this pr needs to be applied to this document type secretstore interface init authenticates with the actual secret store and performs other init operation init metadata metadata error getsecret retrieves a secret using a key and returns a map of decrypted string string values getsecret req getsecretrequest getsecretresponse error bulkgetsecrets retrieves all secrets in the store and returns a map of decrypted string string values bulkgetsecret req bulkgetsecretrequest getsecretresponse error the list of supported secrets stores is incomplete and needs updating need to add the local stores to this list would also be good to provide links to the documentation for each of these stores such as the docs for aws secret manager in this table see below currently supported secret stores are kubernetes hashicorp vault azure keyvault aws secret manager gcp cloud kms gcp secret manager
| 0
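For illustration only, a Python analogue of the Go `SecretStore` interface quoted in the record above; the Go source is authoritative, and the class and method names below are just a sketch:
```
# Illustrative Python mirror of the Go SecretStore interface; not dapr code.
from abc import ABC, abstractmethod
from typing import Dict

class SecretStore(ABC):
    @abstractmethod
    def init(self, metadata: Dict[str, str]) -> None:
        """Authenticate with the actual secret store and perform other init work."""

    @abstractmethod
    def get_secret(self, key: str) -> Dict[str, str]:
        """Retrieve one secret as a map of decrypted string/string values."""

    @abstractmethod
    def bulk_get_secret(self) -> Dict[str, Dict[str, str]]:
        """Retrieve all secrets in the store, keyed by secret name."""
```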
|
75,102
| 20,626,778,739
|
IssuesEvent
|
2022-03-07 23:42:36
|
PyAV-Org/PyAV
|
https://api.github.com/repos/PyAV-Org/PyAV
|
closed
|
pkg-config returned flags we don't understand: -pthread -pthread
|
build
|
## Overview
When installing PyAV with the following command:
pip install av --no-binary av
The log shows the error:
pkg-config returned flags we don't understand: -pthread -pthread
This is due to incorrect passing of the result of `pkg-config` which outputs lines like:
root@cd3ee8976f50:/# pkg-config --cflags --libs libavformat
-lavformat -lm -lbz2 -lz -lavcodec -pthread -lm -llzma -lcrystalhd -lz -lva -ldl -lswresample -lm -lavutil -pthread -lva-drm -lva -lva-x11 -lva -lvdpau -lX11 -lm -lva -lXv -lX11 -lXext -ldl
root@cd3ee8976f50:/# pkg-config --cflags --libs libavcodec
-lavcodec -pthread -lm -llzma -lcrystalhd -lz -lva -ldl -lswresample -lm -lavutil -pthread -lva-drm -lva -lva-x11 -lva -lvdpau -lX11 -lm -lva -lXv -lX11 -lXext -ldl
root@cd3ee8976f50:/# pkg-config --cflags --libs libavdevice
-lavdevice -lm -lxcb -lxcb-shm -lxcb-shape -lxcb-xfixes -lasound -lSDL2 -lsndio -lXv -lX11 -lXext -lavfilter -pthread -lm -lnppig -lnppicc -lnppc -lnppidei -lva -ldl -lswscale -lm -lavformat -lm -lbz2 -lz -lavcodec -pthread -lm -llzma -lcrystalhd -lz -lva -ldl -lswresample -lm -lavutil -pthread -lva-drm -lva -lva-x11 -lva -lvdpau -lX11 -lm -lva -lXv -lX11 -lXext -ldl
root@cd3ee8976f50:/# pkg-config --cflags --libs libavutil
-lavutil -pthread -lva-drm -lva -lva-x11 -lva -lvdpau -lX11 -lm -lva -lXv -lX11 -lXext -ldl
root@cd3ee8976f50:/# pkg-config --cflags --libs libavfilter
-lavfilter -pthread -lm -lnppig -lnppicc -lnppc -lnppidei -lva -ldl -lswscale -lm -lavformat -lm -lbz2 -lz -lavcodec -pthread -lm -llzma -lcrystalhd -lz -lva -ldl -lswresample -lm -lavutil -pthread -lva-drm -lva -lva-x11 -lva -lvdpau -lX11 -lm -lva -lXv -lX11 -lXext -ldl
root@cd3ee8976f50:/# pkg-config --cflags --libs libswscale
-lswscale -lm -lavutil -pthread -lva-drm -lva -lva-x11 -lva -lvdpau -lX11 -lm -lva -lXv -lX11 -lXext -ldl
## Expected behavior
The build should not fail when `-pthread` is returned by `pkg-config --cflags --libs`
## Actual behavior
The build fails when `-pthread` is returned by `pkg-config --cflags --libs`
## Versions
- OS:
```# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.5 LTS
Release: 18.04
Codename: bionic
```
- PyAV build:
```
# python setup.py config --verbose
running config
PyAV: 8.0.2 (unknown commit)
Python: 3.6.9 (default, Jul 17 2020, 12:50:27) \n[GCC 8.4.0]
platform: Linux-4.15.0-76-generic-x86_64-with-Ubuntu-18.04-bionic
extension_extra:
include_dirs: [b'include']
libraries: [b'avformat', b'm', b'bz2', b'z', b'avcodec', b'lzma', b'crystalhd', b'va', b'dl', b'swresample', b'avutil', b'va-drm', b'va-x11', b'vdpau', b'X11', b'Xv', b'Xext', b'avdevice', b'xcb', b'xcb-shm', b'xcb-shape', b'xcb-xfixes', b'asound', b'SDL2', b'sndio', b'avfilter', b'nppig', b'nppicc', b'nppc', b'nppidei', b'swscale']
library_dirs: []
threads: [b'thread']
define_macros: []
runtime_library_dirs: []
config_macros:
PYAV_COMMIT_STR="unknown-commit"
PYAV_VERSION=8.0.2
PYAV_VERSION_STR="8.0.2"
```
- FFmpeg:
```
LD_LIBRARY_PATH=/usr/local/cuda-11.1/targets/x86_64-linux/lib/:$LD_LIBRARY_PATH ffmpeg -version
ffmpeg version n4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
configuration: --prefix=/usr --enable-nonfree --enable-cuda-nvcc --enable-libnpp --extra-cflags='-I/usr/local/cuda/include -I/usr/local/cuda-11.1/targets/x86_64-linux/include/' --extra-ldflags='-L/usr/local/cuda/lib64 -L/usr/local/cuda-11.1/targets/x86_64-linux/lib/'
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
```
## Research
I have done the following:
- [Y] Checked the [PyAV documentation](https://pyav.org/docs)
- [Y] Searched on [Google](https://www.google.com/search?q=pyav+how+do+I+foo)
- [Y] Searched on [Stack Overflow](https://stackoverflow.com/search?q=pyav)
- [Y] Looked through [old GitHub issues](https://github.com/PyAV-Org/PyAV/issues?&q=is%3Aissue)
- [N] Asked on [PyAV Gitter](https://gitter.im/PyAV-Org)
- [N] ... and waited 72 hours for a response.
## Additional context
I am using this library on the docker image:
```tensorflow/tensorflow:2.3.1-gpu```
|
1.0
|
pkg-config returned flags we don't understand: -pthread -pthread - ## Overview
When installing PyAV with the following command:
pip install av --no-binary av
The log shows the error:
pkg-config returned flags we don't understand: -pthread -pthread
This is due to incorrect passing of the result of `pkg-config` which outputs lines like:
root@cd3ee8976f50:/# pkg-config --cflags --libs libavformat
-lavformat -lm -lbz2 -lz -lavcodec -pthread -lm -llzma -lcrystalhd -lz -lva -ldl -lswresample -lm -lavutil -pthread -lva-drm -lva -lva-x11 -lva -lvdpau -lX11 -lm -lva -lXv -lX11 -lXext -ldl
root@cd3ee8976f50:/# pkg-config --cflags --libs libavcodec
-lavcodec -pthread -lm -llzma -lcrystalhd -lz -lva -ldl -lswresample -lm -lavutil -pthread -lva-drm -lva -lva-x11 -lva -lvdpau -lX11 -lm -lva -lXv -lX11 -lXext -ldl
root@cd3ee8976f50:/# pkg-config --cflags --libs libavdevice
-lavdevice -lm -lxcb -lxcb-shm -lxcb-shape -lxcb-xfixes -lasound -lSDL2 -lsndio -lXv -lX11 -lXext -lavfilter -pthread -lm -lnppig -lnppicc -lnppc -lnppidei -lva -ldl -lswscale -lm -lavformat -lm -lbz2 -lz -lavcodec -pthread -lm -llzma -lcrystalhd -lz -lva -ldl -lswresample -lm -lavutil -pthread -lva-drm -lva -lva-x11 -lva -lvdpau -lX11 -lm -lva -lXv -lX11 -lXext -ldl
root@cd3ee8976f50:/# pkg-config --cflags --libs libavutil
-lavutil -pthread -lva-drm -lva -lva-x11 -lva -lvdpau -lX11 -lm -lva -lXv -lX11 -lXext -ldl
root@cd3ee8976f50:/# pkg-config --cflags --libs libavfilter
-lavfilter -pthread -lm -lnppig -lnppicc -lnppc -lnppidei -lva -ldl -lswscale -lm -lavformat -lm -lbz2 -lz -lavcodec -pthread -lm -llzma -lcrystalhd -lz -lva -ldl -lswresample -lm -lavutil -pthread -lva-drm -lva -lva-x11 -lva -lvdpau -lX11 -lm -lva -lXv -lX11 -lXext -ldl
root@cd3ee8976f50:/# pkg-config --cflags --libs libswscale
-lswscale -lm -lavutil -pthread -lva-drm -lva -lva-x11 -lva -lvdpau -lX11 -lm -lva -lXv -lX11 -lXext -ldl
## Expected behavior
The build should not fail when `-pthread` is returned by `pkg-config --cflags --libs`
## Actual behavior
The build fails when `-pthread` is returned by `pkg-config --cflags --libs`
## Versions
- OS:
```# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.5 LTS
Release: 18.04
Codename: bionic
```
- PyAV build:
```
# python setup.py config --verbose
running config
PyAV: 8.0.2 (unknown commit)
Python: 3.6.9 (default, Jul 17 2020, 12:50:27) \n[GCC 8.4.0]
platform: Linux-4.15.0-76-generic-x86_64-with-Ubuntu-18.04-bionic
extension_extra:
include_dirs: [b'include']
libraries: [b'avformat', b'm', b'bz2', b'z', b'avcodec', b'lzma', b'crystalhd', b'va', b'dl', b'swresample', b'avutil', b'va-drm', b'va-x11', b'vdpau', b'X11', b'Xv', b'Xext', b'avdevice', b'xcb', b'xcb-shm', b'xcb-shape', b'xcb-xfixes', b'asound', b'SDL2', b'sndio', b'avfilter', b'nppig', b'nppicc', b'nppc', b'nppidei', b'swscale']
library_dirs: []
threads: [b'thread']
define_macros: []
runtime_library_dirs: []
config_macros:
PYAV_COMMIT_STR="unknown-commit"
PYAV_VERSION=8.0.2
PYAV_VERSION_STR="8.0.2"
```
- FFmpeg:
```
LD_LIBRARY_PATH=/usr/local/cuda-11.1/targets/x86_64-linux/lib/:$LD_LIBRARY_PATH ffmpeg -version
ffmpeg version n4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
configuration: --prefix=/usr --enable-nonfree --enable-cuda-nvcc --enable-libnpp --extra-cflags='-I/usr/local/cuda/include -I/usr/local/cuda-11.1/targets/x86_64-linux/include/' --extra-ldflags='-L/usr/local/cuda/lib64 -L/usr/local/cuda-11.1/targets/x86_64-linux/lib/'
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
```
## Research
I have done the following:
- [Y] Checked the [PyAV documentation](https://pyav.org/docs)
- [Y] Searched on [Google](https://www.google.com/search?q=pyav+how+do+I+foo)
- [Y] Searched on [Stack Overflow](https://stackoverflow.com/search?q=pyav)
- [Y] Looked through [old GitHub issues](https://github.com/PyAV-Org/PyAV/issues?&q=is%3Aissue)
- [N] Asked on [PyAV Gitter](https://gitter.im/PyAV-Org)
- [N] ... and waited 72 hours for a response.
## Additional context
I am using this library on the docker image:
```tensorflow/tensorflow:2.3.1-gpu```
|
non_process
|
pkg config returned flags we don t understand pthread pthread overview when installing pyav with the following command pip install av no binary av the log shows the error pkg config returned flags we don t understand pthread pthread this is due to incorrect passing of the result of pkg config which outputs lines like root pkg config cflags libs libavformat lavformat lm lz lavcodec pthread lm llzma lcrystalhd lz lva ldl lswresample lm lavutil pthread lva drm lva lva lva lvdpau lm lva lxv lxext ldl root pkg config cflags libs libavcodec lavcodec pthread lm llzma lcrystalhd lz lva ldl lswresample lm lavutil pthread lva drm lva lva lva lvdpau lm lva lxv lxext ldl root pkg config cflags libs libavdevice lavdevice lm lxcb lxcb shm lxcb shape lxcb xfixes lasound lsndio lxv lxext lavfilter pthread lm lnppig lnppicc lnppc lnppidei lva ldl lswscale lm lavformat lm lz lavcodec pthread lm llzma lcrystalhd lz lva ldl lswresample lm lavutil pthread lva drm lva lva lva lvdpau lm lva lxv lxext ldl root pkg config cflags libs libavutil lavutil pthread lva drm lva lva lva lvdpau lm lva lxv lxext ldl root pkg config cflags libs libavfilter lavfilter pthread lm lnppig lnppicc lnppc lnppidei lva ldl lswscale lm lavformat lm lz lavcodec pthread lm llzma lcrystalhd lz lva ldl lswresample lm lavutil pthread lva drm lva lva lva lvdpau lm lva lxv lxext ldl root pkg config cflags libs libswscale lswscale lm lavutil pthread lva drm lva lva lva lvdpau lm lva lxv lxext ldl expected behavior the build should not fail when pthread is returned by pkg config cflags libs actual behavior the build fails when pthread is returned by pkg config cflags libs versions os lsb release a no lsb modules are available distributor id ubuntu description ubuntu lts release codename bionic pyav build python setup py config verbose running config pyav unknown commit python default jul n platform linux generic with ubuntu bionic extension extra include dirs libraries library dirs threads define macros runtime library dirs config macros pyav commit str unknown commit pyav version pyav version str ffmpeg ld library path usr local cuda targets linux lib ld library path ffmpeg version ffmpeg version copyright c the ffmpeg developers built with gcc ubuntu configuration prefix usr enable nonfree enable cuda nvcc enable libnpp extra cflags i usr local cuda include i usr local cuda targets linux include extra ldflags l usr local cuda l usr local cuda targets linux lib libavutil libavcodec libavformat libavdevice libavfilter libswscale libswresample research i have done the following checked the searched on searched on looked through asked on and waited hours for a response additional context i am using this library on the docker image tensorflow tensorflow gpu
| 0
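The parsing problem reported above is generic: a setup script that only recognizes `-I`, `-L`, and `-l` will choke on `-pthread`. A sketch of a more tolerant parser follows; the bucket names are assumptions, and this is not PyAV's actual setup code:
```
# Hypothetical pkg-config output parser that tolerates flags like -pthread
# by routing anything unrecognized into extra compile/link arguments.
import shlex
import subprocess

def parse_pkg_config(*packages):
    out = subprocess.check_output(
        ["pkg-config", "--cflags", "--libs", *packages], text=True
    )
    cfg = {"include_dirs": [], "library_dirs": [], "libraries": [],
           "extra_compile_args": [], "extra_link_args": []}
    for flag in shlex.split(out):
        if flag.startswith("-I"):
            cfg["include_dirs"].append(flag[2:])
        elif flag.startswith("-L"):
            cfg["library_dirs"].append(flag[2:])
        elif flag.startswith("-l"):
            cfg["libraries"].append(flag[2:])
        else:
            # e.g. -pthread: pass it through instead of failing the build.
            cfg["extra_compile_args"].append(flag)
            cfg["extra_link_args"].append(flag)
    return cfg
```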
|
22,630
| 31,876,929,218
|
IssuesEvent
|
2023-09-16 00:33:40
|
h4sh5/npm-auto-scanner
|
https://api.github.com/repos/h4sh5/npm-auto-scanner
|
opened
|
@expo/cli 0.10.12 has 2 guarddog issues
|
npm-install-script npm-silent-process-execution
|
```{"npm-install-script":[{"code":" \"prepare\": \"taskr release\",","location":"package/package.json:15","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":" const emulatorProcess = (0, _childProcess).spawn(whichEmulator(), [\n `@${device.name}`\n ], {\n stdio: \"ignore\",\n detached: true\n });","location":"package/build/src/start/platforms/android/emulator.js:73","message":"This package is silently executing another executable"}]}```
|
1.0
|
@expo/cli 0.10.12 has 2 guarddog issues - ```{"npm-install-script":[{"code":" \"prepare\": \"taskr release\",","location":"package/package.json:15","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":" const emulatorProcess = (0, _childProcess).spawn(whichEmulator(), [\n `@${device.name}`\n ], {\n stdio: \"ignore\",\n detached: true\n });","location":"package/build/src/start/platforms/android/emulator.js:73","message":"This package is silently executing another executable"}]}```
|
process
|
expo cli has guarddog issues npm install script npm silent process execution n stdio ignore n detached true n location package build src start platforms android emulator js message this package is silently executing another executable
| 1
|
114,784
| 24,663,234,644
|
IssuesEvent
|
2022-10-18 08:23:34
|
Regalis11/Barotrauma
|
https://api.github.com/repos/Regalis11/Barotrauma
|
closed
|
new ugcId logic in master 0.19 matches when both are none
|
Bug Code Modding Unstable
|
### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [X] My issue happened while using mods.
### What happened?
Originally the content path logic matches `%ModDir:xxx%` first with workshop id (ignores if not present, `workshopid==0`), then package name then alt-names.
Now that `steamworkshopid` logic is changed to use the new generic `ugcid`, but `Option<T>` of `none` will equal `Option<T>` of `none`. This means all local mod packages will match each other (and also match vanilla).
The code affected (`Barotrauma/Barotrauma/BarotraumaShared/SharedSource/ContentManagement/ContentPath.cs`, lines `56-65`):
```
foreach (Identifier otherModName in otherMods)
{
Option<ContentPackageId> ugcId = ContentPackageId.Parse(otherModName.Value);
ContentPackage? otherMod =
allPackages.FirstOrDefault(p => ugcId == p.UgcId)
?? allPackages.FirstOrDefault(p => p.Name == otherModName)
?? allPackages.FirstOrDefault(p => p.NameMatches(otherModName))
?? throw new MissingContentPackageException(ContentPackage, otherModName.Value);
cachedValue = cachedValue.Replace(string.Format(OtherModDirFmt, otherModName.Value), Path.GetDirectoryName(otherMod.Path));
}
```
Suggested change:
```
allPackages.FirstOrDefault(p => ugcId.IsSome() && ugcId == p.UgcId)
```
### Reproduction steps
Have one local mod that references its contents with "%ModDir%" or "%ModDir:xxx%", and the resource will not load.
### Bug prevalence
Happens every time I play
### Version
0.19.10.0
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
```shell
I think things are clear enough.
```
|
1.0
|
new ugcId logic in master 0.19 matches when both are none - ### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [X] My issue happened while using mods.
### What happened?
Originally the content path logic matches `%ModDir:xxx%` first with workshop id (ignores if not present, `workshopid==0`), then package name then alt-names.
Now that `steamworkshopid` logic is changed to use the new generic `ugcid`, but `Option<T>` of `none` will equal `Option<T>` of `none`. This means all local mod packages will match each other (and also match vanilla).
The code affected (`Barotrauma/Barotrauma/BarotraumaShared/SharedSource/ContentManagement/ContentPath.cs`, lines `56-65`):
```
foreach (Identifier otherModName in otherMods)
{
Option<ContentPackageId> ugcId = ContentPackageId.Parse(otherModName.Value);
ContentPackage? otherMod =
allPackages.FirstOrDefault(p => ugcId == p.UgcId)
?? allPackages.FirstOrDefault(p => p.Name == otherModName)
?? allPackages.FirstOrDefault(p => p.NameMatches(otherModName))
?? throw new MissingContentPackageException(ContentPackage, otherModName.Value);
cachedValue = cachedValue.Replace(string.Format(OtherModDirFmt, otherModName.Value), Path.GetDirectoryName(otherMod.Path));
}
```
Suggested change:
```
allPackages.FirstOrDefault(p => ugcId.IsSome() && ugcId == p.UgcId)
```
### Reproduction steps
Have one local mod that references its contents with "%ModDir%" or "%ModDir:xxx%", and the resource will not load.
### Bug prevalence
Happens every time I play
### Version
0.19.10.0
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
```shell
I think things are clear enough.
```
|
non_process
|
new ugcid logic in master matches when both are none disclaimers i have searched the issue tracker to check if the issue has already been reported my issue happened while using mods what happened originally the content path logic matches moddir xxx first with workshop id ignores if not present workshopid then package name then alt names now that steamworkshopid logic is changed to use the new generic ugcid but option of none will equal option of none this means all local mod packages will match each other and also match vanilla the code affected barotrauma barotrauma barotraumashared sharedsource contentmanagement contentpath cs lines foreach identifier othermodname in othermods option ugcid contentpackageid parse othermodname value contentpackage othermod allpackages firstordefault p ugcid p ugcid allpackages firstordefault p p name othermodname allpackages firstordefault p p namematches othermodname throw new missingcontentpackageexception contentpackage othermodname value cachedvalue cachedvalue replace string format othermoddirfmt othermodname value path getdirectoryname othermod path suggested change allpackages firstordefault p ugcid issome ugcid p ugcid reproduction steps have one local mod that references its contents with moddir or moddir xxx and the resource will not load bug prevalence happens every time i play version no response which operating system did you encounter this bug on windows relevant error messages and crash reports shell i think things are clear enough
| 0
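The pitfall in the record above generalizes beyond C#: an "absent" sentinel compares equal to itself, so an id-only match must first check presence. A tiny Python analogue of the suggested `IsSome()` guard, with names that mirror the C# purely for illustration:
```
# Illustrative: None == None is True, so matching on an optional id must
# first confirm the id is present, as the suggested IsSome() guard does.
def find_package(packages, ugc_id):
    # Bad: when ugc_id is None, this matches every local package whose
    # ugc_id is also None (and vanilla too).
    # return next((p for p in packages if p.ugc_id == ugc_id), None)

    # Good: only match on ugc_id when one was actually parsed.
    return next(
        (p for p in packages if ugc_id is not None and p.ugc_id == ugc_id),
        None,
    )
```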
|
1,074
| 3,541,436,838
|
IssuesEvent
|
2016-01-19 01:01:23
|
t3kt/vjzual2
|
https://api.github.com/repos/t3kt/vjzual2
|
closed
|
blur strength masking in the zoom module
|
enhancement video processing
|
use a source selector to get an image, transform it to monochrome, and use luminance to alter blur strength at each pixel.
see #273
|
1.0
|
blur strength masking in the zoom module - use a source selector to get an image, transform it to monochrome, and use luminance to alter blur strength at each pixel.
see #273
|
process
|
blur strength masking in the zoom module use a source selector to get an image transform it to monochrome and use luminance to alter blur strength at each pixel see
| 1
|
16,372
| 21,089,192,764
|
IssuesEvent
|
2022-04-04 01:29:50
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
child_process kill() swallows synchronously known problems (does not error out)
|
child_process feature request stale
|
<!--
Thank you for reporting a possible bug in Node.js.
Please fill in as much of the template below as you can.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify the affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you can.
-->
* **Version**: v12.13.1
* **Platform**: Linux 5.3.11
* **Subsystem**: child_process
<!-- Please provide more details below this comment. -->
See https://gist.github.com/jgehrcke/ab4656353c1155173d2dde5ffceb0d0b for repro code. The output when executing this:
```
$ node kill_no_pid_repro_js.js
2019-11-26T16:49:50.509Z info: waiting for the startup error to be handled
2019-11-26T16:49:50.511Z info: loop iteration
2019-11-26T16:49:50.514Z error: 'error' handler: Error: spawn does-not-exist-- ENOENT
at Process.ChildProcess._handle.onexit (internal/child_process.js:264:19)
at onErrorNT (internal/child_process.js:456:16)
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
errno: 'ENOENT',
code: 'ENOENT',
syscall: 'spawn does-not-exist--',
path: 'does-not-exist--',
spawnargs: []
}
2019-11-26T16:49:50.521Z error: child process startup error
2019-11-26T16:49:50.521Z info: send SIGTERM
2019-11-26T16:49:51.523Z info: send SIGTERM
```
The log output with millisecond resolution shows the chronological order of events.
In the repro code I use an asynchronous startup error detection technique, using the "error" event. In the log output we can see that this event is being caught, indicating ENOENT (command/executable not found, as desired in this repro). That's great.
After this one can still call the [`kill()` method](https://nodejs.org/api/child_process.html#child_process_subprocess_kill_signal) on the process. The call "succeeds", silently swallowing a problem. The problem as I would put it into words: "there is no process that you can kill here", and that problem should _not_ be silently swallowed, because it makes it too easy to write code with race conditions.
I believe that this should be considered a bug in NodeJS: this is a programmer error, i.e. this should throw an Error, as it would in other programming environments. If unhandled, this should crash the code, indicating to the programmer that their assumption that the process is alive was wrong. The programmer should be required to explicitly handle an error thrown by `kill()`.
It's not necessary to demonstrate the struggle, but maybe makes it easier to understand: the repro code calls kill() another time, about 1 second after the startup error had happened. That kill() also swallows the problem.
I don't know if internally the `kill()` method just is a noop in this case (where the runtime knows that there is no PID to issue a `kill()` system call to), or if it actually calls the system call and then swallows the ENOENT. But in both cases it makes a conscious choice, knows about the absence of the process, and hides the erroneous attempt from the programmer.
There could be two ways to check for the error synchronously:
- The underlying `kill()` system call does fail with ENOENT, if it is even executed.
- If the underlying `kill()` system call is not executed then the runtime seems to have internal state about the fact that the process is not there (I am pretty sure that this is what's happening, see my code comments in the repro code). That state could be used.
It is documented that
> The ChildProcess object may emit an 'error' event if the signal cannot be delivered
If this is meant to be _the_ only, reliable, documented way to find out that a `kill()` failed (not quite clear from the documentation) then I think the repro code is also quite insightful: in the repro code that event handler is not called after the erroneous `kill()`. That's in fact proven by the additional 1-second wait, during which I would expect that handler to be called.
In the repro code comments I have pointed out that the runtime magically detects that the child process is gone, and it terminates the code prematurely, to prevent an indefinitely long wait from happening.
That is, I think this issue mainly reveals a rather mean inconsistency where the runtime sometimes seems to consider the fact that there is no process, and sometimes it doesn't, making it difficult to write robust child process management code.
Note: currently there does not seem to be a documented synchronous way to detect a child process startup error (upon common system call errors such as ENOENT and EACCES), and no documented synchronous way to check for process "liveness", although both could be done synchronously with "fast" system calls. This might be strongly related to this topic here. Related: https://github.com/eclipse-theia/theia/pull/3447 and https://github.com/nodejs/help/issues/1191.
|
1.0
|
child_process kill() swallows synchronously known problems (does not error out) - <!--
Thank you for reporting a possible bug in Node.js.
Please fill in as much of the template below as you can.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify the affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you can.
-->
* **Version**: v12.13.1
* **Platform**: Linux 5.3.11
* **Subsystem**: child_process
<!-- Please provide more details below this comment. -->
See https://gist.github.com/jgehrcke/ab4656353c1155173d2dde5ffceb0d0b for repro code. The output when executing this:
```
$ node kill_no_pid_repro_js.js
2019-11-26T16:49:50.509Z info: waiting for the startup error to be handled
2019-11-26T16:49:50.511Z info: loop iteration
2019-11-26T16:49:50.514Z error: 'error' handler: Error: spawn does-not-exist-- ENOENT
at Process.ChildProcess._handle.onexit (internal/child_process.js:264:19)
at onErrorNT (internal/child_process.js:456:16)
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
errno: 'ENOENT',
code: 'ENOENT',
syscall: 'spawn does-not-exist--',
path: 'does-not-exist--',
spawnargs: []
}
2019-11-26T16:49:50.521Z error: child process startup error
2019-11-26T16:49:50.521Z info: send SIGTERM
2019-11-26T16:49:51.523Z info: send SIGTERM
```
The log output with millisecond resolution shows the chronological order of events.
In the repro code I use an asynchronous startup error detection technique, using the "error" event. In the log output we can see that this event is being caught, indicating ENOENT (command/executable not found, as desired in this repro). That's great.
After this one can still call the [`kill()` method](https://nodejs.org/api/child_process.html#child_process_subprocess_kill_signal) on the process. The call "succeeds", silently swallowing a problem. The problem as I would put it into words: "there is no process that you can kill here", and that problem should _not_ be silently swallowed, because it makes it too easy to write code with race conditions.
I believe that this should be considered a bug in NodeJS: this is a programmer error, i.e. this should throw an Error, as it would in other programming environments. If unhandled, this should crash the code, indicating to the programmer that their assumption that the process is alive was wrong. The programmer should be required to explicitly handle an error thrown by `kill()`.
It's not necessary to demonstrate the struggle, but maybe makes it easier to understand: the repro code calls kill() another time, about 1 second after the startup error had happened. That kill() also swallows the problem.
I don't know if internally the `kill()` method just is a noop in this case (where the runtime knows that there is no PID to issue a `kill()` system call to), or if it actually calls the system call and then swallows the ENOENT. But in both cases it makes a conscious choice, knows about the absence of the process, and hides the erroneous attempt from the programmer.
There could be two ways to check for the error synchronously:
- The underlying `kill()` system call does fail with ENOENT, if it is even executed.
- If the underlying `kill()` system call is not executed then the runtime seems to have internal state about the fact that the process is not there (I am pretty sure that this is what's happening, see my code comments in the repro code). That state could be used.
It is documented that
> The ChildProcess object may emit an 'error' event if the signal cannot be delivered
If this is meant to be _the_ only, reliable, documented way to find out that a `kill()` failed (not quite clear from the documentation) then I think the repro code is also quite insightful: in the repro code that event handler is not called after the erroneous `kill()`. That's in fact proven by the additional 1-second wait, during which I would expect that handler to be called.
In the repro code comments I have pointed out that the runtime magically detects that the child process is gone, and it terminates the code prematurely, to prevent an indefinitely long wait from happening.
That is, I think this issue mainly reveals a rather mean inconsistency where the runtime sometimes seems to consider the fact that there is no process, and sometimes it doesn't, making it difficult to write robust child process management code.
Note: currently there does not seem to be a documented synchronous way to detect a child process startup error (upon common system call errors such as ENOENT and EACCES), and no documented synchronous way to check for process "liveness", although both could be done synchronously with "fast" system calls. This might be strongly related to this topic here. Related: https://github.com/eclipse-theia/theia/pull/3447 and https://github.com/nodejs/help/issues/1191.
|
process
|
child process kill swallows synchronously known problems does not error out thank you for reporting a possible bug in node js please fill in as much of the template below as you can version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify the affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you can version platform linux subsystem child process see for repro code the output when executing this node kill no pid repro js js info waiting for the startup error to be handled info loop iteration error error handler error spawn does not exist enoent at process childprocess handle onexit internal child process js at onerrornt internal child process js at processticksandrejections internal process task queues js errno enoent code enoent syscall spawn does not exist path does not exist spawnargs error child process startup error info send sigterm info send sigterm the log output with millisecond resolution shows the chronological order of events in the repro code i use an asynchronous startup error detection technique using the error event in the log output we can see that this event is being caught indicating enoent command executable not found as desired in this repro that s great after this one can still call the on the process the call succeeds silently swallowing a problem the problem as i would put it into words there is no process that you can kill here and that problem should not be silently swallowed because it makes it too easy to write code with race conditions i believe that this should be considered a bug in nodejs this is a programmer error i e this should throw an error as it would in other programming environments if unhandled this should crash the code indicating to the programmer that their assumption that the process is alive was wrong the programmer should be required to explicitly handle an error thrown by kill it s not necessary to demonstrate the struggle but maybe makes it easier to understand the repro code calls kill another time about second after the startup error had happened that kill also swallows the problem i don t know if internally the kill method just is a noop in this case where the runtime knows that there is no pid to issue a kill system call to or if it actually calls the system call and then swallows the enoent but in both cases it makes a conscious choice knows about the absence of the process and hides the erroneous attempt from the programmer there could be two ways to check for the error synchronously the underlying kill system call does fail with enoent if it is even executed if the underlying kill system call is not executed then the runtime seems to have internal state about the fact that the process is not there i am pretty sure that this is what s happening see my code comments in the repro code that state could be used it is documented that the childprocess object may emit an error event if the signal cannot be delivered if this is meant to be the only reliable documented way to find out that a kill failed not quite clear from the documentation then i think the repro code is also quite insightful in the repro code that event handler is not called after the erroneous kill that s in fact proven by the additional second wait during which i would expect that handler to be called in the repro code comments i have pointed out that the runtime magically detects that the child process is gone and it 
terminates the code pre maturely to prevent an indefinitely long wait from happening that is i think this issue mainly reveals a rather mean inconsistency where the runtime sometimes seems to consider the fact that there is no process and sometimes it doesn t making it difficult to write robust child process management code note currently there does not seem to be a documented synchronous way to detect a child process startup error upon common system call errors such as enoent and eacces and no documented synchronous way to check for process liveness although both could be done synchronously with fast system calls this might be strongly related to this topic here related and
| 1
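For contrast with the Node behaviour described above, the raw `kill(2)` semantics the report asks for are what Python's `os.kill` exposes directly; a sketch of the guard-style API the author argues for (illustrative, not Node's implementation):
```
# Illustrative: os.kill surfaces the missing-process case synchronously,
# which is the behaviour the report wants from subprocess.kill().
import os
import signal

def kill_or_raise(pid, sig=signal.SIGTERM):
    try:
        os.kill(pid, sig)
    except ProcessLookupError:
        # ESRCH: "there is no process that you can kill here".
        raise RuntimeError(f"no such process: {pid}")
```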
|
53,590
| 28,299,965,248
|
IssuesEvent
|
2023-04-10 04:38:38
|
jbukuts/next-blog
|
https://api.github.com/repos/jbukuts/next-blog
|
closed
|
Decrease emoji font size
|
enhancement performance
|
Ideally I would like to use MacOS emojis for a uniform design. However, the imported `TTF` file is massive at 45MB. Possible solutions:
- Create a new subset font file at each build with only needed glyphs encoded
- Simply switch to a custom component that imports statically hosted `.png` files
|
True
|
Decrease emoji font size - Ideally I would like to use MacOS emojis for a uniform design. However, the imported `TTF` file is massive at 45MB. Possible solutions:
- Create a new subset font file at each build with only needed glyphs encoded
- Simply switch to a custom component that imports statically hosted `.png` files
|
non_process
|
decrease emoji font size ideally would like to macos emojis for uniform design however imported ttf file is massive at possible solutions create a new subset font file at each build with only needed glyphs encoded simply switch to a custom component that imports statically hosted png files
| 0
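The first option listed above (a build-time subset with only the needed glyphs) can be done with the `fonttools` package; a minimal sketch, where the paths and code-point list are placeholders:
```
# Minimal font-subsetting sketch using fonttools.
from fontTools.subset import Options, Subsetter
from fontTools.ttLib import TTFont

def subset_font(src_path, dst_path, codepoints):
    font = TTFont(src_path)
    subsetter = Subsetter(options=Options())
    subsetter.populate(unicodes=codepoints)  # e.g. [0x1F600, 0x1F601]
    subsetter.subset(font)  # drop every glyph not reachable from codepoints
    font.save(dst_path)
```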
|
10,769
| 13,564,597,342
|
IssuesEvent
|
2020-09-18 10:17:32
|
GoogleCloudPlatform/dotnet-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/dotnet-docs-samples
|
closed
|
[Spanner]: Instance Database limit (100) reached easily.
|
api: spanner priority: p1 samples type: process
|
Alternatives:
- Keep decrementing the time to consider a DB stale.
- Currently at 48hrs which already doesn't give us time to check on weekend failures.
- If there's a lot of activity on the repo, and since all tests are executed on presubmit, we can always reach the limit even if we delete the databases immediately after running the tests; we only need a set number of presubmit checks executing at the same time.
- @laljikanjareeya You mentioned something about there being leftover DBs from before. I just tried deleting them on the UI but they have backups. I will try and submit a PR just for the sake of deleting them later, tweaking the fixture code to include, just this once, DBs whose name is my-db-<not a timestamp>, and then close the PR. Maybe that fixes it and we don't have to consider anything else.
- Consider creating an instance per test run, and deleting the stale instances after 72 hours.
- It would address #1173
- What's the max number of instances that we can have? We would create one per test run, as opposed to many databases per test run?
- How long does it take to create an instance?
@skuruppu and @laljikanjareeya, thoughts welcomed.
|
1.0
|
[Spanner]: Instance Database limit (100) reached easily. - Alternatives:
- Keep decrementing the time to consider a DB stale.
- Currently at 48hrs which already doesn't give us time to check on weekend failures.
- If there's a lot of activity on the repo, and since all tests are executed on presubmit, we can always reach the limit even if we delete the databases immediately after running the tests; we only need a set number of presubmit checks executing at the same time.
- @laljikanjareeya You mentioned something about there being leftover DBs from before. I just tried deleting them on the UI but they have backups. I will try and submit a PR just for the sake of deleting them later, tweaking the fixture code to include, just this once, DBs whose name is my-db-<not a timestamp>, and then close the PR. Maybe that fixes it and we don't have to consider anything else.
- Consider creating an instance per test run, and deleting the stale instances after 72 hours.
- It would address #1173
- What's the max number of instances that we can have? We would create one per test run, as opposed to many databases per test run?
- How long does it take to create an instance?
@skuruppu and @laljikanjareeya, thoughts welcomed.
|
process
|
instance database limit reached easily alternatives keep decrementing the time to consider a db stale currently at which already doesn t give us time to check on weekend failures if there s a lot of activity on the repo and since all tests are executed on presubmit we can always reach limit even if we delete tha databases inmediately after running the tests we only need a set number of presubmit checks executing at the same time laljikanjareeya you mentioned something about there being leftover dbs from before i just tried deleting them on the ui but they have backups i will try and submit a pr just for the sake of deleting them later tweaking the fixture code to include this once dbs whose name is my db and then close the pr maybe that fixes it and we don t have to consider anything else consider creating an instance per test run and deleting the stale instances after it would address what s the max amount of intances that we can have we would created one per tests run as opposed to many databases per test run how long does it take to create an instance skuruppu and laljikanjareeya thoughst welcomed
| 1
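A sketch of the staleness check discussed above, assuming a `my-db-<unix-timestamp>` naming convention; the threshold and the name format are assumptions, not the actual fixture code:
```
# Hypothetical staleness check for test databases named my-db-<unix-timestamp>.
import re
import time

STALE_AFTER_SECONDS = 48 * 3600

def is_stale(db_name, now=None):
    m = re.fullmatch(r"my-db-(\d+)", db_name)
    if not m:
        # Not timestamp-named: a leftover to be cleaned up manually.
        return False
    created = int(m.group(1))
    return ((now or time.time()) - created) > STALE_AFTER_SECONDS
```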
|
68,798
| 9,214,864,447
|
IssuesEvent
|
2019-03-10 23:17:33
|
strongbox/strongbox
|
https://api.github.com/repos/strongbox/strongbox
|
opened
|
Add instructions on how to build the code under Windows 7 and 10 in regards to long paths
|
documentation good first issue help wanted
|
# Task Description
Building the code under Windows 7 and 10 causes similar errors to occur:
```
E:\strongbox>git diff strongbox-storage/strongbox-storage-layout-providers/strongbox-storage-maven-layout/strongbox-storage-maven-layout-provider/src/test/java/org/carlspring/strongbox/providers/repository/BaseLocalStorageProxyRepositoryExpiredArtifactsCleanerTest.java: Filename too long
strongbox-storage/strongbox-storage-layout-providers/strongbox-storage-maven-layout/strongbox-storage-maven-layout-provider/src/test/java/org/carlspring/strongbox/providers/repository/RetryDownloadArtifactWithPermanentFailureStartingAtSomePointTest.java: Filename too long
```
There are workarounds for Windows that could be used and we need to add instructions to our [wiki](https://strongbox.github.io) on how to do this.
Please, note that the documentation pages are located under the [strongbox-docs](https://github.com/strongbox/strongbox-docs) project.
# Useful Links
* [Windows Maximum Path Length Limitation](https://docs.microsoft.com/en-us/windows/desktop/fileio/naming-a-file#maximum-path-length-limitation)
# Help
* [Our chat](https://chat.carlspring.org/)
* Points of contact:
* @carlspring
* @sbespalov
* @fuss86
|
1.0
|
Add instructions on how to build the code under Windows 7 and 10 in regards to long paths - # Task Description
Building the code under Windows 7 and 10 causes similar errors to occur:
```
E:\strongbox>git diff strongbox-storage/strongbox-storage-layout-providers/strongbox-storage-maven-layout/strongbox-storage-maven-layout-provider/src/test/java/org/carlspring/strongbox/providers/repository/BaseLocalStorageProxyRepositoryExpiredArtifactsCleanerTest.java: Filename too long
strongbox-storage/strongbox-storage-layout-providers/strongbox-storage-maven-layout/strongbox-storage-maven-layout-provider/src/test/java/org/carlspring/strongbox/providers/repository/RetryDownloadArtifactWithPermanentFailureStartingAtSomePointTest.java: Filename too long
```
There are workarounds for Windows that could be used and we need to add instructions to our [wiki](https://strongbox.github.io) on how to do this.
Please, note that the documentation pages are located under the [strongbox-docs](https://github.com/strongbox/strongbox-docs) project.
# Useful Links
* [Windows Maximum Path Length Limitation](https://docs.microsoft.com/en-us/windows/desktop/fileio/naming-a-file#maximum-path-length-limitation)
# Help
* [Our chat](https://chat.carlspring.org/)
* Points of contact:
* @carlspring
* @sbespalov
* @fuss86
|
non_process
|
add instructions on how to build the code under windows and in regards to long paths task description building the code under windows and causes similar errors to occur e strongbox git diff strongbox storage strongbox storage layout providers strongbox storage maven layout strongbox storage maven layout provider src test java org carlspring strongbox providers repository baselocalstorageproxyrepositoryexpiredartifactscleanertest java filename too long strongbox storage strongbox storage layout providers strongbox storage maven layout strongbox storage maven layout provider src test java org carlspring strongbox providers repository retrydownloadartifactwithpermanentfailurestartingatsomepointtest java filename too long there are workarounds for windows that could be used and we need to add instructions to our strongbox github io on how to do this please note that the documentation pages are located under the strongbox docs project useful links help points of contact carlspring sbespalov
| 0
|
1,543
| 4,153,724,793
|
IssuesEvent
|
2016-06-16 08:52:44
|
opentrials/opentrials
|
https://api.github.com/repos/opentrials/opentrials
|
opened
|
Problem in hra processor with isrctn id
|
bug Processors
|
1-2 cases per 100
```
Processing error: TypeError('expected string or buffer',) [8] Traceback (most recent call last):
  File "processors/base/processors/trial.py", line 38, in process_trial
    trial = extractors['extract_trial'](record)
  File "processors/hra/extractors.py", line 27, in extract_trial
    'isrctn': _clean_identifier(record['isrctn_id'], prefix='ISRCTN'),
  File "processors/hra/extractors.py", line 93, in _clean_identifier
    if re.match(r'%s\d{3,}' % prefix, ident):
  File "/usr/local/lib/python2.7/re.py", line 141, in match
    return _compile(pattern, flags).match(string)
TypeError: expected string or buffer
```
|
1.0
|
Problem in hra processor with isrctn id - 1-2 cases per 100
```
Processing error: TypeError('expected string or buffer',) [8] Traceback (most recent call last):
  File "processors/base/processors/trial.py", line 38, in process_trial
    trial = extractors['extract_trial'](record)
  File "processors/hra/extractors.py", line 27, in extract_trial
    'isrctn': _clean_identifier(record['isrctn_id'], prefix='ISRCTN'),
  File "processors/hra/extractors.py", line 93, in _clean_identifier
    if re.match(r'%s\d{3,}' % prefix, ident):
  File "/usr/local/lib/python2.7/re.py", line 141, in match
    return _compile(pattern, flags).match(string)
TypeError: expected string or buffer
```
|
process
|
problem in hra processor with isrctn id cases per processing error typeerror expected string or buffer traceback most recent call last file processors base processors trial py line in process trial trial extractors record file processors hra extractors py line in extract trial isrctn clean identifier record prefix isrctn file processors hra extractors py line in clean identifier if re match r s d prefix ident file usr local lib re py line in match return compile pattern flags match string typeerror expected string or buffer
| 1
|
40,640
| 2,868,933,322
|
IssuesEvent
|
2015-06-05 22:02:45
|
dart-lang/pub
|
https://api.github.com/repos/dart-lang/pub
|
closed
|
Request to host package - bench 0.0.5
|
bug Fixed Priority-Medium Pub-HostRequest
|
_Originally opened as dart-lang/sdk#6715_
*This issue was originally filed by ross.m....@gmail.com*
_____
https://github.com/rmsmith/bench
FYI I've changed my email in the pubspec.yaml - in hindsight I prefer to have a dedicated email for the public rather than my personal email - I hope this doesn't cause any trouble to the system, but I figured I should get it changed sooner than later.
thanks for the support!
|
1.0
|
Request to host package - bench 0.0.5 - _Originally opened as dart-lang/sdk#6715_
*This issue was originally filed by ross.m....@gmail.com*
_____
https://github.com/rmsmith/bench
FYI I've changed my email in the pubspec.yaml - in hindsight I prefer to have a dedicated email for the public rather than my personal email - I hope this doesn't cause any trouble to the system, but I figured I should get it changed sooner than later.
thanks for the support!
|
non_process
|
request to host package bench originally opened as dart lang sdk this issue was originally filed by ross m gmail com fyi i ve changed my email in the pubspec yaml in hindsight i prefer to have a dedicated email for the public rather than my personal email i hope this doesn t cause any trouble to the system but i figured i should get it changed sooner than later thanks for the support
| 0
|
692,604
| 23,742,464,944
|
IssuesEvent
|
2022-08-31 13:31:43
|
Ithil-protocol/frontend
|
https://api.github.com/repos/Ithil-protocol/frontend
|
closed
|
Small numbers are rounded to zero
|
max priority task
|
In Margin Trading with a 1 DAI margin and 1x leverage, the UI shows 0 WETH
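This looks like a display-precision problem rather than a calculation one: a fraction of a WETH rounds to zero at two decimals. An illustrative Python snippet (the 0.00052 figure is invented for the example):
```python
# Fixed two-decimal formatting hides small token amounts, while
# significant-digit formatting keeps them visible.
value = 0.00052  # e.g. WETH received for a 1 DAI margin at 1x leverage
print(f"{value:.2f}")  # '0.00'    -- what the UI effectively shows
print(f"{value:.3g}")  # '0.00052' -- three significant digits
```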
|
1.0
|
Small numbers are rounded to zero - In Margin Trading with a 1 DAI margin and 1x leverage, the UI shows 0 WETH
|
non_process
|
small numbers are rounded to zero in margin trading with a dai margin and leverage the ui shows weth
| 0
|
209,976
| 7,181,961,542
|
IssuesEvent
|
2018-02-01 07:59:54
|
triton/triton
|
https://api.github.com/repos/triton/triton
|
opened
|
meta.licenses rewrite
|
priority 2 rewrite
|
### stdenv
- [ ] Rewrite license evaluation to fix incompatible licenses
### lib/licenses.nix
- [ ] Implement redistributable boolean
- [ ] Implement booleans for license compatibility
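One possible shape for that metadata, sketched in Python rather than Nix for brevity (all field names here are hypothetical):
```python
# Each license record carries a redistributable flag plus explicit
# compatibility booleans that the stdenv evaluation can consult.
licenses = {
    "gpl3Only": {"redistributable": True, "compatibleWith": {"lgpl3": True, "unfree": False}},
    "unfree": {"redistributable": False, "compatibleWith": {}},
}

def compatible(a, b):
    # Unknown pairs default to incompatible, matching the goal of
    # rejecting incompatible license combinations during evaluation.
    return licenses[a]["compatibleWith"].get(b, False)
```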
|
1.0
|
meta.licenses rewrite - ### stdenv
- [ ] Rewrite license evaluation to fix incompatible licenses
### lib/licenses.nix
- [ ] Implement redistributable boolean
- [ ] Implement booleans for license compatibility
|
non_process
|
meta licenses rewrite stdenv rewrite license evaluation to fix incompatible licenses lib licenses nix implement redistributable boolean implement booleans for license compatibility
| 0
|
47,772
| 25,181,724,281
|
IssuesEvent
|
2022-11-11 14:16:37
|
hajimehoshi/ebiten
|
https://api.github.com/repos/hajimehoshi/ebiten
|
closed
|
internal/atlas: non-square texture atlas
|
performance
|
This is feedback from @nadimkobeissi.
For a very long, thin image, internal texture atlases are not efficient.
Currently, an atlas.Image is square. The suggestion is to allow rectangular atlases. Edge lengths are always a power of 2.
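A sketch of the sizing rule being suggested (hypothetical helper, in Python for brevity): keep each edge a power of two, but let width and height grow independently so a long, thin image no longer forces a huge square texture.
```python
def next_pow2(n):
    # Smallest power of two >= n.
    p = 1
    while p < n:
        p *= 2
    return p

def atlas_size(img_w, img_h):
    # Rectangular atlas: each edge rounded up to a power of two on its own.
    return next_pow2(img_w), next_pow2(img_h)

print(atlas_size(2048, 16))  # (2048, 16) instead of a 2048x2048 square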
|
True
|
internal/atlas: non-square texture atlas - This is feedback from @nadimkobeissi.
For a very long, thin image, internal texture atlases are not efficient.
Currently, an atlas.Image is square. The suggestion is to allow rectangular atlases. Edge lengths are always a power of 2.
|
non_process
|
internal atlas non square texture atlas this is feedback from nadimkobeissi for a very long thin image internal texture atlases are not efficient currently an atlas image is square the suggestion is to allow rectangular atlases edge lengths are always a power of
| 0
|
46,327
| 5,795,984,448
|
IssuesEvent
|
2017-05-02 18:23:00
|
GoogleChrome/sw-helpers
|
https://api.github.com/repos/GoogleChrome/sw-helpers
|
closed
|
Silently failing sw-background-sync-queue tests
|
bug testing
|
**Library Affected**:
*sw-background-sync-queue*
While rewriting the build process, I was examining the tests for `sw-background-sync-queue` to make sure they continued to pass. I noticed that in the current `master` branch there are a number of failures that are logged to the JS console during test execution; they appear to be "silent" in that they don't cause any of the tests to fail. It's hard to copy the JS console text, but here's a screenshot from the current Chrome stable:
<img width="1364" alt="screen shot 2017-05-01 at 10 26 20 pm" src="https://cloud.githubusercontent.com/assets/1749548/25601910/8969cd24-2ebd-11e7-90ba-0ed5f3e582dd.png">
|
1.0
|
Silently failing sw-background-sync-queue tests - **Library Affected**:
*sw-background-sync-queue*
While rewriting the build process, I was examining the tests for `sw-background-sync-queue` to make sure they continued to pass. I noticed that in the current `master` branch there are a number of failures that are logged to the JS console during test execution; they appear to be "silent" in that they don't cause any of the tests to fail. It's hard to copy the JS console text, but here's a screenshot from the current Chrome stable:
<img width="1364" alt="screen shot 2017-05-01 at 10 26 20 pm" src="https://cloud.githubusercontent.com/assets/1749548/25601910/8969cd24-2ebd-11e7-90ba-0ed5f3e582dd.png">
|
non_process
|
silently failing sw background sync queue tests library affected sw background sync queue while rewriting the build process i was examining the tests for sw background sync queue to make sure they continued to pass i noticed that in the current master branch there are a number of failures that are logged to the js console during test execution they appear to be silent in that they don t cause any of the tests to fail it s hard to copy the js console text but here s a screenshot from the current chrome stable img width alt screen shot at pm src
| 0
|