Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
81,179 | 10,226,876,353 | IssuesEvent | 2019-08-16 19:04:20 | w3c/aria-practices | https://api.github.com/repos/w3c/aria-practices | closed | Review new Structural Roles section in APG 1.2 | Needs Review documentation guidance | A first draft of a new [Structural Roles section](https://rawgit.com/w3c/aria-practices/apg-1.2/aria-practices.html#structural_roles) developed for APG 1.2 as described in issue #700 is complete and ready for broad review. | 1.0 | Review new Structural Roles section in APG 1.2 - A first draft of a new [Structural Roles section](https://rawgit.com/w3c/aria-practices/apg-1.2/aria-practices.html#structural_roles) developed for APG 1.2 as described in issue #700 is complete and ready for broad review. | non_test | review new structural roles section in apg a first draft of a new developed for apg as described in issue is complete and ready for broad review | 0 |
45,604 | 24,130,338,393 | IssuesEvent | 2022-09-21 06:49:18 | mozilla-mobile/fenix | https://api.github.com/repos/mozilla-mobile/fenix | opened | (Network) cache should be periodically trimmed | performance | ## Overview
The below seems like it could be significantly trimmed down periodically without user intervention (pressing `Clear Cache` stripped Data/Cache to 96MB/0MB):
<img src="https://user-images.githubusercontent.com/13922417/191421373-c60fd393-690c-4557-a027-92a623bdf269.jpg" width="500" />
The number of entries in `about:cache` is significantly larger than on my desktop (below 2000 entries at a third of the size). Also, it might be worth noting that the "Storage in use" is significantly smaller than the Cache size above (even without all the cache under `Data`):

## Examples
Unfortunately I had to swap devices shortly after recording this info, but I managed to get some useful data.
There seemed to be many "expired" entries, such as over 4 months old:

Expired almost a year ago (it was also fetched after it expired; is this a UI bug?):

There were also a **LOT** of these (possibly related to #26420?):
<img src="https://user-images.githubusercontent.com/13922417/191429056-b8e2a437-266e-4b5f-b491-44354d629405.jpg" width="500" />
Also quite a few from `sync.services.mozilla.com`, though the ones I looked at were small in size (<200B).
There were many other items that were only fetched a single time and were several months to a year old which could also be reasonably pruned.
## Device information
* Android 8.0
* Fenix version: 106.0a1
| True | (Network) cache should be periodically trimmed - ## Overview
The below seems like it could be significantly trimmed down periodically without user intervention (pressing `Clear Cache` stripped Data/Cache to 96MB/0MB):
<img src="https://user-images.githubusercontent.com/13922417/191421373-c60fd393-690c-4557-a027-92a623bdf269.jpg" width="500" />
The number of entries in `about:cache` is significantly larger than on my desktop (below 2000 entries at a third of the size). Also, it might be worth noting that the "Storage in use" is significantly smaller than the Cache size above (even without all the cache under `Data`):

## Examples
Unfortunately I had to swap devices shortly after recording this info, but I managed to get some useful data.
There seemed to be many "expired" entries, such as over 4 months old:

Expired almost a year ago (it was also fetched after it expired; is this a UI bug?):

There were also a **LOT** of these (possibly related to #26420?):
<img src="https://user-images.githubusercontent.com/13922417/191429056-b8e2a437-266e-4b5f-b491-44354d629405.jpg" width="500" />
Also quite a few from `sync.services.mozilla.com`, though the ones I looked at were small in size (<200B).
There were many other items that were only fetched a single time and were several months to a year old which could also be reasonably pruned.
## Device information
* Android 8.0
* Fenix version: 106.0a1
| non_test | network cache should be periodically trimmed overview the below seems like it could be significantly trimmed down periodically without user intervention pressing clear cache stripped data cache to the number of entries in about cache is significantly larger than on my desktop below entries at a third of the size also it might be worth noting that the storage in use is significantly smaller than the cache size above even without all the cache under data examples unfortunately i had to swap devices shortly after recording this info but i managed to get some useful data there seemed to be many expired entries such as over months old almost a year expired also it was fetched after expired is this a ui bug there were also a lot of these possibly related to also quite a few from sync services mozilla com though the ones i looked at were small in size there were many other items that were only fetched a single time and were several months to a year old which could also be reasonably pruned device information android fenix version | 0 |
287,173 | 8,805,284,325 | IssuesEvent | 2018-12-26 18:39:26 | GoldenSoftwareLtd/gedemin | https://api.github.com/repos/GoldenSoftwareLtd/gedemin | closed | Add a Limit field to the assortment group. | Meat Priority-Medium Type-Enhancement | Originally reported on Google Code with ID 2189
```
If we add a raw material to a recipe as a substitute, and this raw material is already part of the recipe,
allow it to be added. If we add it not as a substitute but as a recipe item,
show a message that the raw material is already present.
```
Reported by `stasgm` on 2010-10-20 13:41:14
| 1.0 | Add a Limit field to the assortment group. - Originally reported on Google Code with ID 2189
```
If we add a raw material to a recipe as a substitute, and this raw material is already part of the recipe,
allow it to be added. If we add it not as a substitute but as a recipe item,
show a message that the raw material is already present.
```
Reported by `stasgm` on 2010-10-20 13:41:14
| non_test | add a limit field to the assortment group originally reported on google code with id if we add a raw material to a recipe as a substitute and this raw material is already part of the recipe allow it to be added if we add it not as a substitute but as a recipe item show a message that the raw material is already present reported by stasgm on | 0 |
158,226 | 12,406,650,469 | IssuesEvent | 2020-05-21 19:31:52 | apache/incubator-mxnet | https://api.github.com/repos/apache/incubator-mxnet | opened | gelu_test disabled and flaky upon enabling | Bug Disabled test v2.0 | ## Description
https://github.com/apache/incubator-mxnet/blob/0210ce2c136afaa0f57666e5e1c659cab353f5f3/tests/python/unittest/test_gluon.py#L1423-L1434
is actually a no-op by mistake. Upon enabling the test as follows:
```
gelu = mx.gluon.nn.GELU()
def gelu_test(x):
CUBE_CONSTANT = 0.044715
ROOT_TWO_OVER_PI = 0.7978845608028654
def g(x):
return ROOT_TWO_OVER_PI * (x + CUBE_CONSTANT * x * x * x)
def f(x):
return 1.0 + mx.nd.tanh(g(x))
def gelu(x):
return 0.5 * x * f(x)
return [gelu(x_i) for x_i in x]
for test_point, ref_point in zip(gelu_test(point_to_validate), gelu(point_to_validate)):
assert test_point == ref_point
```
the test fails frequently
```
[2020-05-21T19:13:03.252Z] for test_point, ref_point in zip(gelu_test(point_to_validate), gelu(point_to_validate)):
[2020-05-21T19:13:03.252Z] > assert test_point == ref_point
[2020-05-21T19:13:03.252Z] E assert \n[-0.04601725...ray 1 @cpu(0)> == \n[-0.04601722...ray 1 @cpu(0)>
[2020-05-21T19:13:03.252Z] E -\n
[2020-05-21T19:13:03.252Z] E -[-0.04601725]\n
[2020-05-21T19:13:03.252Z] E -<NDArray 1 @cpu(0)>
[2020-05-21T19:13:03.252Z] E +\n
[2020-05-21T19:13:03.252Z] E +[-0.04601722]\n
[2020-05-21T19:13:03.252Z] E +<NDArray 1 @cpu(0)>
[2020-05-21T19:13:03.252Z] E Full diff:
[2020-05-21T19:13:03.252Z] E
[2020-05-21T19:13:03.252Z] E - [-0.04601725]
[2020-05-21T19:13:03.252Z] E ? ^
[2020-05-21T19:13:03.252Z] E + [-0.04601722]
[2020-05-21T19:13:03.252Z] E ? ^
[2020-05-21T19:13:03.252Z] E <NDArray 1 @cpu(0)>
```
http://jenkins.mxnet-ci.amazon-ml.com/blue/rest/organizations/jenkins/pipelines/mxnet-validation/pipelines/unix-cpu/branches/PR-18376/runs/5/nodes/363/steps/738/log/?start=0
| 1.0 | gelu_test disabled and flaky upon enabling - ## Description
https://github.com/apache/incubator-mxnet/blob/0210ce2c136afaa0f57666e5e1c659cab353f5f3/tests/python/unittest/test_gluon.py#L1423-L1434
is actually a no-op by mistake. Upon enabling the test as follows:
```
gelu = mx.gluon.nn.GELU()
def gelu_test(x):
CUBE_CONSTANT = 0.044715
ROOT_TWO_OVER_PI = 0.7978845608028654
def g(x):
return ROOT_TWO_OVER_PI * (x + CUBE_CONSTANT * x * x * x)
def f(x):
return 1.0 + mx.nd.tanh(g(x))
def gelu(x):
return 0.5 * x * f(x)
return [gelu(x_i) for x_i in x]
for test_point, ref_point in zip(gelu_test(point_to_validate), gelu(point_to_validate)):
assert test_point == ref_point
```
the test fails frequently
```
[2020-05-21T19:13:03.252Z] for test_point, ref_point in zip(gelu_test(point_to_validate), gelu(point_to_validate)):
[2020-05-21T19:13:03.252Z] > assert test_point == ref_point
[2020-05-21T19:13:03.252Z] E assert \n[-0.04601725...ray 1 @cpu(0)> == \n[-0.04601722...ray 1 @cpu(0)>
[2020-05-21T19:13:03.252Z] E -\n
[2020-05-21T19:13:03.252Z] E -[-0.04601725]\n
[2020-05-21T19:13:03.252Z] E -<NDArray 1 @cpu(0)>
[2020-05-21T19:13:03.252Z] E +\n
[2020-05-21T19:13:03.252Z] E +[-0.04601722]\n
[2020-05-21T19:13:03.252Z] E +<NDArray 1 @cpu(0)>
[2020-05-21T19:13:03.252Z] E Full diff:
[2020-05-21T19:13:03.252Z] E
[2020-05-21T19:13:03.252Z] E - [-0.04601725]
[2020-05-21T19:13:03.252Z] E ? ^
[2020-05-21T19:13:03.252Z] E + [-0.04601722]
[2020-05-21T19:13:03.252Z] E ? ^
[2020-05-21T19:13:03.252Z] E <NDArray 1 @cpu(0)>
```
http://jenkins.mxnet-ci.amazon-ml.com/blue/rest/organizations/jenkins/pipelines/mxnet-validation/pipelines/unix-cpu/branches/PR-18376/runs/5/nodes/363/steps/738/log/?start=0
| test | gelu test disabled and flaky upon enabling description is actually a no op by mistake upon enabling the test as follows gelu mx gluon nn gelu def gelu test x cube constant root two over pi def g x return root two over pi x cube constant x x x def f x return mx nd tanh g x def gelu x return x f x return for test point ref point in zip gelu test point to validate gelu point to validate assert test point ref point the tests fails frequently for test point ref point in zip gelu test point to validate gelu point to validate assert test point ref point e assert n ray cpu n ray cpu e n e n e e n e n e e full diff e e e e e e | 1 |
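The failure in the gelu_test row above comes from comparing floating-point results with `==`; the two logged values (-0.04601725 vs. -0.04601722) differ only at the limit of float32 precision. A minimal pure-Python sketch of a tolerance-based check that would accept them (no MXNet; `assert_close` and its tolerance values are hypothetical, the GELU constants are taken from the issue):

```python
import math

# Constants from the GELU tanh approximation quoted in the issue.
CUBE_CONSTANT = 0.044715
ROOT_TWO_OVER_PI = 0.7978845608028654

def gelu_tanh(x):
    """Reference GELU (tanh approximation) in pure Python."""
    inner = ROOT_TWO_OVER_PI * (x + CUBE_CONSTANT * x ** 3)
    return 0.5 * x * (1.0 + math.tanh(inner))

def assert_close(actual, expected, rel_tol=1e-5, abs_tol=1e-6):
    # Compare with a tolerance instead of `==`; exact equality between a
    # float64 reference and a float32 kernel result is what makes the
    # original test flaky.
    assert math.isclose(actual, expected, rel_tol=rel_tol, abs_tol=abs_tol), (
        f"{actual} != {expected} within rel_tol={rel_tol}, abs_tol={abs_tol}"
    )

# The two values from the CI log differ only in the last printed digit,
# so a tolerance-based check accepts them:
assert_close(-0.04601725, -0.04601722)
```

In a NumPy-based test suite the same effect is usually achieved with `numpy.testing.assert_allclose` rather than element-wise `==`.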
175,345 | 21,300,986,660 | IssuesEvent | 2022-04-15 03:03:21 | mihorsky/intentionally-buggy-code | https://api.github.com/repos/mihorsky/intentionally-buggy-code | opened | CVE-2021-43138 (High) detected in async-2.6.3.tgz | security vulnerability | ## CVE-2021-43138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>async-2.6.3.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.6.3.tgz">https://registry.npmjs.org/async/-/async-2.6.3.tgz</a></p>
<p>Path to dependency file: /buggy-webpack-app/package.json</p>
<p>Path to vulnerable library: /buggy-webpack-app/node_modules/async/package.json,/buggy-react-app/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- webpack-3.12.0.tgz (Root Library)
- :x: **async-2.6.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2), which could let a malicious user obtain privileges via the mapValues() method.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution (async): 3.2.2</p>
<p>Direct dependency fix Resolution (webpack): 4.0.0-beta.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-43138 (High) detected in async-2.6.3.tgz - ## CVE-2021-43138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>async-2.6.3.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.6.3.tgz">https://registry.npmjs.org/async/-/async-2.6.3.tgz</a></p>
<p>Path to dependency file: /buggy-webpack-app/package.json</p>
<p>Path to vulnerable library: /buggy-webpack-app/node_modules/async/package.json,/buggy-react-app/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- webpack-3.12.0.tgz (Root Library)
- :x: **async-2.6.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2), which could let a malicious user obtain privileges via the mapValues() method.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution (async): 3.2.2</p>
<p>Direct dependency fix Resolution (webpack): 4.0.0-beta.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in async tgz cve high severity vulnerability vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file buggy webpack app package json path to vulnerable library buggy webpack app node modules async package json buggy react app node modules async package json dependency hierarchy webpack tgz root library x async tgz vulnerable library found in base branch master vulnerability details a vulnerability exists in async through fixed in which could let a malicious user obtain privileges via the mapvalues method publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution async direct dependency fix resolution webpack beta step up your open source security game with whitesource | 0 |
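The suggested fix above requires a webpack major-version jump (4.0.0-beta.3). For projects that cannot take that yet, the transitive `async` version can be forced directly; a sketch using npm's `overrides` field (requires npm 8.3+; the field itself is real, the surrounding manifest is hypothetical). The 2.x backport (async 2.6.4) also addresses CVE-2021-43138 and avoids the API break of async 3:

```json
{
  "name": "buggy-webpack-app",
  "overrides": {
    "async": "^2.6.4"
  }
}
```

After editing `package.json`, re-running `npm install` applies the override to the dependency tree.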
16,930 | 3,576,200,032 | IssuesEvent | 2016-01-27 18:38:17 | MajkiIT/polish-ads-filter | https://api.github.com/repos/MajkiIT/polish-ads-filter | closed | www.groj.pl | cookies reguły gotowe/testowanie | cookies
surely there will be some ads
This site uses cookies in accordance with your browser settings.
More information about the purpose of their use and how to change cookie settings can be found in our Privacy Policy. | 1.0 | www.groj.pl - cookies
surely there will be some ads
This site uses cookies in accordance with your browser settings.
More information about the purpose of their use and how to change cookie settings can be found in our Privacy Policy. | test | cookies surely there will be some ads this site uses cookies in accordance with your browser settings more information about the purpose of their use and how to change cookie settings can be found in our privacy policy | 1 |
623,416 | 19,667,188,533 | IssuesEvent | 2022-01-11 00:31:38 | NuGet/Home | https://api.github.com/repos/NuGet/Home | closed | [Bug]: Unable to install a different version in an existing PackageReference project. | Priority:1 Product:VS.Client Type:Bug Resolution:External Functionality:VisualStudioUI | ### NuGet Product Used
Visual Studio Package Management UI
### Product Version
Version 17.1.0 Preview 2.0 [31930.463.main]
### Worked before?
Yes
### Impact
I'm unable to use this version
### Repro Steps & Context
Root cause likely not on NuGet side, [1444702](https://devdiv.visualstudio.com/DevDiv/_workitems/edit/1444702)
1. Create a new console project (dotnet 6.0).
2. Open Project PM UI and install latest version of Newtonsoft.Json
3. Change to Installed tab
4. Change the version of Newtonsoft.Json and install
5. Close VS using the X on the upper right corner
6. Save the project changes
7. Open the project again
8. Open Project PM UI and install a different version of Newtonsoft.Json
To work around this you can install a new package and this will let you change the version of the packages again.
### Verbose Logs
```shell
Restoring packages for C:\Users\mruizmares\source\repos\ConsoleApp6\ConsoleApp6\ConsoleApp6.csproj...
Installing NuGet package Newtonsoft.Json 10.0.1.
System.NotSupportedException: Specified method is not supported.
at Microsoft.VisualStudio.ProjectSystem.ProjectSerialization.CachedProject.GetItemProvenance(String itemToMatch, String itemType, EvaluationContext evaluationContext)
at Microsoft.VisualStudio.ProjectSystem.Items.MSBuildGlobUtilities.TryGetLatestExactMetadataElement(ProjectItem item, ProjectRootElement containingProjectXml)
at Microsoft.VisualStudio.ProjectSystem.Properties.ItemProperties.ProjectMetadataElementCache.<GetValueAsync>d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Microsoft.VisualStudio.ProjectSystem.Properties.ItemProperties.<GetProjectItemAndMetaElementAsync>d__25.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.VisualStudio.ProjectSystem.Properties.ItemProperties.<SetPropertyValueAsync>d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.VisualStudio.ProjectSystem.Properties.ItemPropertiesWithCatalog.<SetPropertyValueAsync>d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.VisualStudio.ProjectSystem.Properties.ProjectPropertiesBase.<>c__DisplayClass31_0.<<Microsoft-VisualStudio-ProjectSystem-Properties-IProjectProperties-SetPropertyValueAsync>b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.VisualStudio.Threading.JoinableTask.<JoinAsync>d__76.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.VisualStudio.ProjectSystem.ProjectLockService.<ExecuteWithinLockAsync>d__129.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Microsoft.VisualStudio.ProjectSystem.ProjectLockService.<ExecuteWithinLockAsync>d__129.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NuGet.PackageManagement.VisualStudio.CpsPackageReferenceProject.<InstallPackageAsync>d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at NuGet.PackageManagement.NuGetPackageManager.<ExecuteBuildIntegratedProjectActionsAsync>d__82.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NuGet.PackageManagement.NuGetPackageManager.<ExecuteNuGetProjectActionsAsync>d__79.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NuGet.PackageManagement.NuGetPackageManager.<ExecuteNuGetProjectActionsAsync>d__78.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at NuGet.PackageManagement.NuGetPackageManager.<ExecuteNuGetProjectActionsAsync>d__77.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NuGet.PackageManagement.VisualStudio.NuGetProjectManagerService.<>c__DisplayClass18_0.<<ExecuteActionsAsync>b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NuGet.PackageManagement.VisualStudio.NuGetProjectManagerService.<CatchAndRethrowExceptionAsync>d__28.MoveNext()
Time Elapsed: 00:00:01.1332797
========== Finished ==========
```
| 1.0 | [Bug]: Unable to install a different version in an existing PackageReference project. - ### NuGet Product Used
Visual Studio Package Management UI
### Product Version
Version 17.1.0 Preview 2.0 [31930.463.main]
### Worked before?
Yes
### Impact
I'm unable to use this version
### Repro Steps & Context
Root cause likely not on NuGet side, [1444702](https://devdiv.visualstudio.com/DevDiv/_workitems/edit/1444702)
1. Create a new console project (dotnet 6.0).
2. Open Project PM UI and install latest version of Newtonsoft.Json
3. Change to Installed tab
4. Change the version of Newtonsoft.Json and install
5. Close VS using the X on the upper right corner
6. Save the project changes
7. Open the project again
8. Open Project PM UI and install a different version of Newtonsoft.Json
To work around this you can install a new package and this will let you change the version of the packages again.
### Verbose Logs
```shell
Restoring packages for C:\Users\mruizmares\source\repos\ConsoleApp6\ConsoleApp6\ConsoleApp6.csproj...
Installing NuGet package Newtonsoft.Json 10.0.1.
System.NotSupportedException: Specified method is not supported.
at Microsoft.VisualStudio.ProjectSystem.ProjectSerialization.CachedProject.GetItemProvenance(String itemToMatch, String itemType, EvaluationContext evaluationContext)
at Microsoft.VisualStudio.ProjectSystem.Items.MSBuildGlobUtilities.TryGetLatestExactMetadataElement(ProjectItem item, ProjectRootElement containingProjectXml)
at Microsoft.VisualStudio.ProjectSystem.Properties.ItemProperties.ProjectMetadataElementCache.<GetValueAsync>d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Microsoft.VisualStudio.ProjectSystem.Properties.ItemProperties.<GetProjectItemAndMetaElementAsync>d__25.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.VisualStudio.ProjectSystem.Properties.ItemProperties.<SetPropertyValueAsync>d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.VisualStudio.ProjectSystem.Properties.ItemPropertiesWithCatalog.<SetPropertyValueAsync>d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.VisualStudio.ProjectSystem.Properties.ProjectPropertiesBase.<>c__DisplayClass31_0.<<Microsoft-VisualStudio-ProjectSystem-Properties-IProjectProperties-SetPropertyValueAsync>b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.VisualStudio.Threading.JoinableTask.<JoinAsync>d__76.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.VisualStudio.ProjectSystem.ProjectLockService.<ExecuteWithinLockAsync>d__129.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Microsoft.VisualStudio.ProjectSystem.ProjectLockService.<ExecuteWithinLockAsync>d__129.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NuGet.PackageManagement.VisualStudio.CpsPackageReferenceProject.<InstallPackageAsync>d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at NuGet.PackageManagement.NuGetPackageManager.<ExecuteBuildIntegratedProjectActionsAsync>d__82.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NuGet.PackageManagement.NuGetPackageManager.<ExecuteNuGetProjectActionsAsync>d__79.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NuGet.PackageManagement.NuGetPackageManager.<ExecuteNuGetProjectActionsAsync>d__78.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at NuGet.PackageManagement.NuGetPackageManager.<ExecuteNuGetProjectActionsAsync>d__77.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NuGet.PackageManagement.VisualStudio.NuGetProjectManagerService.<>c__DisplayClass18_0.<<ExecuteActionsAsync>b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NuGet.PackageManagement.VisualStudio.NuGetProjectManagerService.<CatchAndRethrowExceptionAsync>d__28.MoveNext()
Time Elapsed: 00:00:01.1332797
========== Finished ==========
```
| non_test | unable to install a different version in an existing packagereference project nuget product used visual studio package management ui product version version preview worked before yes impact i m unable to use this version repro steps context root cause likely not on nuget side create a new console project dotnet open project pm ui and install latest version of newtonsoft json change to installed tab change the version of newtonsoft json and install close vs using the x on the upper right corner save the project changes open the project again open project pm ui and install a different version of newtonsoft json to work around this you can install a new package and this will let you change the version of the packages again verbose logs shell restoring packages for c users mruizmares source repos csproj installing nuget package newtonsoft json system notsupportedexception specified method is not supported at microsoft visualstudio projectsystem projectserialization cachedproject getitemprovenance string itemtomatch string itemtype evaluationcontext evaluationcontext at microsoft visualstudio projectsystem items msbuildglobutilities trygetlatestexactmetadataelement projectitem item projectrootelement containingprojectxml at microsoft visualstudio projectsystem properties itemproperties projectmetadataelementcache d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at microsoft visualstudio projectsystem properties itemproperties d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft visualstudio projectsystem properties itemproperties d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo 
throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft visualstudio projectsystem properties itempropertieswithcatalog d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft visualstudio projectsystem properties projectpropertiesbase c b d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft visualstudio threading joinabletask d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft visualstudio projectsystem projectlockservice d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at microsoft visualstudio projectsystem projectlockservice d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at nuget packagemanagement visualstudio cpspackagereferenceproject d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at nuget packagemanagement nugetpackagemanager d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter 
handlenonsuccessanddebuggernotification task task at nuget packagemanagement nugetpackagemanager d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at nuget packagemanagement nugetpackagemanager d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at nuget packagemanagement nugetpackagemanager d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at nuget packagemanagement visualstudio nugetprojectmanagerservice c b d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at nuget packagemanagement visualstudio nugetprojectmanagerservice d movenext time elapsed finished | 0 |
125,597 | 10,347,642,902 | IssuesEvent | 2019-09-04 17:53:23 | broadinstitute/gatk | https://api.github.com/repos/broadinstitute/gatk | closed | Cromwell v33 released. Supports intelligent file localization, so we should merge forked WDLs | Mutect tests wdl |
## Feature request
### Tool(s) or class(es) involved
M2, at least
### Description
With Cromwell v33, we should be able to merge the mutect2.wdl and mutect_nio.wdl into one WDL. There will need to be WDL modifications, for sure.
https://github.com/broadinstitute/cromwell/releases
| 1.0 | Cromwell v33 released. Supports intelligent file localization, so we should merge forked WDLs -
## Feature request
### Tool(s) or class(es) involved
M2, at least
### Description
With Cromwell v33, we should be able to merge the mutect2.wdl and mutect_nio.wdl into one WDL. There will need to be WDL modifications, for sure.
https://github.com/broadinstitute/cromwell/releases
| test | cromwell released supports intelligent file localization so we should merge forked wdls feature request tool s or class es involved at least description with cromwell we should be able to merge the wdl and mutect nio wdl into one wdl there will need to be wdl modifications for sure | 1 |
33,891 | 7,293,862,348 | IssuesEvent | 2018-02-25 18:19:10 | otros-systems/otroslogviewer | https://api.github.com/repos/otros-systems/otroslogviewer | closed | Add option to tail to view as plain text as well | Priority-Medium Type-Defect | ```
Some time , just tailing will give a clean picture of actions in the
application modules
```
Original issue reported on code.google.com by `visuma...@gmail.com` on 12 Jun 2012 at 6:28
| 1.0 | Add option to tail to view as plain text as well - ```
Some time , just tailing will give a clean picture of actions in the
application modules
```
Original issue reported on code.google.com by `visuma...@gmail.com` on 12 Jun 2012 at 6:28
| non_test | add option to tail to view as plain text as well some time just tailing will give a clean picture of actions in the applicaiton modules original issue reported on code google com by visuma gmail com on jun at | 0 |
28,387 | 5,247,679,023 | IssuesEvent | 2017-02-01 13:49:11 | bridgedotnet/Bridge | https://api.github.com/repos/bridgedotnet/Bridge | closed | Boxed Doubles gives incorrect type names | defect | ``` csharp
object v = 1.0;
Global.Alert(v.GetType().FullName);
v = 1f;
Global.Alert(v.GetType().FullName);
```
[Live Bridge](http://live.bridge.net/#2c6e355eedc03a78b10f6751cf2c8fcb)
They both say System.Int when they should say System.Double and System.Single.
| 1.0 | Boxed Doubles gives incorrect type names - ``` csharp
object v = 1.0;
Global.Alert(v.GetType().FullName);
v = 1f;
Global.Alert(v.GetType().FullName);
```
[Live Bridge](http://live.bridge.net/#2c6e355eedc03a78b10f6751cf2c8fcb)
They both say System.Int when they should say System.Double and System.Single.
| non_test | boxed doubles gives incorrect type names csharp object v global alert v gettype fullname v global alert v gettype fullname they both say system int when they should say system double and system single | 0 |
165,282 | 20,574,441,435 | IssuesEvent | 2022-03-04 01:57:51 | maddyCode23/linux-4.1.15 | https://api.github.com/repos/maddyCode23/linux-4.1.15 | opened | CVE-2022-25258 (Medium) detected in linux-stable-rtv4.1.33 | security vulnerability | ## CVE-2022-25258 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in drivers/usb/gadget/composite.c in the Linux kernel before 5.16.10. The USB Gadget subsystem lacks certain validation of interface OS descriptor requests (ones with a large array index and ones associated with NULL function pointer retrieval). Memory corruption might occur.
<p>Publish Date: 2022-02-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25258>CVE-2022-25258</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25258">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25258</a></p>
<p>Release Date: 2022-02-16</p>
<p>Fix Resolution: v5.17-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-25258 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2022-25258 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in drivers/usb/gadget/composite.c in the Linux kernel before 5.16.10. The USB Gadget subsystem lacks certain validation of interface OS descriptor requests (ones with a large array index and ones associated with NULL function pointer retrieval). Memory corruption might occur.
<p>Publish Date: 2022-02-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25258>CVE-2022-25258</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25258">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25258</a></p>
<p>Release Date: 2022-02-16</p>
<p>Fix Resolution: v5.17-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in base branch master vulnerable source files vulnerability details an issue was discovered in drivers usb gadget composite c in the linux kernel before the usb gadget subsystem lacks certain validation of interface os descriptor requests ones with a large array index and ones associated with null function pointer retrieval memory corruption might occur publish date url a href cvss score details base score metrics exploitability metrics attack vector physical attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
166,128 | 12,891,440,064 | IssuesEvent | 2020-07-13 17:44:19 | astropy/astropy | https://api.github.com/repos/astropy/astropy | opened | TST: pyinstaller cron job failing with deprecated test helpers | Bug testing | <!-- This comments are hidden when you submit the issue,
so you do not need to remove them! -->
<!-- Please be sure to check out our contributing guidelines,
https://github.com/astropy/astropy/blob/master/CONTRIBUTING.md .
Please be sure to check out our code of conduct,
https://github.com/astropy/astropy/blob/master/CODE_OF_CONDUCT.md . -->
<!-- Please have a search on our GitHub repository to see if a similar
issue has already been posted.
If a similar issue is closed, have a quick look to see if you are satisfied
by the resolution.
If not please go ahead and open an issue! -->
<!-- Please check that the development version still produces the same bug.
You can install development version with
pip install git+https://github.com/astropy/astropy
command. -->
### Description
<!-- Provide a general description of the bug. -->
Why is pyinstaller trying to use/collect deprecated test helper functions?
### Expected behavior
<!-- What did you expect to happen. -->
Tests run successfully.
### Actual behavior
<!-- What actually happened. -->
<!-- Was the output confusing or poorly described? -->
Example log: https://travis-ci.org/github/astropy/astropy/jobs/707690799
```
____ ERROR collecting .pyinstaller/astropy_tests/tests/disable_internet.py _____
astropy_tests/tests/disable_internet.py:21: in <module>
warn("The ``disable_internet`` module is no longer provided by astropy. It "
E astropy.utils.exceptions.AstropyDeprecationWarning: The ``disable_internet`` module is no longer provided by astropy. It is now available as ``pytest_remotedata.disable_internet``. However, developers are encouraged to avoid using this module directly. See <https://docs.astropy.org/en/latest/whatsnew/3.0.html#pytest-plugins> for more information.
_____ ERROR collecting .pyinstaller/astropy_tests/tests/plugins/display.py _____
astropy_tests/tests/plugins/display.py:16: in <module>
warnings.warn('The astropy.tests.plugins.display plugin has been deprecated. '
E astropy.utils.exceptions.AstropyDeprecationWarning: The astropy.tests.plugins.display plugin has been deprecated. See the pytest-astropy-header documentation for information on migrating to using pytest-astropy-header to customize the pytest header.
=========================== short test summary info ============================
ERROR astropy_tests/tests/disable_internet.py - astropy.utils.exceptions.Astr...
ERROR astropy_tests/tests/plugins/display.py - astropy.utils.exceptions.Astro...
!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!!
``` | 1.0 | TST: pyinstaller cron job failing with deprecated test helpers - <!-- This comments are hidden when you submit the issue,
so you do not need to remove them! -->
<!-- Please be sure to check out our contributing guidelines,
https://github.com/astropy/astropy/blob/master/CONTRIBUTING.md .
Please be sure to check out our code of conduct,
https://github.com/astropy/astropy/blob/master/CODE_OF_CONDUCT.md . -->
<!-- Please have a search on our GitHub repository to see if a similar
issue has already been posted.
If a similar issue is closed, have a quick look to see if you are satisfied
by the resolution.
If not please go ahead and open an issue! -->
<!-- Please check that the development version still produces the same bug.
You can install development version with
pip install git+https://github.com/astropy/astropy
command. -->
### Description
<!-- Provide a general description of the bug. -->
Why is pyinstaller trying to use/collect deprecated test helper functions?
### Expected behavior
<!-- What did you expect to happen. -->
Tests run successfully.
### Actual behavior
<!-- What actually happened. -->
<!-- Was the output confusing or poorly described? -->
Example log: https://travis-ci.org/github/astropy/astropy/jobs/707690799
```
____ ERROR collecting .pyinstaller/astropy_tests/tests/disable_internet.py _____
astropy_tests/tests/disable_internet.py:21: in <module>
warn("The ``disable_internet`` module is no longer provided by astropy. It "
E astropy.utils.exceptions.AstropyDeprecationWarning: The ``disable_internet`` module is no longer provided by astropy. It is now available as ``pytest_remotedata.disable_internet``. However, developers are encouraged to avoid using this module directly. See <https://docs.astropy.org/en/latest/whatsnew/3.0.html#pytest-plugins> for more information.
_____ ERROR collecting .pyinstaller/astropy_tests/tests/plugins/display.py _____
astropy_tests/tests/plugins/display.py:16: in <module>
warnings.warn('The astropy.tests.plugins.display plugin has been deprecated. '
E astropy.utils.exceptions.AstropyDeprecationWarning: The astropy.tests.plugins.display plugin has been deprecated. See the pytest-astropy-header documentation for information on migrating to using pytest-astropy-header to customize the pytest header.
=========================== short test summary info ============================
ERROR astropy_tests/tests/disable_internet.py - astropy.utils.exceptions.Astr...
ERROR astropy_tests/tests/plugins/display.py - astropy.utils.exceptions.Astro...
!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!!
``` | test | tst pyinstaller cron job failing with deprecated test helpers this comments are hidden when you submit the issue so you do not need to remove them please be sure to check out our contributing guidelines please be sure to check out our code of conduct please have a search on our github repository to see if a similar issue has already been posted if a similar issue is closed have a quick look to see if you are satisfied by the resolution if not please go ahead and open an issue please check that the development version still produces the same bug you can install development version with pip install git command description why is pyinstaller trying to use collect deprecated test helper functions expected behavior tests run successfully actual behavior example log error collecting pyinstaller astropy tests tests disable internet py astropy tests tests disable internet py in warn the disable internet module is no longer provided by astropy it e astropy utils exceptions astropydeprecationwarning the disable internet module is no longer provided by astropy it is now available as pytest remotedata disable internet however developers are encouraged to avoid using this module directly see for more information error collecting pyinstaller astropy tests tests plugins display py astropy tests tests plugins display py in warnings warn the astropy tests plugins display plugin has been deprecated e astropy utils exceptions astropydeprecationwarning the astropy tests plugins display plugin has been deprecated see the pytest astropy header documentation for information on migrating to using pytest astropy header to customize the pytest header short test summary info error astropy tests tests disable internet py astropy utils exceptions astr error astropy tests tests plugins display py astropy utils exceptions astro interrupted errors during collection | 1 |
729,725 | 25,140,555,184 | IssuesEvent | 2022-11-09 22:40:50 | vexxhost/magnum-cluster-api | https://api.github.com/repos/vexxhost/magnum-cluster-api | closed | Support `container_infra_prefix` | priority: critical | At the moment, all of the images are being pulled directly from the internet. This can be a problem in air-gapped environments or places where internet might not be reliable.
We've got to add `container_infra_prefix` in order to be able to use images from a local registry, and have a very clean script on how to load up said custom registry with all those images. | 1.0 | Support `container_infra_prefix` - At the moment, all of the images are being pulled directly from the internet. This can be a problem in air-gapped environments or places where internet might not be reliable.
We've got to add `container_infra_prefix` in order to be able to use images from a local registry, and have a very clean script on how to load up said custom registry with all those images. | non_test | support container infra prefix at the moment all of the images are being pulled directly from the internet this can be a problem in air gapped environments or places where internet might not be reliable we ve got to add container infra prefix in order to be able to use images from a local registry and have a very clean script on how to load up said custom registry with all those images | 0 |
2,179 | 5,028,582,262 | IssuesEvent | 2016-12-15 18:39:25 | Sage-Bionetworks/Genie | https://api.github.com/repos/Sage-Bionetworks/Genie | opened | remove/modify some fields from clinical release file, but not internal database | clinical data processing | Remove the following fields:
1. BIRTH_YEAR
2. SECONDARY_RACE
3. TERTIARY_RACE
4. ONCOTREE_PRIMARY_NODE
5. ONCOTREE_SECONDARY_NODE
Modification of AGE_AT_SEQ_REPORT from days to: FLOOR ([AGE_AT_SEQ_REPORT]/365.25)
| 1.0 | remove/modify some fields from clinical release file, but not internal database - Remove the following fields:
1. BIRTH_YEAR
2. SECONDARY_RACE
3. TERTIARY_RACE
4. ONCOTREE_PRIMARY_NODE
5. ONCOTREE_SECONDARY_NODE
Modification of AGE_AT_SEQ_REPORT from days to: FLOOR ([AGE_AT_SEQ_REPORT]/365.25)
| non_test | remove modify some fields from clinical release file but not internal database remove the following fields birth year secondary race tertiary race oncotree primary node oncotree secondary node modification of age at seq report from days to floor | 0 |
41,883 | 5,401,056,253 | IssuesEvent | 2017-02-27 23:48:43 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | Intermittent "Failed to create server socket" when running tests on Travis | dev: tests | I'm guessing what triggered this is our random port selection when running the tests with coverage.
```dart
Shell: Could not start Observatory HTTP server:
Shell: SocketException: Failed to create server socket (OS Error: Address already in use, errno = 98), address = 127.0.0.1, port = 5432
Shell: #0 _NativeSocket.bind.<anonymous closure> (dart:io-patch/socket_patch.dart:524)
Shell: #1 _RootZone.runUnary (dart:async/zone.dart:1404)
Shell: #2 _FutureListener.handleValue (dart:async/future_impl.dart:131)
Shell: #3 _Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:637)
Shell: #4 _Future._propagateToListeners (dart:async/future_impl.dart:667)
Shell: #5 _Future._completeWithValue (dart:async/future_impl.dart:477)
Shell: #6 _Future._asyncComplete.<anonymous closure> (dart:async/future_impl.dart:528)
Shell: #7 _microtaskLoop (dart:async/schedule_microtask.dart:41)
Shell: #8 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50)
Shell: #9 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:96)
Shell: #10 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:149)
```
Full log: [log.txt](https://github.com/flutter/flutter/files/688631/log.txt)
| 1.0 | Intermittent "Failed to create server socket" when running tests on Travis - I'm guessing what triggered this is our random port selection when running the tests with coverage.
```dart
Shell: Could not start Observatory HTTP server:
Shell: SocketException: Failed to create server socket (OS Error: Address already in use, errno = 98), address = 127.0.0.1, port = 5432
Shell: #0 _NativeSocket.bind.<anonymous closure> (dart:io-patch/socket_patch.dart:524)
Shell: #1 _RootZone.runUnary (dart:async/zone.dart:1404)
Shell: #2 _FutureListener.handleValue (dart:async/future_impl.dart:131)
Shell: #3 _Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:637)
Shell: #4 _Future._propagateToListeners (dart:async/future_impl.dart:667)
Shell: #5 _Future._completeWithValue (dart:async/future_impl.dart:477)
Shell: #6 _Future._asyncComplete.<anonymous closure> (dart:async/future_impl.dart:528)
Shell: #7 _microtaskLoop (dart:async/schedule_microtask.dart:41)
Shell: #8 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50)
Shell: #9 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:96)
Shell: #10 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:149)
```
Full log: [log.txt](https://github.com/flutter/flutter/files/688631/log.txt)
| test | intermittent failed to create server socket when running tests on travis i m guessing what triggered this is our random port selection when running the tests with coverage dart shell could not start observatory http server shell socketexception failed to create server socket os error address already in use errno address port shell nativesocket bind dart io patch socket patch dart shell rootzone rununary dart async zone dart shell futurelistener handlevalue dart async future impl dart shell future propagatetolisteners handlevaluecallback dart async future impl dart shell future propagatetolisteners dart async future impl dart shell future completewithvalue dart async future impl dart shell future asynccomplete dart async future impl dart shell microtaskloop dart async schedule microtask dart shell startmicrotaskloop dart async schedule microtask dart shell runpendingimmediatecallback dart isolate patch isolate patch dart shell rawreceiveportimpl handlemessage dart isolate patch isolate patch dart full log | 1 |
16,699 | 2,935,037,397 | IssuesEvent | 2015-06-30 12:31:13 | firemodels/fds-smv | https://api.github.com/repos/firemodels/fds-smv | closed | Simulation Error - [forrt1: severe (157): Program Exception - access violation] | Priority-Medium Type-Defect | ```
FDS Version:6.1.2
SVN Revision Number:20564
Compile Date:21/01/2015
Smokeview Version/Revision:6.1.12
Operating System: windows 8
When evac commands are added to the input file containing radiative heat flux gas device,
simulation error occurs. See the simple test file attached. Input file runs fine when
radiative heat flux gas devices are deleted.
```
Original issue reported on code.google.com by `daniel.pau89` on 2015-01-21 00:06:36
<hr>
* *Attachment: [radiative hf.txt](https://storage.googleapis.com/google-code-attachments/fds-smv/issue-2330/comment-0/radiative hf.txt)* | 1.0 | Simulation Error - [forrt1: severe (157): Program Exception - access violation] - ```
FDS Version:6.1.2
SVN Revision Number:20564
Compile Date:21/01/2015
Smokeview Version/Revision:6.1.12
Operating System: windows 8
When evac commands are added to the input file containing radiative heat flux gas device,
simulation error occurs. See the simple test file attached. Input file runs fine when
radiative heat flux gas devices are deleted.
```
Original issue reported on code.google.com by `daniel.pau89` on 2015-01-21 00:06:36
<hr>
* *Attachment: [radiative hf.txt](https://storage.googleapis.com/google-code-attachments/fds-smv/issue-2330/comment-0/radiative hf.txt)* | non_test | simulation error fds version svn revision number compile date smokeview version revision operating system windows when evac commands are added to the input file containing radiative heat flux gas device simulation error occurs see the simple test file attached input file runs fine when radiative heat flux gas devices are deleted original issue reported on code google com by daniel on attachment hf txt | 0 |
257,635 | 22,197,839,312 | IssuesEvent | 2022-06-07 08:34:13 | microsoft/code-with-engineering-playbook | https://api.github.com/repos/microsoft/code-with-engineering-playbook | closed | Automated testing workshop | testing workshop | **Is your feature request related to a problem? Please describe.**
The dev crew's testing needs vary depending on their engagements. But the challenge here is to be aware of the test types applicable to one's engagement and how to upskill on those test types to deliver the right tests for the engagement.
**Describe the solution you'd like**
The group discussed creating a workshop to tackle this challenge. This task tracks the workshop proposal as a reference for other tasks.
One pager [Automated Testing EF workshop proposal.docx](https://github.com/microsoft/code-with-engineering-playbook/files/7542770/Automated.Testing.EF.workshop.proposal.docx)
| 1.0 | Automated testing workshop - **Is your feature request related to a problem? Please describe.**
The dev crew's testing needs vary depending on their engagements. But the challenge here is to be aware of the test types applicable to one's engagement and how to upskill on those test types to deliver the right tests for the engagement.
**Describe the solution you'd like**
The group discussed creating a workshop to tackle this challenge. This task tracks the workshop proposal as a reference for other tasks.
One pager [Automated Testing EF workshop proposal.docx](https://github.com/microsoft/code-with-engineering-playbook/files/7542770/Automated.Testing.EF.workshop.proposal.docx)
| test | automated testing workshop is your feature request related to a problem please describe the dev crew s testing needs vary depending on their engagements but the challenge here is to be aware of the test types applicable to one s engagement and how to upskill on those test types to deliver the right tests for the engagement describe the solution you d like the group discussed creating a workshop to tackle this challenge this task tracks the workshop proposal as a reference for other tasks one pager | 1
42,743 | 5,468,693,395 | IssuesEvent | 2017-03-10 07:19:26 | radare/radare2 | https://api.github.com/repos/radare/radare2 | reopened | Functions argument wrong recognition | bug has-test types | For example:
`xmalloc` is recognized as `malloc` because it contains "malloc" which is wrong for example
This caused https://github.com/radare/radare2/issues/6637#issuecomment-276950958
But the autorename function should still work for example `sub.strcoll_e1` should be recognized as `strcoll`. | 1.0 | Functions argument wrong recognition - For example:
`xmalloc` is recognized as `malloc` because it contains "malloc" which is wrong for example
This caused https://github.com/radare/radare2/issues/6637#issuecomment-276950958
But the autorename function should still work for example `sub.strcoll_e1` should be recognized as `strcoll`. | test | functions argument wrong recognition for example xmalloc is recognized as malloc because it contains malloc which is wrong for example this caused but the autorename function should still work for example sub strcoll should be recognized as strcoll | 1 |
297,974 | 25,778,255,274 | IssuesEvent | 2022-12-09 13:50:31 | eclipse-openj9/openj9 | https://api.github.com/repos/eclipse-openj9/openj9 | closed | JDK19 MauveMultiThrdLoad_5m_0_FAILED **FAILED** Process LT has timed out | test failure jdk19 | Failure link
------------
From an internal build `job/Test_openjdkNext_j9_sanity.system_ppc64_aix_Personal/6/tapResults/`(`paix908`):
```
11:59:14 openjdk version "19-internal" 2022-09-20
11:59:14 OpenJDK Runtime Environment (build 19-internal-adhoc.jenkins.BuildJDKnextppc64aixPersonal)
11:59:14 Eclipse OpenJ9 VM (build exclude19-52f04efbff5, JRE 19 AIX ppc64-64-Bit Compressed References 20220607_85 (JIT enabled, AOT enabled)
11:59:14 OpenJ9 - 52f04efbff5
11:59:14 OMR - c60867497c6
11:59:14 JCL - 5ccf02de16a based on jdk-19+25)
```
[Rerun in Grinder](https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder/parambuild/?SDK_RESOURCE=customized&TARGET=sanity.system&TEST_FLAG=&UPSTREAM_TEST_JOB_NAME=&DOCKER_REQUIRED=false&ACTIVE_NODE_TIMEOUT=&VENDOR_TEST_DIRS=functional&EXTRA_DOCKER_ARGS=&TKG_OWNER_BRANCH=adoptium%3Amaster&OPENJ9_SYSTEMTEST_OWNER_BRANCH=eclipse%3Amaster&PLATFORM=ppc64_aix&GENERATE_JOBS=true&KEEP_REPORTDIR=false&PERSONAL_BUILD=false&ADOPTOPENJDK_REPO=https%3A%2F%2Fgithub.com%2Fadoptium%2Faqa-tests.git&LABEL=&EXTRA_OPTIONS=&BUILD_IDENTIFIER=fengj%40ca.ibm.com&CUSTOMIZED_SDK_URL=https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2FBuild_JDKnext_ppc64_aix_Personal%2F85%2FOpenJ9-JDKnext-ppc64_aix-20220607-105735.tar.gz+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2FBuild_JDKnext_ppc64_aix_Personal%2F85%2Ftest-images.tar.gz&ADOPTOPENJDK_BRANCH=master&LIGHT_WEIGHT_CHECKOUT=true&USE_JRE=false&ARTIFACTORY_SERVER=na.artifactory.swg-devops&KEEP_WORKSPACE=false&USER_CREDENTIALS_ID=83181e25-eea4-4f55-8b3e-e79615733226&JDK_VERSION=next&ITERATIONS=1&VENDOR_TEST_REPOS=git%40github.ibm.com%3Aruntimes%2Ftest.git&JDK_REPO=git%40github.com%3Aibmruntimes%2Fopenj9-openjdk-jdk.git&OPENJ9_BRANCH=exclude19&OPENJ9_SHA=52f04efbff5d2f4613971cdd550c6802117f0fd7&JCK_GIT_REPO=&VENDOR_TEST_BRANCHES=master&OPENJ9_REPO=git%40github.com%3AJasonFengJ9%2Fopenj9.git&UPSTREAM_JOB_NAME=&CLOUD_PROVIDER=&CUSTOM_TARGET=&VENDOR_TEST_SHAS=8351d880d75daf01e631d8fc6abff75a40a68158&JDK_BRANCH=openj9&LABEL_ADDITION=ci.project.openj9&ARTIFACTORY_REPO=sys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com&ARTIFACTORY_ROOT_DIR=&UPSTREAM_TEST_JOB_NUMBER=&DOCKERIMAGE_TAG=&JDK_IMPL=openj9&TEST_TIME=&SSH_AGENT_CREDENTIAL=83181e25-eea4-4f55-8b3e-e79615733226&AUTO_DETECT=true&SLACK_CHANNEL=&DYNAMIC_COMPILE=false&ADOPTOPENJDK_SYSTEMTEST_OWNER_BRANCH=adoptium%3Amaster&CUSTOMIZED_SDK_URL_CREDENTIAL
_ID=4e18ffe7-b1b1-4272-9979-99769b68bcc2&ARCHIVE_TEST_RESULTS=false&NUM_MACHINES=3&OPENJDK_SHA=&TRSS_URL=http%3A%2F%2Ftrss1.fyre.ibm.com&USE_TESTENV_PROPERTIES=false&BUILD_LIST=system&UPSTREAM_JOB_NUMBER=&STF_OWNER_BRANCH=adoptium%3Amaster&TIME_LIMIT=20&JVM_OPTIONS=&PARALLEL=Dynamic) - Change TARGET to run only the failed test targets.
Optional info
-------------
Failure output (captured from console output)
---------------------------------------------
```
===============================================
Running test MauveMultiThrdLoad_5m_0 ...
===============================================
MauveMultiThrdLoad_5m_0 Start Time: Tue Jun 7 12:49:30 2022 Epoch Time (ms): 1654624170600
variation: Mode150
JVM_OPTIONS: -XX:+UseCompressedOops
STF 13:54:34.215 - Heartbeat: Process LT is still running
STF 13:54:36.222 - **FAILED** Process LT has timed out
STF 13:54:36.222 - Collecting dumps for: LT
STF 13:54:36.222 - Sending SIG 3 to the java process to generate a javacore
STF 13:56:36.277 - Monitoring Report Summary:
STF 13:56:36.277 - o Process LT has timed out
STF 13:56:36.278 - Killing processes: LT
STF 13:56:36.278 - o Process clean up attempt 1 for LT pid 10682842
STF 13:56:36.278 - o Process LT pid 10682842 stop()
STF 13:56:37.279 - o Process LT pid 10682842 killed
**FAILED** at step 1 (Run Mauve load test). Expected return value=0 Actual=1 at /home/jenkins/workspace/Test_openjdkNext_j9_sanity.system_ppc64_aix_Personal_testList_0/aqa-tests/TKG/../TKG/output_16546186426493/MauveMultiThrdLoad_5m_0/20220607-124930-MauveMultiThrdLoad/execute.pl line 95.
STF 13:56:37.768 - **FAILED** execute script failed. Expected return value=0 Actual=1
STF 13:56:37.768 -
STF 13:56:37.768 - ==================== T E A R D O W N ====================
STF 13:56:37.768 - Running teardown: perl /home/jenkins/workspace/Test_openjdkNext_j9_sanity.system_ppc64_aix_Personal_testList_0/aqa-tests/TKG/../TKG/output_16546186426493/MauveMultiThrdLoad_5m_0/20220607-124930-MauveMultiThrdLoad/tearDown.pl
STF 13:56:37.871 - TEARDOWN stage completed
STF 13:56:37.878 -
STF 13:56:37.878 - ===================== R E S U L T S =====================
STF 13:56:37.878 - Stage results:
STF 13:56:37.878 - setUp: pass
STF 13:56:37.878 - execute: *fail*
STF 13:56:37.878 - teardown: pass
STF 13:56:37.878 -
STF 13:56:37.878 - Overall result: **FAILED**
MauveMultiThrdLoad_5m_0_FAILED
``` | 1.0 | JDK19 MauveMultiThrdLoad_5m_0_FAILED **FAILED** Process LT has timed out - Failure link
------------
From an internal build `job/Test_openjdkNext_j9_sanity.system_ppc64_aix_Personal/6/tapResults/`(`paix908`):
```
11:59:14 openjdk version "19-internal" 2022-09-20
11:59:14 OpenJDK Runtime Environment (build 19-internal-adhoc.jenkins.BuildJDKnextppc64aixPersonal)
11:59:14 Eclipse OpenJ9 VM (build exclude19-52f04efbff5, JRE 19 AIX ppc64-64-Bit Compressed References 20220607_85 (JIT enabled, AOT enabled)
11:59:14 OpenJ9 - 52f04efbff5
11:59:14 OMR - c60867497c6
11:59:14 JCL - 5ccf02de16a based on jdk-19+25)
```
[Rerun in Grinder](https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder/parambuild/?SDK_RESOURCE=customized&TARGET=sanity.system&TEST_FLAG=&UPSTREAM_TEST_JOB_NAME=&DOCKER_REQUIRED=false&ACTIVE_NODE_TIMEOUT=&VENDOR_TEST_DIRS=functional&EXTRA_DOCKER_ARGS=&TKG_OWNER_BRANCH=adoptium%3Amaster&OPENJ9_SYSTEMTEST_OWNER_BRANCH=eclipse%3Amaster&PLATFORM=ppc64_aix&GENERATE_JOBS=true&KEEP_REPORTDIR=false&PERSONAL_BUILD=false&ADOPTOPENJDK_REPO=https%3A%2F%2Fgithub.com%2Fadoptium%2Faqa-tests.git&LABEL=&EXTRA_OPTIONS=&BUILD_IDENTIFIER=fengj%40ca.ibm.com&CUSTOMIZED_SDK_URL=https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2FBuild_JDKnext_ppc64_aix_Personal%2F85%2FOpenJ9-JDKnext-ppc64_aix-20220607-105735.tar.gz+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2FBuild_JDKnext_ppc64_aix_Personal%2F85%2Ftest-images.tar.gz&ADOPTOPENJDK_BRANCH=master&LIGHT_WEIGHT_CHECKOUT=true&USE_JRE=false&ARTIFACTORY_SERVER=na.artifactory.swg-devops&KEEP_WORKSPACE=false&USER_CREDENTIALS_ID=83181e25-eea4-4f55-8b3e-e79615733226&JDK_VERSION=next&ITERATIONS=1&VENDOR_TEST_REPOS=git%40github.ibm.com%3Aruntimes%2Ftest.git&JDK_REPO=git%40github.com%3Aibmruntimes%2Fopenj9-openjdk-jdk.git&OPENJ9_BRANCH=exclude19&OPENJ9_SHA=52f04efbff5d2f4613971cdd550c6802117f0fd7&JCK_GIT_REPO=&VENDOR_TEST_BRANCHES=master&OPENJ9_REPO=git%40github.com%3AJasonFengJ9%2Fopenj9.git&UPSTREAM_JOB_NAME=&CLOUD_PROVIDER=&CUSTOM_TARGET=&VENDOR_TEST_SHAS=8351d880d75daf01e631d8fc6abff75a40a68158&JDK_BRANCH=openj9&LABEL_ADDITION=ci.project.openj9&ARTIFACTORY_REPO=sys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com&ARTIFACTORY_ROOT_DIR=&UPSTREAM_TEST_JOB_NUMBER=&DOCKERIMAGE_TAG=&JDK_IMPL=openj9&TEST_TIME=&SSH_AGENT_CREDENTIAL=83181e25-eea4-4f55-8b3e-e79615733226&AUTO_DETECT=true&SLACK_CHANNEL=&DYNAMIC_COMPILE=false&ADOPTOPENJDK_SYSTEMTEST_OWNER_BRANCH=adoptium%3Amaster&CUSTOMIZED_SDK_URL_CREDENTIAL
_ID=4e18ffe7-b1b1-4272-9979-99769b68bcc2&ARCHIVE_TEST_RESULTS=false&NUM_MACHINES=3&OPENJDK_SHA=&TRSS_URL=http%3A%2F%2Ftrss1.fyre.ibm.com&USE_TESTENV_PROPERTIES=false&BUILD_LIST=system&UPSTREAM_JOB_NUMBER=&STF_OWNER_BRANCH=adoptium%3Amaster&TIME_LIMIT=20&JVM_OPTIONS=&PARALLEL=Dynamic) - Change TARGET to run only the failed test targets.
Optional info
-------------
Failure output (captured from console output)
---------------------------------------------
```
===============================================
Running test MauveMultiThrdLoad_5m_0 ...
===============================================
MauveMultiThrdLoad_5m_0 Start Time: Tue Jun 7 12:49:30 2022 Epoch Time (ms): 1654624170600
variation: Mode150
JVM_OPTIONS: -XX:+UseCompressedOops
STF 13:54:34.215 - Heartbeat: Process LT is still running
STF 13:54:36.222 - **FAILED** Process LT has timed out
STF 13:54:36.222 - Collecting dumps for: LT
STF 13:54:36.222 - Sending SIG 3 to the java process to generate a javacore
STF 13:56:36.277 - Monitoring Report Summary:
STF 13:56:36.277 - o Process LT has timed out
STF 13:56:36.278 - Killing processes: LT
STF 13:56:36.278 - o Process clean up attempt 1 for LT pid 10682842
STF 13:56:36.278 - o Process LT pid 10682842 stop()
STF 13:56:37.279 - o Process LT pid 10682842 killed
**FAILED** at step 1 (Run Mauve load test). Expected return value=0 Actual=1 at /home/jenkins/workspace/Test_openjdkNext_j9_sanity.system_ppc64_aix_Personal_testList_0/aqa-tests/TKG/../TKG/output_16546186426493/MauveMultiThrdLoad_5m_0/20220607-124930-MauveMultiThrdLoad/execute.pl line 95.
STF 13:56:37.768 - **FAILED** execute script failed. Expected return value=0 Actual=1
STF 13:56:37.768 -
STF 13:56:37.768 - ==================== T E A R D O W N ====================
STF 13:56:37.768 - Running teardown: perl /home/jenkins/workspace/Test_openjdkNext_j9_sanity.system_ppc64_aix_Personal_testList_0/aqa-tests/TKG/../TKG/output_16546186426493/MauveMultiThrdLoad_5m_0/20220607-124930-MauveMultiThrdLoad/tearDown.pl
STF 13:56:37.871 - TEARDOWN stage completed
STF 13:56:37.878 -
STF 13:56:37.878 - ===================== R E S U L T S =====================
STF 13:56:37.878 - Stage results:
STF 13:56:37.878 - setUp: pass
STF 13:56:37.878 - execute: *fail*
STF 13:56:37.878 - teardown: pass
STF 13:56:37.878 -
STF 13:56:37.878 - Overall result: **FAILED**
MauveMultiThrdLoad_5m_0_FAILED
``` | test | mauvemultithrdload failed failed process lt has timed out failure link from an internal build job test openjdknext sanity system aix personal tapresults openjdk version internal openjdk runtime environment build internal adhoc jenkins eclipse vm build jre aix bit compressed references jit enabled aot enabled omr jcl based on jdk change target to run only the failed test targets optional info failure output captured from console output running test mauvemultithrdload mauvemultithrdload start time tue jun epoch time ms variation jvm options xx usecompressedoops stf heartbeat process lt is still running stf failed process lt has timed out stf collecting dumps for lt stf sending sig to the java process to generate a javacore stf monitoring report summary stf o process lt has timed out stf killing processes lt stf o process clean up attempt for lt pid stf o process lt pid stop stf o process lt pid killed failed at step run mauve load test expected return value actual at home jenkins workspace test openjdknext sanity system aix personal testlist aqa tests tkg tkg output mauvemultithrdload mauvemultithrdload execute pl line stf failed execute script failed expected return value actual stf stf t e a r d o w n stf running teardown perl home jenkins workspace test openjdknext sanity system aix personal testlist aqa tests tkg tkg output mauvemultithrdload mauvemultithrdload teardown pl stf teardown stage completed stf stf r e s u l t s stf stage results stf setup pass stf execute fail stf teardown pass stf stf overall result failed mauvemultithrdload failed | 1 |
249,586 | 18,858,213,549 | IssuesEvent | 2021-11-12 09:30:43 | jackgugz/pe | https://api.github.com/repos/jackgugz/pe | opened | text size in Sequence Diagram for Restricting Commands based on GUI state is too small | severity.VeryLow type.DocumentationBug | 
as shown the text are too small and especially when compared with the text, making it hard to read.
<!--session: 1636703357773-ac7350ec-451f-4741-8da6-43b08c16732a-->
<!--Version: Web v3.4.1--> | 1.0 | text size in Sequence Diagram for Restricting Commands based on GUI state is too small - 
as shown the text are too small and especially when compared with the text, making it hard to read.
<!--session: 1636703357773-ac7350ec-451f-4741-8da6-43b08c16732a-->
<!--Version: Web v3.4.1--> | non_test | text size in sequence diagram for restricting commands based on gui state is too small as shown the text are too small and especially when compared with the text making it hard to read | 0 |
16,638 | 2,615,120,439 | IssuesEvent | 2015-03-01 05:46:42 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | closed | BigQuery library | auto-migrated Component-Google-APIs Priority-Medium Type-Task | ```
External references, such as a standards document, or specification?
http://code.google.com/apis/bigquery/
https://www.googleapis.com/discovery/v1/apis/bigquery/v1/rest
Java environments (e.g. Java 6, Android 2.3, App Engine 1.4.2, or All)?
All
Please describe the feature requested.
Need a generated library for BigQuery so I can update the sample.
```
Original issue reported on code.google.com by `yan...@google.com` on 13 May 2011 at 12:53 | 1.0 | BigQuery library - ```
External references, such as a standards document, or specification?
http://code.google.com/apis/bigquery/
https://www.googleapis.com/discovery/v1/apis/bigquery/v1/rest
Java environments (e.g. Java 6, Android 2.3, App Engine 1.4.2, or All)?
All
Please describe the feature requested.
Need a generated library for BigQuery so I can update the sample.
```
Original issue reported on code.google.com by `yan...@google.com` on 13 May 2011 at 12:53 | non_test | bigquery library external references such as a standards document or specification java environments e g java android app engine or all all please describe the feature requested need a generated library for bigquery so i can update the sample original issue reported on code google com by yan google com on may at | 0 |
7,357 | 17,606,479,910 | IssuesEvent | 2021-08-17 17:46:25 | Azure/azure-sdk | https://api.github.com/repos/Azure/azure-sdk | opened | [stress testing] Event Hubs - send and receive for 1 week with network stress | architecture board-review | Send and receive for 1 week, with network stress via chaos-mesh.
- [ ] Track sent and received messages - all messages sent also arrive. | 1.0 | [stress testing] Event Hubs - send and receive for 1 week with network stress - Send and receive for 1 week, with network stress via chaos-mesh.
- [ ] Track sent and received messages - all messages sent also arrive. | non_test | event hubs send and receive for week with network stress send and receive for week with network stress via chaos mesh track sent and received messages all messages sent also arrive | 0 |
237,465 | 19,636,124,341 | IssuesEvent | 2022-01-08 10:04:28 | mangrovedao/mangrove | https://api.github.com/repos/mangrovedao/mangrove | opened | Flaky test: mangrove-solidity - test:polygon-mainnet - Deploy offerProxy | bug flaky test | The test "Deploy offerProxy" in the test:polygon-mainnet suite of the mangrove-solidity package, occasionally fails.
Here are some failed runs:
2022-01-04:
```
1) Deploy offerProxy
Offer proxy on aave:
AssertionError: Incorrect DAI lending balance on aave (truncated at 4 decimals):
Expected: 200.0
Given: 1000.000005452532923067
at assertAlmost (lib/libcommon.js:312:3)
at Object.expectAmountOnLender (lib/libcommon.js:684:5)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at runNextTicks (internal/process/task_queues.js:64:3)
at listOnTimeout (internal/timers.js:526:9)
at processTimers (internal/timers.js:500:7)
at async execLenderStrat (test/Exec/lenderStrats.js:53:3)
at async Context.<anonymous> (test/test-offerProxy.js:147:5)
```
https://github.com/mangrovedao/mangrove/runs/4701792082?check_suite_focus=true#step:11:298
2022-01-08:
```
1) Deploy offerProxy
Offer proxy on aave:
AssertionError: Snipe failed
at Object.snipeSuccess (lib/libcommon.js:522:3)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async execLenderStrat (test/Exec/lenderStrats.js:31:31)
at async Context.<anonymous> (test/test-offerProxy.js:147:5)
```
https://github.com/mangrovedao/mangrove/runs/4747252916?check_suite_focus=true#step:11:293 | 1.0 | Flaky test: mangrove-solidity - test:polygon-mainnet - Deploy offerProxy - The test "Deploy offerProxy" in the test:polygon-mainnet suite of the mangrove-solidity package, occasionally fails.
Here are some failed runs:
2022-01-04:
```
1) Deploy offerProxy
Offer proxy on aave:
AssertionError: Incorrect DAI lending balance on aave (truncated at 4 decimals):
Expected: 200.0
Given: 1000.000005452532923067
at assertAlmost (lib/libcommon.js:312:3)
at Object.expectAmountOnLender (lib/libcommon.js:684:5)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at runNextTicks (internal/process/task_queues.js:64:3)
at listOnTimeout (internal/timers.js:526:9)
at processTimers (internal/timers.js:500:7)
at async execLenderStrat (test/Exec/lenderStrats.js:53:3)
at async Context.<anonymous> (test/test-offerProxy.js:147:5)
```
https://github.com/mangrovedao/mangrove/runs/4701792082?check_suite_focus=true#step:11:298
2022-01-08:
```
1) Deploy offerProxy
Offer proxy on aave:
AssertionError: Snipe failed
at Object.snipeSuccess (lib/libcommon.js:522:3)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async execLenderStrat (test/Exec/lenderStrats.js:31:31)
at async Context.<anonymous> (test/test-offerProxy.js:147:5)
```
https://github.com/mangrovedao/mangrove/runs/4747252916?check_suite_focus=true#step:11:293 | test | flaky test mangrove solidity test polygon mainnet deploy offerproxy the test deploy offerproxy in the test polygon mainnet suite of the mangrove solidity package occasionally fails here are some failed runs deploy offerproxy offer proxy on aave assertionerror incorrect dai lending balance on aave truncated at decimals expected given at assertalmost lib libcommon js at object expectamountonlender lib libcommon js at runmicrotasks at processticksandrejections internal process task queues js at runnextticks internal process task queues js at listontimeout internal timers js at processtimers internal timers js at async execlenderstrat test exec lenderstrats js at async context test test offerproxy js deploy offerproxy offer proxy on aave assertionerror snipe failed at object snipesuccess lib libcommon js at runmicrotasks at processticksandrejections internal process task queues js at async execlenderstrat test exec lenderstrats js at async context test test offerproxy js | 1 |
331,118 | 28,507,778,047 | IssuesEvent | 2023-04-18 23:39:50 | InstituteforDiseaseModeling/PACE-HRH | https://api.github.com/repos/InstituteforDiseaseModeling/PACE-HRH | opened | Validate_cadre test passed message/plot | testing | Add a message or plot when the cadre sheet validation tests all pass.
A potential plot could be for each scenario, plot the start year and end year on x-axis and RoleID on y-axis (geom_point). | 1.0 | Validate_cadre test passed message/plot - Add a message or plot when the cadre sheet validation tests all pass.
A potential plot could be for each scenario, plot the start year and end year on x-axis and RoleID on y-axis (geom_point). | test | validate cadre test passed message plot add a message or plot when the cadre sheet validation tests all pass a potential plot could be for each scenario plot the start year and end year on x axis and roleid on y axis geom point | 1 |
141,752 | 12,977,516,890 | IssuesEvent | 2020-07-21 20:48:31 | GaloisInc/saw-script | https://api.github.com/repos/GaloisInc/saw-script | opened | Broken link in the HTML rendering of the SAW manual | bug documentation | Under "Offline Provers" in the SAW manual, there is a relative link to `../extcore.md`; this file exists where expected in the `doc` directory of `saw-script`, but whatever renders the Markdown as HTML for publication at https://saw.galois.com does not update this to a link to a rendered version of this additional documentation. | 1.0 | Broken link in the HTML rendering of the SAW manual - Under "Offline Provers" in the SAW manual, there is a relative link to `../extcore.md`; this file exists where expected in the `doc` directory of `saw-script`, but whatever renders the Markdown as HTML for publication at https://saw.galois.com does not update this to a link to a rendered version of this additional documentation. | non_test | broken link in the html rendering of the saw manual under offline provers in the saw manual there is a relative link to extcore md this file exists where expected in the doc directory of saw script but whatever renders the markdown as html for publication at does not update this to a link to a rendered version of this additional documentation | 0 |
98,162 | 8,675,031,678 | IssuesEvent | 2018-11-30 09:41:35 | humera987/FXLabs-Test-Automation | https://api.github.com/repos/humera987/FXLabs-Test-Automation | reopened | FXLabs Testing 30 : ApiV1JobsProjectIdIdGetPathParamIdMysqlSqlInjectionTimebound | FXLabs Testing 30 | Project : FXLabs Testing 30
Job : uat
Env : uat
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZTE0ZWI3YTEtMzFhMC00N2RiLTg2OTgtYjU3MjYxOGFmZTk2; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 09:37:19 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/jobs/project-id/
Request :
Response :
{
"timestamp" : "2018-11-30T09:37:19.999+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/jobs/project-id/"
}
Logs :
Assertion [@ResponseTime < 7000 OR @ResponseTime > 10000] resolved-to [645 < 7000 OR 645 > 10000] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot --- | 1.0 | FXLabs Testing 30 : ApiV1JobsProjectIdIdGetPathParamIdMysqlSqlInjectionTimebound - Project : FXLabs Testing 30
Job : uat
Env : uat
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZTE0ZWI3YTEtMzFhMC00N2RiLTg2OTgtYjU3MjYxOGFmZTk2; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 09:37:19 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/jobs/project-id/
Request :
Response :
{
"timestamp" : "2018-11-30T09:37:19.999+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/jobs/project-id/"
}
Logs :
Assertion [@ResponseTime < 7000 OR @ResponseTime > 10000] resolved-to [645 < 7000 OR 645 > 10000] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot --- | test | fxlabs testing project fxlabs testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api jobs project id logs assertion resolved to result assertion resolved to result fx bot | 1 |
195,274 | 14,711,587,529 | IssuesEvent | 2021-01-05 07:40:12 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | [Testerina] Function Mocking uses Illegal Reflective Access | Component/Testerina Priority/High Team/TestFramework | **Description:**
The current implementation of Function mocking makes use of reflection to access some classes, which throws a Warning when Function mocking is used.
```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.ballerinalang.testerina.natives.mock.FunctionMock (file:/Users/mohamedaquibzulfikar/Desktop/TESTING_SLP6/ballerina-swan-lake-preview6-SNAPSHOT/distributions/ballerina-slp6/repo/balo/ballerina/test/0.0.0/platform/java11/testerina-core-2.0.0-Preview6-SNAPSHOT.jar) to field java.lang.ClassLoader.classes
WARNING: Please consider reporting this to the maintainers of org.ballerinalang.testerina.natives.mock.FunctionMock
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
The reflection is used when accessing the class name for the Mock method. This implementation needs to be improved.
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| 2.0 | [Testerina] Function Mocking uses Illegal Reflective Access - **Description:**
The current implementation of Function mocking makes use of reflection to access some classes, which throws a Warning when Function mocking is used.
```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.ballerinalang.testerina.natives.mock.FunctionMock (file:/Users/mohamedaquibzulfikar/Desktop/TESTING_SLP6/ballerina-swan-lake-preview6-SNAPSHOT/distributions/ballerina-slp6/repo/balo/ballerina/test/0.0.0/platform/java11/testerina-core-2.0.0-Preview6-SNAPSHOT.jar) to field java.lang.ClassLoader.classes
WARNING: Please consider reporting this to the maintainers of org.ballerinalang.testerina.natives.mock.FunctionMock
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
The reflection is used when accessing the class name for the Mock method. This implementation needs to be improved.
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| test | function mocking uses illegal reflective access description the current implementation of function mocking makes use of reflection to access some classes which throws a warning when function mocking is used warning an illegal reflective access operation has occurred warning illegal reflective access by org ballerinalang testerina natives mock functionmock file users mohamedaquibzulfikar desktop testing ballerina swan lake snapshot distributions ballerina repo balo ballerina test platform testerina core snapshot jar to field java lang classloader classes warning please consider reporting this to the maintainers of org ballerinalang testerina natives mock functionmock warning use illegal access warn to enable warnings of further illegal reflective access operations warning all illegal access operations will be denied in a future release the reflection is used when accessing the class name for the mock method this implementation needs to be improved steps to reproduce affected versions os db other environment details and versions related issues optional suggested labels optional suggested assignees optional | 1 |
193,587 | 15,380,678,739 | IssuesEvent | 2021-03-02 21:27:41 | r-spatial/rgee | https://api.github.com/repos/r-spatial/rgee | closed | Error: unexpected 'repeat': Conflict of ee$Reducer$repeat versus R's reserved repeat? | documentation | I believe there is a conflict between `ee$Reducer$repeat` and R base `repeat`. The problem is that as `repeat` is a reserved word in R, using `repeat` will throw an error: *Error: unexpected 'repeat'*
A simple solution is to quote `'repeat'` like: `ee$Reducer$'repeat'`. This will then work. I guess it will be hard to solve this at the rgee level, so hopefully this post will help people facing the same issue?
```
library(rgee)
rgee::ee_Initialize()
point = ee$Geometry$Point(c(-103.44862, 25.90856))
box = point$buffer(50)$bounds()
coords = ee$List(box$coordinates()$get(0))
minMax = ee$Dictionary(coords$reduce(ee$Reducer$minMax()$repeat(2))) # does not work
minMax = ee$Dictionary(coords$reduce(ee$Reducer$minMax()$'repeat'(2))) # works
```
| 1.0 | Error: unexpected 'repeat': Conflict of ee$Reducer$repeat versus R's reserved repeat? - I believe there is a conflict between `ee$Reducer$repeat` and R base `repeat`. The problem is that as `repeat` is a reserved word in R, using `repeat` will throw an error: *Error: unexpected 'repeat'*
A simple solution is to quote `'repeat'` like: `ee$Reducer$'repeat'`. This will then work. I guess it will be hard to solve this at the rgee level, so hopefully this post will help people facing the same issue?
```
library(rgee)
rgee::ee_Initialize()
point = ee$Geometry$Point(c(-103.44862, 25.90856))
box = point$buffer(50)$bounds()
coords = ee$List(box$coordinates()$get(0))
minMax = ee$Dictionary(coords$reduce(ee$Reducer$minMax()$repeat(2))) # does not work
minMax = ee$Dictionary(coords$reduce(ee$Reducer$minMax()$'repeat'(2))) # works
```
| non_test | error unexpected repeat conflict of ee reducer repeat versus r s reserved repeat i believe there is a conflict between ee reducer repeat and r base repeat the problem is that as repeat is a reserved word in r using repeat will throw an error error unexpected repeat a simple solution is to quote repeat like ee reducer repeat this will then work i guess it will be hard to solve this at the rgee level so hopefully this post will help people facing the same issue library rgee rgee ee initialize point ee geometry point c box point buffer bounds coords ee list box coordinates get minmax ee dictionary coords reduce ee reducer minmax repeat does not work minmax ee dictionary coords reduce ee reducer minmax repeat works | 0 |
65,209 | 27,018,368,403 | IssuesEvent | 2023-02-10 21:54:11 | hashicorp/terraform-provider-aws | https://api.github.com/repos/hashicorp/terraform-provider-aws | closed | [Bug]: create appflow_flow_flow resource with sap_odata connector as source not possible | bug service/appflow | ### Terraform Core Version
1.3.6
### AWS Provider Version
4.48.0
### Affected Resource(s)
* aws_appflow_flow
### Expected Behavior
When creating a new AppFlow Flow via Terraform "aws_appflow_flow" resource with "source_flow_config.connector_type" set to value "SAPOData" and respective "source_connector_properties" configured as required, the related AppFlow "flow" resource should be configured as expected.
### Actual Behavior
Trying to setup a new AppFlow Flow via Terraform "aws_appflow_flow" resource with "source_flow_config.connector_type" set to value "SAPOData" fails with Validation Exception and the statement, that the given "source_connector_properties" are invalid.
### Relevant Error/Panic Output Snippet
```shell
aws_appflow_flow.sap_to_s3_flow: Creating...
╷
│ Error: creating Appflow Flow (sap-rpq-fi-to-snowflake-data-flow): ValidationException: Create Flow request failed: [Invalid sourceConnectorProperties]
│
│ with aws_appflow_flow.sap_to_s3_flow,
│ on main.tf line 43, in resource "aws_appflow_flow" "sap_to_s3_flow":
│ 43: resource "aws_appflow_flow" "sap_to_s3_flow" {
│
╵
```
### Terraform Configuration Files
```
resource "aws_appflow_flow" "sap_to_s3_flow" {
name = var.flow_name
source_flow_config {
connector_type = "SAPOData"
connector_profile_name = var.connector_profile_name
source_connector_properties {
sapo_data {
object = var.sap_odata_object_path
}
}
}
destination_flow_config {
connector_type = "S3"
destination_connector_properties {
s3 {
bucket_name = var.destination_bucket_name
bucket_prefix = local.destination_bucket_file_prefix_configuration.static_file_prefix
s3_output_format_config {
aggregation_config {
aggregation_type = "None"
}
file_type = var.destination_bucket_file_type
}
}
}
}
task {
source_fields = [""]
task_type = "Map_all"
task_properties = {}
}
trigger_config {
trigger_type = "OnDemand"
}
}
```
### Steps to Reproduce
terraform apply
### Debug Output
_No response_
### Panic Output
_No response_
### Important Factoids
Please note the following additional information:
- According to the resource documentation https://registry.terraform.io/providers/hashicorp/aws/4.48.0/docs/resources/appflow_flow#sapodata-source-properties when the connector type for the source flow config is set to "SAPOData", a single (optional!) field called "object_path" can be given within the type's respective "source_connector_properties" block. But when implementing it accordingly using IntelliJ IDEA the only available field is called "object" and not "object_path"
- Furthermore, when trying to skip this field (as it should be optional according to the documentation), applying the resource fails with a validation error telling me that the "object" field is missing but required (which is another indicator that the field name, as well as its relevance (required or optional), differs from the documentation)
- When checking the AWS API reference (https://docs.aws.amazon.com/appflow/1.0/APIReference/API_CreateFlow.html) the respective field is called "objectPath" as mentioned in the documentation, but still the code behaves differently
- Have tried it on different machines, but colleagues who were trying to apply my code ran into the same problem
This leads to the fact that currently it is not possible to set up an AppFlow resource configured as explained via Terraform.
### References
_No response_
### Would you like to implement a fix?
yes | 1.0 | [Bug]: create appflow_flow_flow resource with sap_odata connector as source not possible - ### Terraform Core Version
1.3.6
### AWS Provider Version
4.48.0
### Affected Resource(s)
* aws_appflow_flow
### Expected Behavior
When creating a new AppFlow Flow via Terraform "aws_appflow_flow" resource with "source_flow_config.connector_type" set to value "SAPOData" and respective "source_connector_properties" configured as required, the related AppFlow "flow" resource should be configured as expected.
### Actual Behavior
Trying to set up a new AppFlow Flow via Terraform "aws_appflow_flow" resource with "source_flow_config.connector_type" set to value "SAPOData" fails with a Validation Exception and the statement that the given "source_connector_properties" are invalid.
### Relevant Error/Panic Output Snippet
```shell
aws_appflow_flow.sap_to_s3_flow: Creating...
╷
│ Error: creating Appflow Flow (sap-rpq-fi-to-snowflake-data-flow): ValidationException: Create Flow request failed: [Invalid sourceConnectorProperties]
│
│ with aws_appflow_flow.sap_to_s3_flow,
│ on main.tf line 43, in resource "aws_appflow_flow" "sap_to_s3_flow":
│ 43: resource "aws_appflow_flow" "sap_to_s3_flow" {
│
╵
```
### Terraform Configuration Files
```
resource "aws_appflow_flow" "sap_to_s3_flow" {
name = var.flow_name
source_flow_config {
connector_type = "SAPOData"
connector_profile_name = var.connector_profile_name
source_connector_properties {
sapo_data {
object = var.sap_odata_object_path
}
}
}
destination_flow_config {
connector_type = "S3"
destination_connector_properties {
s3 {
bucket_name = var.destination_bucket_name
bucket_prefix = local.destination_bucket_file_prefix_configuration.static_file_prefix
s3_output_format_config {
aggregation_config {
aggregation_type = "None"
}
file_type = var.destination_bucket_file_type
}
}
}
}
task {
source_fields = [""]
task_type = "Map_all"
task_properties = {}
}
trigger_config {
trigger_type = "OnDemand"
}
}
```
### Steps to Reproduce
terraform apply
### Debug Output
_No response_
### Panic Output
_No response_
### Important Factoids
Please note the following additional information:
- According to the resource documentation https://registry.terraform.io/providers/hashicorp/aws/4.48.0/docs/resources/appflow_flow#sapodata-source-properties when the connector type for the source flow config is set to "SAPOData", a single (optional!) field called "object_path" can be given within the type's respective "source_connector_properties" block. But when implementing it accordingly using IntelliJ IDEA the only available field is called "object" and not "object_path"
- Furthermore, when trying to skip this field (as it should be optional according to the documentation), applying the resource fails with a validation error telling me that the "object" field is missing but required (which is another indicator that the field name, as well as its relevance (required or optional), differs from the documentation)
- When checking the AWS API reference (https://docs.aws.amazon.com/appflow/1.0/APIReference/API_CreateFlow.html) the respective field is called "objectPath" as mentioned in the documentation, but still the code behaves differently
- Have tried it on different machines, but colleagues who were trying to apply my code ran into the same problem
This leads to the fact that currently it is not possible to set up an AppFlow resource configured as explained via Terraform.
### References
_No response_
### Would you like to implement a fix?
yes | non_test | create appflow flow flow resource with sap odata connector as source not possible terraform core version aws provider version affected resource s aws appflow flow expected behavior when creating a new appflow flow via terraform aws appflow flow resource with source flow config connector type set to value sapodata and respective source connector properties configured as required the related appflow flow resource should be configured as expected actual behavior trying to setup a new appflow flow via terraform aws appflow flow resource with source flow config connector type set to value sapodata fails with validation exception and the statement that the given source connector properties are invalid relevant error panic output snippet shell aws appflow flow sap to flow creating ╷ │ error creating appflow flow sap rpq fi to snowflake data flow validationexception create flow request failed │ │ with aws appflow flow sap to flow │ on main tf line in resource aws appflow flow sap to flow │ resource aws appflow flow sap to flow │ ╵ terraform configuration files resource aws appflow flow sap to flow name var flow name source flow config connector type sapodata connector profile name var connector profile name source connector properties sapo data object var sap odata object path destination flow config connector type destination connector properties bucket name var destination bucket name bucket prefix local destination bucket file prefix configuration static file prefix output format config aggregation config aggregation type none file type var destination bucket file type task source fields task type map all task properties trigger config trigger type ondemand steps to reproduce terraform apply debug output no response panic output no response important factoids please note the following additional information according to the resources documentation when having the connector type for the source flow config set to sapodata a single optional field called 
object path is can be given within the type s respective source connector properties block but when implementing it accordingly using intellij idea the only available field is called object and not object path furthermore when trying to skip specifiying this field as it should be optional accroding to the documentation applying the resource fails with validation error telling me that the object field is missing but required which also is an indicator that field name as well as it s relevance required or optional is differenting from the documentation when checking the aws api reference the respective field is called objectpath as mentioned in the documentation but still the code behaves differntly have tried it on different machines but collegues who were trying to apply my code ran into the same problem this leads to the fact that currently it is not possible to setup an appflow resource configured as explained via terraform references no response would you like to implement a fix yes | 0 |
224,906 | 7,473,778,895 | IssuesEvent | 2018-04-03 16:17:23 | EvictionLab/eviction-maps | https://api.github.com/repos/EvictionLab/eviction-maps | closed | Rankings aria-label attributes don't handle unavailable data | bug high priority | Unavailable data shows up as -1 in `aria-label` attributes currently | 1.0 | Rankings aria-label attributes don't handle unavailable data - Unavailable data shows up as -1 in `aria-label` attributes currently | non_test | rankings aria label attributes don t handle unavailable data unavailable data shows up as in aria label attributes currently | 0 |
347,946 | 31,388,530,703 | IssuesEvent | 2023-08-26 03:37:12 | Cookie-AutoDelete/Cookie-AutoDelete | https://api.github.com/repos/Cookie-AutoDelete/Cookie-AutoDelete | closed | Translation not loaded (gl) | untested bug/issue incomplete | ### Acknowledgements
- [X] I acknowledge that I have read the above items
### Describe the bug
CAD translation from Crowdin not loaded in the code
https://github.com/Cookie-AutoDelete/Cookie-AutoDelete/blob/3.X.X-Branch/extension/_locales/gl/messages.json
I translated CAD many months ago (100%), but the translation never got included in my browser, so I came here to check whether it was already fetched from Crowdin, and I see that the translation file is empty.
### To Reproduce
install CAD in any browser (checked in firefox/debian and chromium/arch), set browser language to galician (gl)
```
- OS: x86-64 Linux (Arch)
- Browser Info: Firefox 116 (20230816233310)
- CookieAutoDelete Version: 3.8.2
```
### Expected Behavior
CAD UI should be using galician translation from crowdin
### Screenshots
_No response_
### System Info - Operating System (OS)
arch
### System Info - Browser Info
firefox 116
### System Info - CookieAutoDelete Version
3.8.2
### Additional Context
As a translator I would like to check and improve my contribution. As a user I would like to use CAD in my language.
thank you
- [X] I acknowledge that I have read the above items
### Describe the bug
CAD translation from Crowdin not loaded in the code
https://github.com/Cookie-AutoDelete/Cookie-AutoDelete/blob/3.X.X-Branch/extension/_locales/gl/messages.json
I translated CAD many months ago (100%), but the translation never got included in my browser, so I came here to check whether it was already fetched from Crowdin, and I see that the translation file is empty.
### To Reproduce
install CAD in any browser (checked in firefox/debian and chromium/arch), set browser language to galician (gl)
```
- OS: x86-64 Linux (Arch)
- Browser Info: Firefox 116 (20230816233310)
- CookieAutoDelete Version: 3.8.2
```
### Expected Behavior
CAD UI should be using galician translation from crowdin
### Screenshots
_No response_
### System Info - Operating System (OS)
arch
### System Info - Browser Info
firefox 116
### System Info - CookieAutoDelete Version
3.8.2
### Additional Context
As a translator I would like to check and improve my contribution. As a user I would like to use CAD in my language.
thnak you | test | translation not loaded gl acknowledgements i acknowledge that i have read the above items describe the bug cad translation from crowdin not loaded in the code i ve translated cad many months ago but never got updated in my browser to include it so came here to check if it was already fetched from crowdin and i see that translation file is empty to reproduce install cad in any browser checked in firefox debian and chromium arch set browser language to galician gl os linux arch browser info firefox cookieautodelete version expected behavior cad ui should be using galician translation from crowdin screenshots no response system info operating system os arch system info browser info firefox system info cookieautodelete version additional context as translator i would like check and improve my contribution as user i would like to used cad using my language thnak you | 1 |
259,930 | 22,577,670,560 | IssuesEvent | 2022-06-28 08:48:45 | microsoft/AzureStorageExplorer | https://api.github.com/repos/microsoft/AzureStorageExplorer | opened | The action 'Detach' is not localized on the context menu for one attached storage account | 🧪 testing :beetle: regression :globe_with_meridians: localization | **Storage Explorer Version:** 1.25.0-dev
**Build Number:** 20220628.2
**Branch:** main
**Platform/OS:** Windows 10/Linux Ubuntu 20.04/MacOS Monterey 12.4 (Apple M1 Pro)
**Language:** All languages
**How Found:** From running test case
**Regression From:** Previous release (1.24.3)
## Steps to Reproduce ##
1. Launch Storage Explorer.
2. Open 'Settings' -> Application (Regional Settings) -> Select '中文(简体)' -> Restart Storage Explorer.
3. Select one storage account -> Attach it.
4. Right click the attached storage account -> The context menu displays.
5. Check whether the action 'Detach' is localized.
## Expected Experience ##
The action 'Detach' is localized.

## Actual Experience ##
The action 'Detach' is not localized.

## Addition Context ##
This issue doesn't reproduce for attached services. | 1.0 | The action 'Detach' is not localized on the context menu for one attached storage account - **Storage Explorer Version:** 1.25.0-dev
**Build Number:** 20220628.2
**Branch:** main
**Platform/OS:** Windows 10/Linux Ubuntu 20.04/MacOS Monterey 12.4 (Apple M1 Pro)
**Language:** All languages
**How Found:** From running test case
**Regression From:** Previous release (1.24.3)
## Steps to Reproduce ##
1. Launch Storage Explorer.
2. Open 'Settings' -> Application (Regional Settings) -> Select '中文(简体)' -> Restart Storage Explorer.
3. Select one storage account -> Attach it.
4. Right click the attached storage account -> The context menu displays.
5. Check whether the action 'Detach' is localized.
## Expected Experience ##
The action 'Detach' is localized.

## Actual Experience ##
The action 'Detach' is not localized.

## Addition Context ##
This issue doesn't reproduce for attached services. | test | the action detach is not localized on the context menu for one attached storage account storage explorer version dev build number branch main platform os windows linux ubuntu macos monterey apple pro language all languages how found from running test case regression from previous release steps to reproduce launch storage explorer open settings application regional settings select 中文(简体) restart storage explorer select one storage account attach it right click the attached storage account the context menu displays check whether the action detach is localized expected experience the action detach is localized actual experience the action detach is not localized addition context this issue doesn t reproduce for attached services | 1 |
450,548 | 13,015,890,436 | IssuesEvent | 2020-07-26 02:30:15 | stax76/mpv.net | https://api.github.com/repos/stax76/mpv.net | closed | Multiple instance setting doesn't work | not an issue priority low | **Describe the bug**
I'm trying to set up mpv.net so I can play multiple different videos at once. Despite setting the `process-instance = multi` setting in the config I still only have a single instance running no matter what I try.
**To Reproduce**
Steps to reproduce the behavior:
1. Change the `process-instance` setting to `multi`
2. Open a video with mpv.net
3. Open a 2nd video with mpv.net
**Expected behavior**
A 2nd instance of mpv.net opens and plays the 2nd video.
**Actual behavior**
The video in the currently running instance changes instead.
**Additional context**
Windows 10 19041 (but existed in earlier updates too), mpv.net 5.4.4.0, using portable config
| 1.0 | Multiple instance setting doesn't work - **Describe the bug**
I'm trying to set up mpv.net so I can play multiple different videos at once. Despite setting the `process-instance = multi` setting in the config I still only have a single instance running no matter what I try.
**To Reproduce**
Steps to reproduce the behavior:
1. Change the `process-instance` setting to `multi`
2. Open a video with mpv.net
3. Open a 2nd video with mpv.net
**Expected behavior**
A 2nd instance of mpv.net opens and plays the 2nd video.
**Actual behavior**
The video in the currently running instance changes instead.
**Additional context**
Windows 10 19041 (but existed in earlier updates too), mpv.net 5.4.4.0, using portable config
| non_test | multiple instance setting doesn t work describe the bug i m trying to set up mpv net so i can play multiple different videos at once despite setting the process instance multi setting in the config i still only have a single instance running no matter what i try to reproduce steps to reproduce the behavior change the process instance setting to multi open a video with mpv net open a video with mpv net expected behavior a instance of mpv net opens and plays the video actual behavior the video in the currently running instance changes instead additional context windows but existed in earlier updates too mpv net using portable config | 0 |
241,683 | 26,256,883,037 | IssuesEvent | 2023-01-06 02:05:32 | trjahnke/the_hammer | https://api.github.com/repos/trjahnke/the_hammer | opened | CVE-2021-33503 (High) detected in urllib3-1.25.10-py2.py3-none-any.whl | security vulnerability | ## CVE-2021-33503 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urllib3-1.25.10-py2.py3-none-any.whl</b></p></summary>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/9f/f0/a391d1463ebb1b233795cabfc0ef38d3db4442339de68f847026199e69d7/urllib3-1.25.10-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/9f/f0/a391d1463ebb1b233795cabfc0ef38d3db4442339de68f847026199e69d7/urllib3-1.25.10-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- requests-2.24.0-py2.py3-none-any.whl (Root Library)
- :x: **urllib3-1.25.10-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in urllib3 before 1.26.5. When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect.
<p>Publish Date: 2021-06-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-33503>CVE-2021-33503</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg">https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg</a></p>
<p>Release Date: 2021-06-29</p>
<p>Fix Resolution: urllib3 - 1.26.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-33503 (High) detected in urllib3-1.25.10-py2.py3-none-any.whl - ## CVE-2021-33503 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urllib3-1.25.10-py2.py3-none-any.whl</b></p></summary>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/9f/f0/a391d1463ebb1b233795cabfc0ef38d3db4442339de68f847026199e69d7/urllib3-1.25.10-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/9f/f0/a391d1463ebb1b233795cabfc0ef38d3db4442339de68f847026199e69d7/urllib3-1.25.10-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- requests-2.24.0-py2.py3-none-any.whl (Root Library)
- :x: **urllib3-1.25.10-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in urllib3 before 1.26.5. When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect.
<p>Publish Date: 2021-06-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-33503>CVE-2021-33503</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg">https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg</a></p>
<p>Release Date: 2021-06-29</p>
<p>Fix Resolution: urllib3 - 1.26.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in none any whl cve high severity vulnerability vulnerable library none any whl http library with thread safe connection pooling file post and more library home page a href path to dependency file requirements txt path to vulnerable library requirements txt dependency hierarchy requests none any whl root library x none any whl vulnerable library found in base branch master vulnerability details an issue was discovered in before when provided with a url containing many characters in the authority component the authority regular expression exhibits catastrophic backtracking causing a denial of service if a url were passed as a parameter or redirected to via an http redirect publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
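The catastrophic backtracking described in this CVE record can be illustrated with a toy pattern. This is a generic sketch, not urllib3's actual authority regex, and the exact timings will vary by machine:

```python
import re
import time

# Nested quantifiers force the regex engine to try exponentially many ways
# to split the input when the overall match fails -- the same failure mode
# the CVE describes for URLs containing many '@' characters.
pattern = re.compile(r"^(a+)+$")

for n in (10, 14, 18):
    s = "a" * n + "!"              # trailing "!" guarantees a failed match
    start = time.perf_counter()
    assert pattern.match(s) is None
    elapsed = time.perf_counter() - start
    print(f"n={n}: {elapsed:.4f}s")  # grows roughly 2x per extra 'a'
```

Per the advisory, urllib3 1.26.5 fixes this by changing the authority parsing so a flood of `@` characters no longer triggers the runaway backtracking.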
52,598 | 7,776,567,074 | IssuesEvent | 2018-06-05 08:29:46 | SenseNet/sensenet.github.io | https://api.github.com/repos/SenseNet/sensenet.github.io | opened | Guide docs | documentation | We have to review and rewrite our docs by the new guide-structure. These guides here should be well-structured, well organized, compact docs about a feature. | 1.0 | Guide docs - We have to review and rewrite our docs by the new guide-structure. These guides here should be well-structured, well organized, compact docs about a feature. | non_test | guide docs we have to review and rewrite our docs by the new guide structure these guides here should be well structured well organized compact docs about a feature | 0 |
193,539 | 14,656,108,824 | IssuesEvent | 2020-12-28 12:39:15 | isontheline/pro.webssh.net | https://api.github.com/repos/isontheline/pro.webssh.net | closed | Focus on a field doesn't scroll to the right field position | bug testflight-fixed | **Describe the bug**
On iPad, when a field (on the connection form) is focused, the screen scrolls but not to the right field position
- [x] Check library TPKeyboardAvoidingCollectionView
| 1.0 | Focus on a field doesn't scroll to the right field position - **Describe the bug**
On iPad, when a field (on the connection form) is focused, the screen scrolls but not to the right field position
- [x] Check library TPKeyboardAvoidingCollectionView
| test | focus on a field doesn t scroll to the right field position describe the bug on ipad when a field on connection form is focused the screen scroll but not to the right field position check library tpkeyboardavoidingcollectionview | 1 |
208,305 | 15,884,855,518 | IssuesEvent | 2021-04-09 19:34:24 | ZupIT/beagle | https://api.github.com/repos/ZupIT/beagle | closed | bug: fix analytics tests | android appium bug iOS tests | Appium's Analytics tests are failing on both Android and iOS. Scenarios 01 and 04.
Tested locally on macOS | 1.0 | bug: fix analytics tests - Appium's Analytics tests are failing on both Android and iOS. Scenarios 01 and 04.
Tested locally on macOS | test | bug fix analytics tests appium s analytics tests are failing on both android and ios scenarios and tested locally on macos | 1 |
230,957 | 18,726,548,430 | IssuesEvent | 2021-11-03 16:50:05 | w3c/wot-thing-description | https://api.github.com/repos/w3c/wot-thing-description | closed | What is the exact structure of TM placeholder identifiers? | Testing PR available Thing Model V1.1 | Regarding the identifiers of placeholders, currently the specification says :
> The string-based pattern of the placeholder MUST follow {{PLACEHOLDER_IDENTIFIER}}, which should contain a placeholder identifier name. The identifier name can be used to identify the placeholder for the substitution process. Placeholder can be only applied within the value of the JSON name-value pair and the value have to be typed as JSON string.
I think it is not clear in the following aspects:
- The requirement of starting with {{ and ending with }} is not clearly mentioned. It should be worded in the text with something like: Start and end with a double curly bracket
- The fact that a string can contain a placeholder and does not have to be only a placeholder, i.e. `"Thermostate No. {{THERMOSTATE_NUMBER}}"` is allowed. This is only understood via an example
- It does not have to be capital letters? Actually, even I am not sure about this.
- Allowed special characters. E.g. are the following allowed: `/`, `{`, `}`, `_`, `-`, `%` etc.
- Can it be only a special character, i.e. is the following allowed? `{{?}}` | 1.0 | What is the exact structure of TM placeholder identifiers? - Regarding the identifiers of placeholders, currently the specification says :
> The string-based pattern of the placeholder MUST follow {{PLACEHOLDER_IDENTIFIER}}, which should contain a placeholder identifier name. The identifier name can be used to identify the placeholder for the substitution process. Placeholder can be only applied within the value of the JSON name-value pair and the value have to be typed as JSON string.
I think it is not clear in the following aspects:
- The requirement of starting with {{ and ending with }} is not clearly mentioned. It should be worded in the text with something like: Start and end with a double curly bracket
- The fact that a string can contain a placeholder and does not have to be only a placeholder, i.e. `"Thermostate No. {{THERMOSTATE_NUMBER}}"` is allowed. This is only understood via an example
- It does not have to be capital letters? Actually, even I am not sure about this.
- Allowed special characters. E.g. are the following allowed: `/`, `{`, `}`, `_`, `-`, `%` etc.
- Can it be only a special character, i.e. is the following allowed? `{{?}}` | test | what is the exact structure of tm placeholder identifiers regarding the identifiers of placeholders currently the specification says the string based pattern of the placeholder must follow placeholder identifier which should contain a placeholder identifier name the identifier name can be used to identify the placeholder for the substitution process placeholder can be only applied within the value of the json name value pair and the value have to be typed as json string i think it is not clear in the following aspects the requirement of starting with and ending with is not clearly mentioned it should be worded in the text with something like start and end with a double curly bracket the fact that the a string can contain a placeholder and does not have to only a placeholder i e thermostate no thermostate number is allowed this is only understood via an example it does not have to be capital letters actually even i am not sure about this allowed special characters e g are the following allowed etc can it be only a special character i e is the following allowed | 1 |
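A minimal substitution pass makes these questions concrete. This Python sketch assumes identifiers are limited to letters, digits and underscores — exactly the charset question the issue leaves open — and reuses the `Thermostate` example from the record above:

```python
import re

# {{PLACEHOLDER}} substitution for a Thing Model string value.
# The identifier charset below is an assumption, not spec text.
PLACEHOLDER = re.compile(r"\{\{([A-Za-z0-9_]+)\}\}")

def fill(template: str, values: dict) -> str:
    # A placeholder may sit anywhere inside the string, e.g.
    # "Thermostate No. {{THERMOSTATE_NUMBER}}".
    return PLACEHOLDER.sub(lambda m: str(values[m.group(1)]), template)

print(fill("Thermostate No. {{THERMOSTATE_NUMBER}}", {"THERMOSTATE_NUMBER": 4}))
# -> Thermostate No. 4
```

Under this assumed charset, `{{?}}` would simply fail to match and be left untouched — which is one way the ambiguity flagged in the issue shows up in practice.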
238,413 | 18,240,844,768 | IssuesEvent | 2021-10-01 12:47:17 | girlscript/winter-of-contributing | https://api.github.com/repos/girlscript/winter-of-contributing | closed | ML 2.20 : Implement Gradient Descent without using any standard ML library like scikit-learn or more (D) | documentation GWOC21 ML Machine Learning | ### Description
### 📌 Issues for Week 2
Welcome to 'ML' Team, good to see you here
:arrow_forward: This issue will help readers in acquiring all the knowledge that one needs to know about ----
### **_Implement Gradient Descent without using any standard ML library like scikit-learn or more_**.
:red_circle: To get assigned to this issue, add your **Batch Numbers** mentioned in the spreadsheet of "Machine Learning", the approach you would follow, and the choice you prefer (Documentation, Audio, Video). You can go with all three or any number of options you're interested to work on.
✔️ Domain : **Machine Learning**
:red_circle::yellow_circle: **Points to Note :**
- The issues will be assigned on a first come first serve basis, 1 Issue == 1 PR.
- "Issue Title" and "PR Title should be the same. Include issue number along with it.
- Changes should be made inside the `Machine Learning/Machine_Learning/Supervised_Machine_Learning` Branch.
- Follow the Contributing Guidelines & Code of Conduct before you start contributing.
- This issue is only for 'GWOC' contributors of 'Machine Learning' domain.
- Dataset to be used : **IRIS DATASET**. (No other datasets will be entertained.)
************************************************************
:white_check_mark: **To be Mentioned while taking the issue :**
- Full name
- Batch Number
- GitHub Profile Link
- Which type of Contribution you want to make :
- [ ] Documentation (Coding in .ipynb file, show the implementation)
******************************************************************
Happy Contributing 🚀
All the best. Enjoy your open source journey ahead. 😎
### Domain
Machine Learning
### Type of Contribution
Documentation
### Code of Conduct
- [X] I follow [Contributing Guidelines](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CONTRIBUTING.md) & [Code of conduct](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CODE_OF_CONDUCT.md) of this project. | 1.0 | ML 2.20 : Implement Gradient Descent without using any standard ML library like scikit-learn or more (D) - ### Description
### 📌 Issues for Week 2
Welcome to 'ML' Team, good to see you here
:arrow_forward: This issue will help readers in acquiring all the knowledge that one needs to know about ----
### **_Implement Gradient Descent without using any standard ML library like scikit-learn or more_**.
:red_circle: To get assigned to this issue, add your **Batch Numbers** mentioned in the spreadsheet of "Machine Learning", the approach one would follow and choice you prefer (Documentation, Audio, Video). You can go with all three or any number of options you're interested to work on.
✔️ Domain : **Machine Learning**
:red_circle::yellow_circle: **Points to Note :**
- The issues will be assigned on a first come first serve basis, 1 Issue == 1 PR.
- "Issue Title" and "PR Title should be the same. Include issue number along with it.
- Changes should be made inside the `Machine Learning/Machine_Learning/Supervised_Machine_Learning` Branch.
- Follow Contributing Guidelines & Code of Conduct before start Contributing.
- This issue is only for 'GWOC' contributors of 'Machine Learning' domain.
- Dataset to be used : **IRIS DATASET**. (No other datasets will be entertained.)
************************************************************
:white_check_mark: **To be Mentioned while taking the issue :**
- Full name
- Batch Number
- GitHub Profile Link
- Which type of Contribution you want to make :
- [ ] Documentation (Coding in .ipynb file, show the implementation)
******************************************************************
Happy Contributing 🚀
All the best. Enjoy your open source journey ahead. 😎
### Domain
Machine Learning
### Type of Contribution
Documentation
### Code of Conduct
- [X] I follow [Contributing Guidelines](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CONTRIBUTING.md) & [Code of conduct](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CODE_OF_CONDUCT.md) of this project. | non_test | ml implement gradient descent without using any standard ml library like scikit learn or more d description 📌 issues for week welcome to ml team good to see you here arrow forward this issue will helps readers in acquiring all the knowledge that one needs to know about implement gradient descent without using any standard ml library like scikit learn or more red circle to get assigned to this issue add your batch numbers mentioned in the spreadsheet of machine learning the approach one would follow and choice you prefer documentation audio video you can go with all three or any number of options you re interested to work on ✔️ domain machine learning red circle yellow circle points to note the issues will be assigned on a first come first serve basis issue pr issue title and pr title should be the same include issue number along with it changes should be made inside the machine learning machine learning supervised machine learning branch follow contributing guidelines code of conduct before start contributing this issue is only for gwoc contributors of machine learning domain dataset to be used iris dataset no other datasets will be entertained white check mark to be mentioned while taking the issue full name batch number github profile link which type of contribution you want to make documentation coding in ipynb file show the implementation happy contributing 🚀 all the best enjoy your open source journey ahead 😎 domain machine learning type of contribution documentation code of conduct i follow of this project | 0 |
345 | 2,499,847,969 | IssuesEvent | 2015-01-08 06:53:07 | fossology/fossology | https://api.github.com/repos/fossology/fossology | opened | Incorrect link on the Duplicate Bucketpool page | Category: UI Component: Rank Component: Tester Priority: Low Status: Resolved Tracker: Bug | ---
Author Name: **Mary Laser**
Original Redmine Issue: 2019, http://www.fossology.org/issues/2019
Original Date: 2012/05/19
Original Assignee: Mary Laser
---
The page displayed by http://<hostname>/repo/?mod=admin_bucket_pool, Duplicate Bucketpool, has an incorrect link. "Creating Bucket Pools" points to http://fossology.org/buckets.
I *think* it's suppose to point to http://www.fossology.org/projects/fossology/wiki/Buckets
| 1.0 | Incorrect link on the Duplicate Bucketpool page - ---
Author Name: **Mary Laser**
Original Redmine Issue: 2019, http://www.fossology.org/issues/2019
Original Date: 2012/05/19
Original Assignee: Mary Laser
---
The page displayed by http://<hostname>/repo/?mod=admin_bucket_pool, Duplicate Bucketpool, has an incorrect link. "Creating Bucket Pools" points to http://fossology.org/buckets.
I *think* it's suppose to point to http://www.fossology.org/projects/fossology/wiki/Buckets
| test | incorrect link on the duplicate bucketpool page author name mary laser original redmine issue original date original assignee mary laser the page displayed by duplicate bucketpool has an incorrect link creating bucket pools points to i think it s suppose to point to | 1 |
288,936 | 31,931,021,587 | IssuesEvent | 2023-09-19 07:25:59 | Trinadh465/linux-4.1.15_CVE-2023-4128 | https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-4128 | opened | CVE-2019-10638 (Medium) detected in multiple libraries | Mend: dependency security vulnerability | ## CVE-2019-10638 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.6</b>, <b>linuxlinux-4.6</b>, <b>linuxlinux-4.6</b>, <b>linuxlinux-4.6</b>, <b>linuxlinux-4.6</b>, <b>linuxlinux-4.6</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In the Linux kernel before 5.1.7, a device can be tracked by an attacker using the IP ID values the kernel produces for connection-less protocols (e.g., UDP and ICMP). When such traffic is sent to multiple destination IP addresses, it is possible to obtain hash collisions (of indices to the counter array) and thereby obtain the hashing key (via enumeration). An attack may be conducted by hosting a crafted web page that uses WebRTC or gQUIC to force UDP traffic to attacker-controlled IP addresses.
<p>Publish Date: 2019-07-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-10638>CVE-2019-10638</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10638">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10638</a></p>
<p>Release Date: 2019-07-05</p>
<p>Fix Resolution: v5.1-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-10638 (Medium) detected in multiple libraries - ## CVE-2019-10638 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.6</b>, <b>linuxlinux-4.6</b>, <b>linuxlinux-4.6</b>, <b>linuxlinux-4.6</b>, <b>linuxlinux-4.6</b>, <b>linuxlinux-4.6</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In the Linux kernel before 5.1.7, a device can be tracked by an attacker using the IP ID values the kernel produces for connection-less protocols (e.g., UDP and ICMP). When such traffic is sent to multiple destination IP addresses, it is possible to obtain hash collisions (of indices to the counter array) and thereby obtain the hashing key (via enumeration). An attack may be conducted by hosting a crafted web page that uses WebRTC or gQUIC to force UDP traffic to attacker-controlled IP addresses.
<p>Publish Date: 2019-07-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-10638>CVE-2019-10638</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10638">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10638</a></p>
<p>Release Date: 2019-07-05</p>
<p>Fix Resolution: v5.1-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries linuxlinux linuxlinux linuxlinux linuxlinux linuxlinux linuxlinux vulnerability details in the linux kernel before a device can be tracked by an attacker using the ip id values the kernel produces for connection less protocols e g udp and icmp when such traffic is sent to multiple destination ip addresses it is possible to obtain hash collisions of indices to the counter array and thereby obtain the hashing key via enumeration an attack may be conducted by hosting a crafted web page that uses webrtc or gquic to force udp traffic to attacker controlled ip addresses publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
345,835 | 30,846,918,822 | IssuesEvent | 2023-08-02 14:16:44 | ita-social-projects/Space2Study-Client-mvp | https://api.github.com/repos/ita-social-projects/Space2Study-Client-mvp | closed | (SP: 1) Write unit test for "AppTextField" component | FrontEnd part Unit test | ### Component unit test
Unit test for "**AppTextField**" component
Scenario descriptions:
- [x] Should be rendered input with error message
[Link to component](https://github.com/ita-social-projects/Space2Study-Client-mvp/blob/develop/tests/unit/components/app-text-field/AppTextField.spec.jsx)
Current coverage:
<img width="611" alt="image" src="https://github.com/ita-social-projects/Space2Study-Client-mvp/assets/90138904/d5896658-2c38-475b-a9ab-addda26c1857"> | 1.0 | (SP: 1) Write unit test for "AppTextField" component - ### Component unit test
Unit test for "**AppTextField**" component
Scenario descriptions:
- [x] Should be rendered input with error message
[Link to component](https://github.com/ita-social-projects/Space2Study-Client-mvp/blob/develop/tests/unit/components/app-text-field/AppTextField.spec.jsx)
Current coverage:
<img width="611" alt="image" src="https://github.com/ita-social-projects/Space2Study-Client-mvp/assets/90138904/d5896658-2c38-475b-a9ab-addda26c1857"> | test | sp write unit test for apptextfield component component unit test unit test for apptextfield component scenaries descriptions should be rendered input with error message current coverage img width alt image src | 1 |
199,976 | 15,085,010,705 | IssuesEvent | 2021-02-05 18:00:15 | GoogleCloudPlatform/bank-of-anthos | https://api.github.com/repos/GoogleCloudPlatform/bank-of-anthos | closed | Frontend readiness probe - master cluster | priority: p2 testing type: bug | Occasionally the bank-of-anthos.xyz frontend pod fails its readiness/liveness probes, causing a `502 Server Error` when users visit the site.
```
NAME READY STATUS RESTARTS AGE
frontend-58477bf67f-vndxc 0/1 Running 0 17d
```
Need to investigate why / whether a similar issue could happen with other long-running deployments using the upstream. | 1.0 | Frontend readiness probe - master cluster - Occasionally the bank-of-anthos.xyz frontend pod fails its readiness/liveness probes, causing a `502 Server Error` when users visit the site.
```
NAME READY STATUS RESTARTS AGE
frontend-58477bf67f-vndxc 0/1 Running 0 17d
```
Need to investigate why / whether a similar issue could happen with other long-running deployments using the upstream. | test | frontend readiness probe master cluster occasionally the bank of anthos xyz frontend pod fails its readiness liveness probes causing a server error when users visit the site name ready status restarts age frontend vndxc running need to investigate why whether a similar issue could happen with other long running deployments using the upstream | 1 |
189,538 | 14,513,642,847 | IssuesEvent | 2020-12-13 05:35:31 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | opened | [Testerina] Function pointer argument support for Function mocking | Component/Testerina Team/TestFramework Type/Improvement | **Description:**
Function mocking needs to support having function pointers in the arguments.
Consider the following function to be mocked.
```
public isolated function print(string msg, *KeyValues keyValues) {
```
The current implementation does not support mocking the `keyValues` argument.
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| 2.0 | [Testerina] Function pointer argument support for Function mocking - **Description:**
Function mocking needs to support having function pointers in the arguments.
Consider the following function to be mocked.
```
public isolated function print(string msg, *KeyValues keyValues) {
```
The current implementation does not support mocking the `keyValues` argument.
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| test | function pointer argument support for function mocking description function mocking needs to support having function pointers in the arguments consider the following function to be mocked public isolated function print string msg keyvalues keyvalues the current implementation does not support mocking the keyvalues argument steps to reproduce affected versions os db other environment details and versions related issues optional suggested labels optional suggested assignees optional | 1 |
109,952 | 23,848,297,279 | IssuesEvent | 2022-09-06 15:36:35 | pwa-builder/PWABuilder | https://api.github.com/repos/pwa-builder/PWABuilder | closed | [VSCODE] Generated SW should be fully documented and explained | feature request :mailbox_with_mail: vscode documentation | ### Tell us about your feature idea
The generated service worker should be documented to teach the user what it does and why
### Do you have an implementation or a solution in mind?
The top of the sw should have a comment block with an overview of sw and what it does. It should also link out to our docs for much more detail documentation and how to modify/change it to fit the user needs
### Have you considered any alternatives?
_No response_ | 1.0 | [VSCODE] Generated SW should be fully documented and explained - ### Tell us about your feature idea
The generated service worker should be documented to teach the user what it does and why
### Do you have an implementation or a solution in mind?
The top of the sw should have a comment block with an overview of sw and what it does. It should also link out to our docs for much more detail documentation and how to modify/change it to fit the user needs
### Have you considered any alternatives?
_No response_ | non_test | generated sw should be fully documented and explained tell us about your feature idea the generated service worker should be documented to teach the user what does and why do you have an implementation or a solution in mind the top of the sw should have a comment block with an overview of sw and what it does it should also link out to our docs for much more detail documentation and how to modify change it to fit the user needs have you considered any alternatives no response | 0 |
107,365 | 9,210,709,275 | IssuesEvent | 2019-03-09 08:18:08 | rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | Compiler panic with associated constant | A-associated-items C-bug E-needstest I-ICE P-high T-compiler | The compiler panics when I use a generic associated constant from within the impl.
I was able to reproduce it with this code:
```rust
struct S<T>(T);
impl<T> S<T> {
const ID: fn(&S<T>) -> &S<T> = |s| s;
pub fn id(&self) -> &Self {
Self::ID(self)
}
}
fn main() {
let s = S(10u32);
assert!(S::<u32>::ID(&s).0 == 10); // Works fine
assert!(s.id().0 == 10); // Causes compiler to panic
}
```
## Meta
`rustc --version --verbose`:
rustc 1.32.0 (9fda7c223 2019-01-16)
binary: rustc
commit-hash: 9fda7c2237db910e41d6a712e9a2139b352e558b
commit-date: 2019-01-16
host: x86_64-unknown-linux-gnu
release: 1.32.0
LLVM version: 8.0
Backtrace:
```
thread 'main' panicked at 'assertion failed: !value.needs_subst()', src/librustc/traits/query/normalize_erasing_regions.rs:69:9
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:71
2: std::panicking::default_hook::{{closure}}
at src/libstd/sys_common/backtrace.rs:59
at src/libstd/panicking.rs:211
3: std::panicking::default_hook
at src/libstd/panicking.rs:227
4: rustc::util::common::panic_hook
5: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:495
6: std::panicking::begin_panic
7: rustc::traits::query::normalize_erasing_regions::<impl rustc::ty::context::TyCtxt<'cx, 'tcx, 'tcx>>::normalize_erasing_late_bound_regions
8: rustc::ty::instance::Instance::resolve_closure
9: rustc_mir::interpret::cast::<impl rustc_mir::interpret::eval_context::EvalContext<'a, 'mir, 'tcx, M>>::cast
10: rustc_mir::interpret::step::<impl rustc_mir::interpret::eval_context::EvalContext<'a, 'mir, 'tcx, M>>::run
11: rustc_mir::const_eval::eval_body_using_ecx
12: rustc_mir::const_eval::const_eval_raw_provider
13: rustc::ty::query::__query_compute::const_eval_raw
14: rustc::ty::query::<impl rustc::ty::query::config::QueryAccessors<'tcx> for rustc::ty::query::queries::const_eval_raw<'tcx>>::compute
15: rustc::dep_graph::graph::DepGraph::with_task_impl
16: <rustc::ty::query::plumbing::JobOwner<'a, 'tcx, Q>>::start
17: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::force_query_with_job
18: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::get_query
19: rustc::ty::query::TyCtxtAt::const_eval_raw
20: <rustc_mir::interpret::eval_context::EvalContext<'a, 'mir, 'tcx, M>>::const_eval_raw
21: rustc_mir::interpret::operand::<impl rustc_mir::interpret::eval_context::EvalContext<'a, 'mir, 'tcx, M>>::const_value_to_op
22: rustc_mir::const_eval::const_to_op
23: rustc_mir::transform::const_prop::ConstPropagator::eval_constant
24: <rustc_mir::transform::const_prop::ConstPropagator<'b, 'a, 'tcx> as rustc::mir::visit::Visitor<'tcx>>::visit_terminator_kind
25: <rustc_mir::transform::const_prop::ConstProp as rustc_mir::transform::MirPass>::run_pass
26: rustc_mir::transform::run_passes::{{closure}}
27: rustc_mir::transform::run_passes
28: rustc_mir::transform::optimized_mir
29: rustc::ty::query::<impl rustc::ty::query::config::QueryAccessors<'tcx> for rustc::ty::query::queries::optimized_mir<'tcx>>::compute
30: rustc::dep_graph::graph::DepGraph::with_task_impl
31: <rustc::ty::query::plumbing::JobOwner<'a, 'tcx, Q>>::start
32: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::force_query_with_job
33: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::try_get_query
34: rustc::ty::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::instance_mir
35: rustc_mir::monomorphize::collector::collect_items_rec
36: rustc_mir::monomorphize::collector::collect_items_rec
37: rustc_mir::monomorphize::collector::collect_crate_mono_items::{{closure}}
38: rustc::util::common::time
39: rustc_mir::monomorphize::collector::collect_crate_mono_items
40: rustc::util::common::time
41: rustc_mir::monomorphize::partitioning::collect_and_partition_mono_items
42: rustc::ty::query::<impl rustc::ty::query::config::QueryAccessors<'tcx> for rustc::ty::query::queries::collect_and_partition_mono_items<'tcx>>::compute
43: rustc::dep_graph::graph::DepGraph::with_task_impl
44: <rustc::ty::query::plumbing::JobOwner<'a, 'tcx, Q>>::start
45: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::force_query_with_job
46: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::get_query
47: rustc_codegen_ssa::base::codegen_crate
48: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_utils::codegen_backend::CodegenBackend>::codegen_crate
49: rustc::util::common::time
50: rustc_driver::driver::phase_4_codegen
51: rustc_driver::driver::compile_input::{{closure}}
52: rustc::ty::context::tls::enter_context
53: <std::thread::local::LocalKey<T>>::with
54: rustc::ty::context::TyCtxt::create_and_enter
55: rustc_driver::driver::compile_input
56: rustc_driver::run_compiler_with_pool
57: <scoped_tls::ScopedKey<T>>::set
58: rustc_driver::run_compiler
59: rustc_driver::monitor::{{closure}}
60: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:102
61: rustc_driver::run
62: rustc_driver::main
63: std::rt::lang_start::{{closure}}
64: std::panicking::try::do_call
at src/libstd/rt.rs:59
at src/libstd/panicking.rs:310
65: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:102
66: std::rt::lang_start_internal
at src/libstd/panicking.rs:289
at src/libstd/panic.rs:398
at src/libstd/rt.rs:58
67: main
68: __libc_start_main
69: <unknown>
query stack during panic:
#0 [const_eval_raw] const-evaluating `<S<T>>::ID`
--> associated_generic.rs:6:9
|
6 | Self::ID(self)
| ^^^^^^^^^^^^^^
#1 [optimized_mir] processing `<S<T>>::id`
#2 [collect_and_partition_mono_items] collect_and_partition_mono_items
end of query stack
error: internal compiler error: unexpected panic
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports
note: rustc 1.32.0 (9fda7c223 2019-01-16) running on x86_64-unknown-linux-gnu
``` | 1.0 | Compiler panic with associated constant - The compiler panics when I use a generic associated constant from within the impl.
I was able to reproduce it with this code:
```rust
struct S<T>(T);
impl<T> S<T> {
const ID: fn(&S<T>) -> &S<T> = |s| s;
pub fn id(&self) -> &Self {
Self::ID(self)
}
}
fn main() {
let s = S(10u32);
assert!(S::<u32>::ID(&s).0 == 10); // Works fine
assert!(s.id().0 == 10); // Causes compiler to panic
}
```
## Meta
`rustc --version --verbose`:
rustc 1.32.0 (9fda7c223 2019-01-16)
binary: rustc
commit-hash: 9fda7c2237db910e41d6a712e9a2139b352e558b
commit-date: 2019-01-16
host: x86_64-unknown-linux-gnu
release: 1.32.0
LLVM version: 8.0
Backtrace:
```
thread 'main' panicked at 'assertion failed: !value.needs_subst()', src/librustc/traits/query/normalize_erasing_regions.rs:69:9
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:71
2: std::panicking::default_hook::{{closure}}
at src/libstd/sys_common/backtrace.rs:59
at src/libstd/panicking.rs:211
3: std::panicking::default_hook
at src/libstd/panicking.rs:227
4: rustc::util::common::panic_hook
5: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:495
6: std::panicking::begin_panic
7: rustc::traits::query::normalize_erasing_regions::<impl rustc::ty::context::TyCtxt<'cx, 'tcx, 'tcx>>::normalize_erasing_late_bound_regions
8: rustc::ty::instance::Instance::resolve_closure
9: rustc_mir::interpret::cast::<impl rustc_mir::interpret::eval_context::EvalContext<'a, 'mir, 'tcx, M>>::cast
10: rustc_mir::interpret::step::<impl rustc_mir::interpret::eval_context::EvalContext<'a, 'mir, 'tcx, M>>::run
11: rustc_mir::const_eval::eval_body_using_ecx
12: rustc_mir::const_eval::const_eval_raw_provider
13: rustc::ty::query::__query_compute::const_eval_raw
14: rustc::ty::query::<impl rustc::ty::query::config::QueryAccessors<'tcx> for rustc::ty::query::queries::const_eval_raw<'tcx>>::compute
15: rustc::dep_graph::graph::DepGraph::with_task_impl
16: <rustc::ty::query::plumbing::JobOwner<'a, 'tcx, Q>>::start
17: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::force_query_with_job
18: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::get_query
19: rustc::ty::query::TyCtxtAt::const_eval_raw
20: <rustc_mir::interpret::eval_context::EvalContext<'a, 'mir, 'tcx, M>>::const_eval_raw
21: rustc_mir::interpret::operand::<impl rustc_mir::interpret::eval_context::EvalContext<'a, 'mir, 'tcx, M>>::const_value_to_op
22: rustc_mir::const_eval::const_to_op
23: rustc_mir::transform::const_prop::ConstPropagator::eval_constant
24: <rustc_mir::transform::const_prop::ConstPropagator<'b, 'a, 'tcx> as rustc::mir::visit::Visitor<'tcx>>::visit_terminator_kind
25: <rustc_mir::transform::const_prop::ConstProp as rustc_mir::transform::MirPass>::run_pass
26: rustc_mir::transform::run_passes::{{closure}}
27: rustc_mir::transform::run_passes
28: rustc_mir::transform::optimized_mir
29: rustc::ty::query::<impl rustc::ty::query::config::QueryAccessors<'tcx> for rustc::ty::query::queries::optimized_mir<'tcx>>::compute
30: rustc::dep_graph::graph::DepGraph::with_task_impl
31: <rustc::ty::query::plumbing::JobOwner<'a, 'tcx, Q>>::start
32: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::force_query_with_job
33: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::try_get_query
34: rustc::ty::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::instance_mir
35: rustc_mir::monomorphize::collector::collect_items_rec
36: rustc_mir::monomorphize::collector::collect_items_rec
37: rustc_mir::monomorphize::collector::collect_crate_mono_items::{{closure}}
38: rustc::util::common::time
39: rustc_mir::monomorphize::collector::collect_crate_mono_items
40: rustc::util::common::time
41: rustc_mir::monomorphize::partitioning::collect_and_partition_mono_items
42: rustc::ty::query::<impl rustc::ty::query::config::QueryAccessors<'tcx> for rustc::ty::query::queries::collect_and_partition_mono_items<'tcx>>::compute
43: rustc::dep_graph::graph::DepGraph::with_task_impl
44: <rustc::ty::query::plumbing::JobOwner<'a, 'tcx, Q>>::start
45: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::force_query_with_job
46: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::get_query
47: rustc_codegen_ssa::base::codegen_crate
48: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_utils::codegen_backend::CodegenBackend>::codegen_crate
49: rustc::util::common::time
50: rustc_driver::driver::phase_4_codegen
51: rustc_driver::driver::compile_input::{{closure}}
52: rustc::ty::context::tls::enter_context
53: <std::thread::local::LocalKey<T>>::with
54: rustc::ty::context::TyCtxt::create_and_enter
55: rustc_driver::driver::compile_input
56: rustc_driver::run_compiler_with_pool
57: <scoped_tls::ScopedKey<T>>::set
58: rustc_driver::run_compiler
59: rustc_driver::monitor::{{closure}}
60: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:102
61: rustc_driver::run
62: rustc_driver::main
63: std::rt::lang_start::{{closure}}
64: std::panicking::try::do_call
at src/libstd/rt.rs:59
at src/libstd/panicking.rs:310
65: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:102
66: std::rt::lang_start_internal
at src/libstd/panicking.rs:289
at src/libstd/panic.rs:398
at src/libstd/rt.rs:58
67: main
68: __libc_start_main
69: <unknown>
query stack during panic:
#0 [const_eval_raw] const-evaluating `<S<T>>::ID`
--> associated_generic.rs:6:9
|
6 | Self::ID(self)
| ^^^^^^^^^^^^^^
#1 [optimized_mir] processing `<S<T>>::id`
#2 [collect_and_partition_mono_items] collect_and_partition_mono_items
end of query stack
error: internal compiler error: unexpected panic
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports
note: rustc 1.32.0 (9fda7c223 2019-01-16) running on x86_64-unknown-linux-gnu
``` | test | 1
69,246 | 7,127,276,161 | IssuesEvent | 2018-01-20 19:50:25 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | server: TestNodeJoin failed under stress | Robot test-failure | SHA: https://github.com/cockroachdb/cockroach/commits/62fae528ceec1f4c3d3470f2a1e2bb9fad3b5fd8
Parameters:
```
TAGS=
GOFLAGS=
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=357114&tab=buildLog
```
E170921 06:37:32.587741 35141 storage/queue.go:656 [replicate,s1,r1/1:/{Min-System/}] range requires a replication change, but lacks a quorum of live replicas (0/1)
E170921 06:37:32.604595 35141 storage/queue.go:656 [replicate,s1,r2/1:/System/{-NodeLive…}] range requires a replication change, but lacks a quorum of live replicas (0/1)
E170921 06:37:32.613026 35141 storage/queue.go:656 [replicate,s1,r3/1:/System/NodeLiveness{-Max}] range requires a replication change, but lacks a quorum of live replicas (0/1)
E170921 06:37:32.645866 35141 storage/queue.go:656 [replicate,s1,r4/1:/System/{NodeLive…-tsd}] range requires a replication change, but lacks a quorum of live replicas (0/1)
E170921 06:37:32.658839 35141 storage/queue.go:656 [replicate,s1,r5/1:/System/ts{d-e}] range requires a replication change, but lacks a quorum of live replicas (0/1)
E170921 06:37:32.671830 35141 storage/queue.go:656 [replicate,s1,r6/1:/{System/tse-Table/System…}] range requires a replication change, but lacks a quorum of live replicas (0/1)
E170921 06:37:32.452912 35140 storage/queue.go:656 [split,s1,r7/1:/{Table/System…-Max}] unable to split [n1,s1,r7/1:/{Table/System…-Max}] at key "/Table/11": split at key /Table/11 failed: storage/store.go:2454: rejecting command with timestamp in the future: 1505975852674831651 (222.285898ms ahead)
node_test.go:163: had 7 ranges at startup, expected 11
``` | 1.0 | test | 1
238,539 | 7,780,575,116 | IssuesEvent | 2018-06-05 20:30:08 | googleapis/nodejs-common-grpc | https://api.github.com/repos/googleapis/nodejs-common-grpc | closed | 404 Not Found: protobufjs@https://registry.npmjs.org/protobufjs/-/protobufjs-5.0.3.tgz | priority: p1 status: blocked | Thanks for stopping by to let us know something could be better!
Please run down the following list and make sure you've tried the usual "quick
fixes":
- Search the issues already opened: https://github.com/googleapis/nodejs-common-grpc/issues
- Search StackOverflow: http://stackoverflow.com/questions/tagged/google-cloud-platform+node.js
- Check our Troubleshooting guide: https://googlecloudplatform.github.io/google-cloud-node/#/docs/guides/troubleshooting
- Check our FAQ: https://googlecloudplatform.github.io/google-cloud-node/#/docs/guides/faq
If you are still having issues, please be sure to include as much information as
possible:
#### Environment details
- OS: Ubuntu 16.04
- Node.js version: 9.x
- npm version: 5.x
- @google-cloud/common-grpc version: 0.6.1
#### Steps to reproduce
1. 404 Not Found: protobufjs@https://registry.npmjs.org/protobufjs/-/protobufjs-5.0.3.tgz
Following these steps will guarantee the quickest resolution possible.
Thanks!
| 1.0 | non_test | 0
67,178 | 12,886,251,815 | IssuesEvent | 2020-07-13 09:12:01 | microsoft/azure-tools-for-java | https://api.github.com/repos/microsoft/azure-tools-for-java | closed | Telemetry consistent for azure toolkit | Code Refactor Internal Test internal | This issue is about telemetry consistency; we will use the same schema as VS Code.
The full event name = ${platformName} + "/" + {eventtype}
- platformName: {"AzurePlugin.Eclipse", "AzurePlugin.Intellij"}
- eventtype: {opStart, opEnd, step, info, warn, error}
Example: AzurePlugin.Eclipse/opEnd, AzurePlugin.Intellij/info
We also define the service name and operation name.
The service name and operation name will be put in the event properties.
The typical service name is: webapp, docker, hdinsight....
The typical operation name is: create-webapp...
| 1.0 | non_test | 0
220,861 | 17,265,243,059 | IssuesEvent | 2021-07-22 13:05:29 | Kong/kubernetes-ingress-controller | https://api.github.com/repos/Kong/kubernetes-ingress-controller | closed | Test KIC against multiple k8s versions | area/ci area/testing | Problem statement: We need to commit to some quality level for versions of k8s that we consider "supported".
Proposed solution: Define a "supported" major.minor k8s version to be one for which at least one patch version is covered by the automated integration tests. We can leverage KTF support for environments for that purpose.
Separated out from #1095
Acceptance criteria:
- [ ] For an array of `major.minor.patch` or `major.minor.<latest-patch>` versions, CI tests run against each | 1.0 | test | 1
161,802 | 12,566,603,498 | IssuesEvent | 2020-06-08 11:31:25 | sugarlabs/musicblocks | https://api.github.com/repos/sugarlabs/musicblocks | reopened | add staff display to grid cycle | Issue-Enhancement WF6-Needs testing | In addition to coordinate grids, it would be nice to display a staff.
- [ ] blank staff
- [x] treble clef
- [x] grand clef
- [x] mezzo-soprano clef
- [x] alto clef
- [x] tenor clef
- [x] bass staff
@pikurasa what do you recommend for the labels for these options? | 1.0 | test | 1
558,292 | 16,528,950,723 | IssuesEvent | 2021-05-27 01:32:33 | eclipse-ee4j/glassfish | https://api.github.com/repos/eclipse-ee4j/glassfish | closed | gf-client-module.jar isn't an OSGi module but it is in glassfish/modules | Component: standalone_client Priority: Major Stale Type: Bug devx_web | Every JAR file in the glassfish/modules directory adds to the overhead of server startup. So JAR files that are not OSGi modules should not be in that directory.
This issue is for moving gf-client-module.jar out of glassfish/modules, possibly to the glassfish/lib/appclient directory.
It is unclear how much of a performance gain would result from this. Since this bundle is in the "Installed" state, it isn't actually loaded into the server. So this is a low priority item as far as startup performance time improvement goes.
#### Affected Versions
[4.0_dev] | 1.0 | non_test | 0
187,531 | 14,428,231,975 | IssuesEvent | 2020-12-06 08:45:40 | kalexmills/github-vet-tests-dec2020 | https://api.github.com/repos/kalexmills/github-vet-tests-dec2020 | closed | PaulForgey/go-old: src/go/internal/gccgoimporter/gccgoinstallation_test.go; 10 LoC | fresh test tiny |
Found a possible issue in [PaulForgey/go-old](https://www.github.com/PaulForgey/go-old) at [src/go/internal/gccgoimporter/gccgoinstallation_test.go](https://github.com/PaulForgey/go-old/blob/24343cb88640ae1e7dbfc4ec2f3ae81fc0aa07c7/src/go/internal/gccgoimporter/gccgoinstallation_test.go#L182-L191)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to test at line 190 may start a goroutine
[Click here to see the code in its original context.](https://github.com/PaulForgey/go-old/blob/24343cb88640ae1e7dbfc4ec2f3ae81fc0aa07c7/src/go/internal/gccgoimporter/gccgoinstallation_test.go#L182-L191)
<details>
<summary>Click here to show the 10 line(s) of Go which triggered the analyzer.</summary>
```go
for _, test := range [...]importerTest{
{pkgpath: "io", name: "Reader", want: "type Reader interface{Read(p []byte) (n int, err error)}"},
{pkgpath: "io", name: "ReadWriter", want: "type ReadWriter interface{Reader; Writer}"},
{pkgpath: "math", name: "Pi", want: "const Pi untyped float"},
{pkgpath: "math", name: "Sin", want: "func Sin(x float64) float64"},
{pkgpath: "sort", name: "Ints", want: "func Ints(a []int)"},
{pkgpath: "unsafe", name: "Pointer", want: "type Pointer"},
} {
runImporterTest(t, imp, nil, &test)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 24343cb88640ae1e7dbfc4ec2f3ae81fc0aa07c7
| 1.0 | test | 1
430,504 | 30,188,166,229 | IssuesEvent | 2023-07-04 13:30:57 | yannickferenczi/hands-home-helpers | https://api.github.com/repos/yannickferenczi/hands-home-helpers | closed | USER STORY: fulfill the Structure Plane of the README | MUST HAVE documentation | As a **Stakeholder**, I can **read the Structure Plane subsection of the README file** so that **I know how to organize the pages and interactions of the website**.
---
ACCEPTANCE CRITERIA
- [x] A Structure Plane subsection has been created
- [x] It contains relevant information about interaction design and information architecture
---
TASKS
- [x] Fulfill the Structure Plane subsection of the README file
| 1.0 | non_test | 0
212,332 | 16,442,176,603 | IssuesEvent | 2021-05-20 15:27:09 | eclipse-openj9/openj9 | https://api.github.com/repos/eclipse-openj9/openj9 | opened | jtreg java/util/zip/DeInflate.java inflate failures, length mismatch | test failure | https://ci.eclipse.org/openj9/job/Test_openjdk11_j9_sanity.openjdk_s390x_linux_OpenJDK11/40/
java/util/zip/DeInflate.java
Fails on Ubuntu 20 machines ub16-390-1 and 2, passed on a ub18 machine. The failure is not related to the new OpenJDK level.
```
06:20:19 m=525312, n=498076, len=524288, eq=false
06:20:19 STDERR:
06:20:19 java.lang.RuntimeException: De/inflater failed:java.util.zip.Deflater@4b610026
06:20:19 at DeInflate.check(DeInflate.java:141)
06:20:19 at DeInflate.main(DeInflate.java:290)
06:31:28 m=525312, n=498093, len=524288, eq=false
06:31:28 STDERR:
06:31:28 java.lang.RuntimeException: De/inflater failed:java.util.zip.Deflater@5c1f2389
06:31:28 at DeInflate.check(DeInflate.java:141)
06:31:28 at DeInflate.main(DeInflate.java:290)
``` | 1.0 | test | 1
300,113 | 25,945,916,545 | IssuesEvent | 2022-12-17 00:57:54 | backend-br/vagas | https://api.github.com/repos/backend-br/vagas | closed | Pessoa Desenvolvedora Fullstack Sênior - PHP e JS (Remoto) @ Feedz | CLT Sênior PHP JavaScript Remoto Presencial MySQL Testes automatizados Docker Stale | 🚀 ABOUT FEEDZ:
We are an HR tech company and our purpose is to create happier workplaces, all through a complete, humanized people-management software! That is why starting with our own nest is so important!
We grew more than 400% in 2021 and kept evolving in 2022, all in a lively atmosphere, with plenty of growth and respect for everyone! (If you want to get to know our nest better, fly over here: https://www.feedz.com.br/vagas/) 💙 Want more? We are fully remote, so you can be part of our nest from the comfort of your home!!
We are looking for a Senior Fullstack Developer (PHP and JS) to join our Product team, working in a multidisciplinary squad (5 to 8 members) directly with our product manager in a high-performance team, building systems that directly impact more than 100 thousand users.
Day to day you will work closely with SlimPHP, Eloquent, PHPUnit, Twig, Vue.js, MySQL, Docker, among others. We run agile projects, always valuing communication and collaboration among team members.
Here at Feedz we deeply value diversity and we want Parrots who wish to fly with us, regardless of ethnicity, gender, sexuality, nationality, age or disability! Want to grow with us? Come to the nest 💛
📚 RESPONSIBILITIES AND ACTIVITIES:
Backend development with PHP (with knowledge of the Laravel framework);
Frontend development with CSS, HTML and JS (with knowledge of Vue.js, applying our design system)
Apply good practices agreed with the team during development (writing coherent code aligned with Clean Code; developing with a focus on security, patterns, automation, etc.);
Develop new modules for the platform;
Develop and apply automated tests;
Manage demand information, keeping project statuses up to date.
🥇 REQUIREMENTS:
At least 7 years of experience in web development with digital products (preferably SaaS);
At least 3 years of experience with PHP;
At least 1 year of experience with JavaScript;
Experience with high-data-volume systems;
Experience with unit or integration tests;
Currently working with the technologies in our stack;
Experience with relational databases (we currently use MySQL 8);
Knowledge of good coding practices and experience with agile technology teams!
🎯 NICE TO HAVE:
Knowledge of Slim, Laravel or Vue.js.
✨ BENEFITS:
100% home office (we are not going back to in-person work!!)
Meal allowance (R$700)
Unimed health plan (subsidized by Feedz)
Bradesco Dental nationwide dental plan (subsidized by Feedz)
Happy Parrot allowance (R$100/month to invest in the Parrot's personal development or physical or mental health)
Utility-bill allowance (R$160/month)
Wedding Parrot allowance (R$250 when Parrots get married)
Hatchling allowance (R$250 on the birth or adoption of little Parrots)
Happy Bird(day) (a day off in your birthday month)
Extended paternity leave (21 days)
Extended maternity leave (6 months)
iFood Office (voucher to use in the app at some internal events)
So your flight is comfortable, Feedz offers a R$1,000 allowance for materials, such as a desk and chair, for your home office, and we ship all the electronic equipment you need!
💰 SALARY R$7,500 TO R$9,500 (CLT)*
*Depending on length of experience | 40 hours per week
📅 STAGES OF THE PROCESS:
For this position, the evaluation stages are:
Resume screening;
Technical chat;
Culture-fit chat;
Hiring!
To better understand each stage and how our selection process works, see our candidate playbook: https://bit.ly/3EWnroW
Candidate-se em: https://enliztjob.app.link/9oAwIGlnBub
221,067 | 17,288,538,759 | IssuesEvent | 2021-07-24 08:01:33 | WoWManiaUK/Blackwing-Lair | https://api.github.com/repos/WoWManiaUK/Blackwing-Lair | closed | [Battleground] Looting/skinning dead players | Fixed in Dev - Confirmed by tester | ### **Looting/skinning dead players**
Checked so far (❌ do crash / ✔️ dont crash)
1. Bloodelf males skinning human females ✔️
2. Bloodelf females skinning human females ✔️
3. human females skinning Bloodelf female ❌
4. Bloodelf males skinning Human males ❌
5. Worgen female skinning bloodelf male ✔️
6. bloodelf male skinning Worgen female ✔️
7. Worgen female skinning Bloodelf female ✔️
8. Nightelf Female skinning a Bloodelf female ❌
9. Bloodelf female skinning a Nightelf female ❌
10. Gnome Female skinning Bloodelf female ❌
11. Bloodelf female skinning Gnome Female ❌
**Note:**
For now I think it's best to disable the skinning of dead players for this week's live update,
then once the other updates have gone live we can enable it again and continue testing until this issue is fixed, after a full test of all races/genders, as this may take a while to complete.
And as we are removing Alterac Valley from the RBG queue for now, this function isn't actually needed right at this moment for the AV quest items https://cata-twinhead.twinstar.cz/?search=Armor+Scraps#items / https://cata-twinhead.twinstar.cz/?search=Armor+Scraps#quests
26,874 | 11,410,704,696 | IssuesEvent | 2020-02-01 00:31:17 | emilwareus/generator-jhipster | https://api.github.com/repos/emilwareus/generator-jhipster | opened | CVE-2019-15657 (High) detected in eslint-utils-1.3.1.tgz | security vulnerability | ## CVE-2019-15657 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>eslint-utils-1.3.1.tgz</b></p></summary>
<p>Utilities for ESLint plugins.</p>
<p>Library home page: <a href="https://registry.npmjs.org/eslint-utils/-/eslint-utils-1.3.1.tgz">https://registry.npmjs.org/eslint-utils/-/eslint-utils-1.3.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/generator-jhipster/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/generator-jhipster/node_modules/eslint-utils/package.json</p>
<p>
Dependency Hierarchy:
- eslint-5.14.1.tgz (Root Library)
- :x: **eslint-utils-1.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/emilwareus/generator-jhipster/commit/06704da71c2ceab47b6ffbdbd9a47cc75f234f08">06704da71c2ceab47b6ffbdbd9a47cc75f234f08</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In eslint-utils before 1.4.1, the getStaticValue function can execute arbitrary code.
<p>Publish Date: 2019-08-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-15657>CVE-2019-15657</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15657">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15657</a></p>
<p>Release Date: 2019-08-26</p>
<p>Fix Resolution: 1.4.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
45,933 | 5,766,079,245 | IssuesEvent | 2017-04-27 05:41:22 | reactioncommerce/reaction | https://api.github.com/repos/reactioncommerce/reaction | closed | Create a helper method to grab element IDs | backlog testing wontfix | Currently the acceptance tests are using a lot of xpath. Since xpath is super fragile at our development pace I would like to find other means for capturing elements more dynamically.
Perhaps I can find a way to use regex to grep the page for element IDs and use those. While ridding the element map of all XPath might not be plausible, I would like to remove as many as I can.
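The grep-for-IDs idea could be sketched roughly as below. (This is my own illustration, not code from the project: the Reaction test suite is JavaScript, Rust is used here only to show the scanning logic, and `collect_element_ids` is a hypothetical helper name.)

```rust
// Naive illustration of "grep the page for element IDs": scan raw HTML for
// id="..." attributes and collect the values. A real helper would use a
// proper parser or regex and filter out dynamically generated IDs.
fn collect_element_ids(html: &str) -> Vec<String> {
    let mut ids = Vec::new();
    let mut rest = html;
    // Repeatedly find the next id="..." attribute and capture its value.
    while let Some(pos) = rest.find("id=\"") {
        let after = &rest[pos + 4..]; // skip past `id="`
        match after.find('"') {
            Some(end) => {
                ids.push(after[..end].to_string());
                rest = &after[end + 1..];
            }
            None => break, // unterminated attribute; stop scanning
        }
    }
    ids
}

fn main() {
    let page = r#"<div id="cart"><button id="checkout-btn">Buy</button></div>"#;
    // Collects the IDs in document order.
    assert_eq!(collect_element_ids(page), vec!["cart", "checkout-btn"]);
    println!("{:?}", collect_element_ids(page));
}
```

Once such a helper runs against each page under test, the captured IDs can replace most XPath selectors in the element map.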
2,505 | 5,238,993,129 | IssuesEvent | 2017-01-31 08:08:21 | vimperator/vimperator-labs | https://api.github.com/repos/vimperator/vimperator-labs | closed | Vimperator don't work in Firefox Beta. Not e10s. | compatibility | ##### Issue type:
- Version compatibility
##### Version:
```
Vimperator 3.15, Nightly 51.0a2, e10s disabled.
```
##### Description:
Mappings and the command line don't work.
The browser console shows `ReferenceError: tabs is not defined`.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Errors/Not_defined
290,211 | 25,042,901,612 | IssuesEvent | 2022-11-04 23:38:13 | lowRISC/opentitan | https://api.github.com/repos/lowRISC/opentitan | closed | [test-triage] chip_csr_bit_bash | Component:TestTriage | ### Hierarchy of regression failure
Chip Level
### Failure Description
Job chip_earlgrey_asic-sim-vcs_run_cover_reg_top killed due to: Exit reason: User job exceeded runlimit: User job timed out has 1 failures:
Test chip_csr_bit_bash has 1 failures.
2.chip_csr_bit_bash.680983478
Log /container/opentitan-public/scratch/os_regression/chip_earlgrey_asic-sim-vcs/2.chip_csr_bit_bash/latest/run.log
Job ID: smart:dca360fb-4df9-4345-8c1d-40060dbab0a5
### Steps to Reproduce
- GitHub Revision: [53874025b](https://github.com/lowrisc/opentitan/tree/53874025b840788d5e66ffddae8fe7840a9cf4df)
- dvsim invocation command to reproduce the failure, inclusive of build and run seeds:
./util/dvsim/dvsim.py hw/top_earlgrey/dv/chip_sim_cfg.hjson -i chip_csr_bit_bash --fixed-seed 680983478 --build-seed 137638316 --waves -v h
### Tests with similar or related failures
_No response_
310,044 | 26,696,237,950 | IssuesEvent | 2023-01-27 10:35:16 | rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | Invalid error message for E0277 | C-enhancement A-diagnostics A-traits E-needs-test T-compiler | When trying to use const-fn at trait signature, e.g via function return array of fixed size:
```rust
trait T {
fn f() -> [u8; std::mem::size_of::<Self>()];
}
```
Compiler seem to produce in some terms incorrect error:
```
error[E0277]: the size for values of type `Self` cannot be known at compilation time
--> src/main.rs:3:20
|
3 | fn f() -> [u8; std::mem::size_of::<Self>()]
| ^^^^^^^^^^^^^^^^^^^^^^^^^ doesn't have a size known at compile-time
|
= help: the trait `std::marker::Sized` is not implemented for `Self`
= note: to learn more, visit <https://doc.rust-lang.org/book/second-edition/ch19-04-advanced-types.html#dynamically-sized-types-and-the-sized-trait>
= help: consider adding a `where Self: std::marker::Sized` bound
= note: required by `std::mem::size_of`
error: aborting due to previous error
```
And in fact, adding the trait bound [won't help](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=3a8730055aae42a36349579e278bdb1e) and leads to the same error. The message should probably say something like "const generics are not supported here", or something more precise.
164,329 | 12,798,026,973 | IssuesEvent | 2020-07-02 13:18:29 | swagger-api/swagger-js | https://api.github.com/repos/swagger-api/swagger-js | closed | Outdated build targets in .travis.yml | cat: testing type: enhancement | I would like to propose an update to the Node.js versions used.
Here is what Travis supports:
- `node`: latest stable Node.js release
- `iojs`: latest stable io.js release
- `lts/*`: latest LTS Node.js release
- `8`: latest 8.x release
- `7`: latest 7.x release
- `6`: latest 6.x release
- `5`: latest 5.x release
- `4`: latest 4.x release
With very exact versions (node 4.7, say) Travis might silently skip to another minor version anyway. So all in all I would love to see more builds for more systems, especially since my whole toolchain is more "modern Node.js" than what is used here.
Can you update this to a bunch more build targets please?
262,738 | 22,954,705,122 | IssuesEvent | 2022-07-19 10:28:28 | wazuh/wazuh | https://api.github.com/repos/wazuh/wazuh | closed | Release 4.3.6 - Revision 1 - Release Candidate RC1 - Footprint Metrics - ALL-EXCEPT-ROOTCHECK (4h) | release test/4.3.6 | ## Footprint metrics information
| | |
|---------------------------------|--------------------------------------------|
| **Main release candidate issue #** | #14188 |
| **Main footprint metrics issue #** | #14274 |
| **Version** | 4.3.6 |
| **Release candidate #** | RC1 |
| **Tag** | https://github.com/wazuh/wazuh/tree/4.3.6-rc1 |
## Stress test documentation
### Packages used
- Repository: `packages-dev.wazuh.com`
- Package path: `pre-release`
- Package revision: `1`
- **Jenkins build**: https://ci.wazuh.info/job/Test_stress/3445/
---
<details><summary>Manager</summary>
+ <details><summary>Plots</summary>
















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3445_manager_2022-07-18.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_manager_centos/logs/ossec_Test_stress_B3445_manager_2022-07-18.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-manager-Test_stress_B3445_manager-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_manager_centos/data/monitor-manager-Test_stress_B3445_manager-pre-release.csv)
[Test_stress_B3445_manager_analysisd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_manager_centos/data/Test_stress_B3445_manager_analysisd_state.csv)
[Test_stress_B3445_manager_remoted_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_manager_centos/data/Test_stress_B3445_manager_remoted_state.csv)
</details>
</details>
<details><summary>Centos agent</summary>
+ <details><summary>Plots</summary>

















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3445_centos_2022-07-18.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_agent_centos/logs/ossec_Test_stress_B3445_centos_2022-07-18.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-agent-Test_stress_B3445_centos-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_agent_centos/data/monitor-agent-Test_stress_B3445_centos-pre-release.csv)
[Test_stress_B3445_centos_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_agent_centos/data/Test_stress_B3445_centos_agentd_state.csv)
</details>
</details>
<details><summary>Ubuntu agent</summary>
+ <details><summary>Plots</summary>

















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3445_ubuntu_2022-07-18.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_agent_ubuntu/logs/ossec_Test_stress_B3445_ubuntu_2022-07-18.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-agent-Test_stress_B3445_ubuntu-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_agent_ubuntu/data/monitor-agent-Test_stress_B3445_ubuntu-pre-release.csv)
[Test_stress_B3445_ubuntu_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_agent_ubuntu/data/Test_stress_B3445_ubuntu_agentd_state.csv)
</details>
</details>
<details><summary>Windows agent</summary>
+ <details><summary>Plots</summary>















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3445_windows_2022-07-18.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_agent_windows/logs/ossec_Test_stress_B3445_windows_2022-07-18.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-winagent-Test_stress_B3445_windows-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_agent_windows/data/monitor-winagent-Test_stress_B3445_windows-pre-release.csv)
[Test_stress_B3445_windows_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_agent_windows/data/Test_stress_B3445_windows_agentd_state.csv)
</details>
</details>
<details><summary>macOS agent</summary>
+ <details><summary>Plots</summary>
</details>
+ <details><summary>Logs and configuration</summary>
</details>
+ <details><summary>CSV</summary>
</details>
</details>
<details><summary>Solaris agent</summary>
+ <details><summary>Plots</summary>
</details>
+ <details><summary>Logs and configuration</summary>
</details>
+ <details><summary>CSV</summary>
</details>
</details>
[ossec_Test_stress_B3445_ubuntu_2022-07-18.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_agent_ubuntu/logs/ossec_Test_stress_B3445_ubuntu_2022-07-18.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-agent-Test_stress_B3445_ubuntu-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_agent_ubuntu/data/monitor-agent-Test_stress_B3445_ubuntu-pre-release.csv)
[Test_stress_B3445_ubuntu_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_agent_ubuntu/data/Test_stress_B3445_ubuntu_agentd_state.csv)
</details>
</details>
<details><summary>Windows agent</summary>
+ <details><summary>Plots</summary>















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3445_windows_2022-07-18.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_agent_windows/logs/ossec_Test_stress_B3445_windows_2022-07-18.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-winagent-Test_stress_B3445_windows-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_agent_windows/data/monitor-winagent-Test_stress_B3445_windows-pre-release.csv)
[Test_stress_B3445_windows_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3445-240m/B3445_agent_windows/data/Test_stress_B3445_windows_agentd_state.csv)
</details>
</details>
<details><summary>macOS agent</summary>
+ <details><summary>Plots</summary>
</details>
+ <details><summary>Logs and configuration</summary>
</details>
+ <details><summary>CSV</summary>
</details>
</details>
<details><summary>Solaris agent</summary>
+ <details><summary>Plots</summary>
</details>
+ <details><summary>Logs and configuration</summary>
</details>
+ <details><summary>CSV</summary>
</details>
</details> | test | release revision release candidate footprint metrics all except rootcheck footprint metrics information main release candidate issue main footprint metrics issue version release candidate tag stress test documentation packages used repository packages dev wazuh com package path pre release package revision jenkins build manager plots logs and configuration csv centos agent plots logs and configuration csv ubuntu agent plots logs and configuration csv windows agent plots logs and configuration csv macos agent plots logs and configuration csv solaris agent plots logs and configuration csv | 1 |
91,689 | 8,316,950,050 | IssuesEvent | 2018-09-25 10:31:33 | humera987/FXLabs-Test-Automation | https://api.github.com/repos/humera987/FXLabs-Test-Automation | opened | testing 01 : ApiV1SkillsGetQueryParamNegativeNumberPage | testing 01 | Project : testing 01
Job : UAT Env
Env : UAT Env
Region : US_WEST_2
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Tue, 25 Sep 2018 10:31:31 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/skills?page=-1
Request :
Response :
{
"timestamp" : "2018-09-25T10:31:32.685+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/skills"
}
Logs :
Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]Assertion [@StatusCode != 500] resolved-to [404 != 500] result [Passed]Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 200] resolved-to [404 != 200] result [Passed]
--- FX Bot --- | 1.0 | testing 01 : ApiV1SkillsGetQueryParamNegativeNumberPage - Project : testing 01
Job : UAT Env
Env : UAT Env
Region : US_WEST_2
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Tue, 25 Sep 2018 10:31:31 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/skills?page=-1
Request :
Response :
{
"timestamp" : "2018-09-25T10:31:32.685+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/skills"
}
Logs :
Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]Assertion [@StatusCode != 500] resolved-to [404 != 500] result [Passed]Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 200] resolved-to [404 != 200] result [Passed]
--- FX Bot --- | test | testing project testing job uat env env uat env region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api skills logs assertion resolved to result assertion resolved to result assertion resolved to result assertion resolved to result fx bot | 1 |
1,679 | 2,603,969,427 | IssuesEvent | 2015-02-24 18:59:53 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | 沈阳龟头疱疹怎么办 | auto-migrated Priority-Medium Type-Defect | ```
沈阳龟头疱疹怎么办〓沈陽軍區政治部醫院性病〓TEL:024-31023308〓成立于1946年,68年專注于性傳播疾病的研究和治療。位於沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的歷史悠久、設備精良、技術權威、專家云集,是預防、保健、醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東北大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:22 | 1.0 | 沈阳龟头疱疹怎么办 - ```
沈阳龟头疱疹怎么办〓沈陽軍區政治部醫院性病〓TEL:024-31023308〓成立于1946年,68年專注于性傳播疾病的研究和治療。位於沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的歷史悠久、設備精良、技術權威、專家云集,是預防、保健、醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東北大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:22 | non_test | 沈阳龟头疱疹怎么办 沈阳龟头疱疹怎么办〓沈陽軍區政治部醫院性病〓tel: 〓 , 。位� �� 。是一所與新中國同建立共輝煌的� ��史悠久、設備精良、技術權威、專家云集,是預防、保健、 醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等�� �隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東� ��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍 后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二�� �功。 original issue reported on code google com by gmail com on jun at | 0 |
27,999 | 4,349,769,243 | IssuesEvent | 2016-07-30 19:59:22 | MohammadYounes/AlertifyJS | https://api.github.com/repos/MohammadYounes/AlertifyJS | closed | Uncaught TypeError | cannot reproduce needs test case | Cannot read property 'glossary' of undefined(anonymous function)
` alertify.defaults.glossary.title = 'أليرتفاي جي اس';
alertify.defaults.glossary.ok = 'موافق';
alertify.defaults.glossary.cancel = 'إلغاء';`
when trying to override defaults settings . | 1.0 | Uncaught TypeError - Cannot read property 'glossary' of undefined(anonymous function)
` alertify.defaults.glossary.title = 'أليرتفاي جي اس';
alertify.defaults.glossary.ok = 'موافق';
alertify.defaults.glossary.cancel = 'إلغاء';`
when trying to override defaults settings . | test | uncaught typeerror cannot read property glossary of undefined anonymous function alertify defaults glossary title أليرتفاي جي اس alertify defaults glossary ok موافق alertify defaults glossary cancel إلغاء when trying to override defaults settings | 1 |
638,088 | 20,712,547,782 | IssuesEvent | 2022-03-12 05:21:48 | AY2122S2-CS2113T-T10-1/tp | https://api.github.com/repos/AY2122S2-CS2113T-T10-1/tp | closed | > prompt does not have a space after it | priority.Low backend.ui | Prompt currently shows as:
\>input
When it should be
\> input
Let's add a space to the > string to fix this. | 1.0 | > prompt does not have a space after it - Prompt currently shows as:
\>input
When it should be
\> input
Let's add a space to the > string to fix this. | non_test | prompt does not have a space after it prompt currently shows as input when it should be input let s add a space to the string to fix this | 0 |
113,501 | 9,651,590,369 | IssuesEvent | 2019-05-18 09:19:41 | andrewschultz/stale-tales-slate | https://api.github.com/repos/andrewschultz/stale-tales-slate | opened | merge "block X is not listed in any rulebook" in STS Common.txt | Roiling Shuffling architectural invalid/unplaytestable | (gq block listed)
Then run a check (verb) search in each main code branch. | 1.0 | merge "block X is not listed in any rulebook" in STS Common.txt - (gq block listed)
Then run a check (verb) search in each main code branch. | test | merge block x is not listed in any rulebook in sts common txt gq block listed then run a check verb search in each main code branch | 1 |
160,000 | 25,093,594,375 | IssuesEvent | 2022-11-08 08:34:23 | MetaMask/metamask-extension | https://api.github.com/repos/MetaMask/metamask-extension | closed | Update the address component | type-enhancement type-security area-addressBook design-system transaction safety team | ## Background
This component is used on the "to" address when you go to activity and click on a previous transaction (screenshot below). It might be used in other places too (The assigned developer should check in the code directly). We need to update the current logic for this component as described below.

## Acceptance Criteria
1. If the address exists in the user account list, then the account name is displayed (this should be the first thing checked, if it does exist in user account list we don't need to check the rest);
2. If the address exists in the user address book (contacts), then their nickname is displayed (this should be already existing logic for this component - this should be second thing checked);
3. If the address exists in the contract metadata list, then the contract's name is displayed (third thing checked);
4. If the address doesn't exist in address book or contract metadata, then the address is displayed;
5. Clicking on this component open the 'add/edit nickname' popup (1st screenshot below) or this tooltip (2nd screenshot below) if it's one of the user's accounts.


## Steps to Reproduce
1. Send a transaction;
2. Go to activity;
3. Click on the transaction to see the details;
4. Notice the component displayed for the address in that page. | 1.0 | Update the address component - ## Background
This component is used on the "to" address when you go to activity and click on a previous transaction (screenshot below). It might be used in other places too (The assigned developer should check in the code directly). We need to update the current logic for this component as described below.

## Acceptance Criteria
1. If the address exists in the user account list, then the account name is displayed (this should be the first thing checked, if it does exist in user account list we don't need to check the rest);
2. If the address exists in the user address book (contacts), then their nickname is displayed (this should be already existing logic for this component - this should be second thing checked);
3. If the address exists in the contract metadata list, then the contract's name is displayed (third thing checked);
4. If the address doesn't exist in address book or contract metadata, then the address is displayed;
5. Clicking on this component open the 'add/edit nickname' popup (1st screenshot below) or this tooltip (2nd screenshot below) if it's one of the user's accounts.


## Steps to Reproduce
1. Send a transaction;
2. Go to activity;
3. Click on the transaction to see the details;
4. Notice the component displayed for the address in that page. | non_test | update the address component background this component is used on the to address when you go to activity and click on a previous transaction screenshot below it might be used in other places too the assigned developer should check in the code directly we need to update the current logic for this component as described below acceptance criteria if the address exists in the user account list then the account name is displayed this should be the first thing checked if it does exist in user account list we don t need to check the rest if the address exists in the user address book contacts then their nickname is displayed this should be already existing logic for this component this should be second thing checked if the address exists in the contract metadata list then the contract s name is displayed third thing checked if the address doesn t exist in address book or contract metadata then the address is displayed clicking on this component open the add edit nickname popup screenshot below or this tooltip screenshot below if it s one of the user s accounts steps to reproduce send a transaction go to activity click on the transaction to see the details notice the component displayed for the address in that page | 0 |
34,513 | 4,931,477,294 | IssuesEvent | 2016-11-28 10:19:28 | TheScienceMuseum/collectionsonline | https://api.github.com/repos/TheScienceMuseum/collectionsonline | closed | Rights field is appearing as undefined | Awaiting Images bug please-test priority-3 | /objects/smgc-objects-8363471
The rights field is appearing as undefined?
Assume this is where it is being pulled from:
<img width="511" alt="screen shot 2016-10-06 at 13 16 18" src="https://cloud.githubusercontent.com/assets/91365/19151887/227fed58-8bc7-11e6-8206-ecc2a4bcc228.png">
| 1.0 | Rights field is appearing as undefined - /objects/smgc-objects-8363471
The rights field is appearing as undefined?
Assume this is where it is being pulled from:
<img width="511" alt="screen shot 2016-10-06 at 13 16 18" src="https://cloud.githubusercontent.com/assets/91365/19151887/227fed58-8bc7-11e6-8206-ecc2a4bcc228.png">
| test | rights field is appearing as undefined objects smgc objects the rights field is appearing as undefined assume this is where it is being pulled from img width alt screen shot at src | 1 |
354,386 | 10,566,631,970 | IssuesEvent | 2019-10-05 20:13:32 | AY1920S1-CS2113-T14-1/main | https://api.github.com/repos/AY1920S1-CS2113-T14-1/main | opened | Create a window to display patients | component.UI priority.High type.Task | **UI elements**
- Scrollable list window
- Panel to store patient's information | 1.0 | Create a window to display patients - **UI elements**
- Scrollable list window
- Panel to store patient's information | non_test | create a window to display patients ui elements scrollable list window panel to store patient s information | 0 |
39,418 | 5,233,254,055 | IssuesEvent | 2017-01-30 12:16:47 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | test-cmd-util.sh: `kubectl delete deployment nginx` hangs | area/test area/workload-api/deployment priority/failing-test sig/cli | test-cmd-util.sh tests have been failing without blocking the merge queue for several weeks (https://github.com/kubernetes/kubernetes/issues/39168)
while re-enabling the tests in https://github.com/kubernetes/kubernetes/pull/40428, a failure was detected here:
```
# Deletion of both deployments should not be blocked
kubectl delete deployment nginx2 "${kube_flags[@]}"
# Clean up
> kubectl delete deployment nginx "${kube_flags[@]}"
```
the marked line loops getting this response from the server:
```
I0125 12:47:55.160339 891 round_trippers.go:395] GET http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/default/deployments/nginx
I0125 12:47:55.160374 891 round_trippers.go:402] Request Headers:
I0125 12:47:55.160383 891 round_trippers.go:405] Accept: application/json, */*
I0125 12:47:55.160390 891 round_trippers.go:405] User-Agent: kubectl/v0.0.0 (darwin/amd64) kubernetes/$Format
I0125 12:47:55.161911 891 round_trippers.go:420] Response Status: 200 OK in 1 milliseconds
I0125 12:47:55.161931 891 round_trippers.go:423] Response Headers:
I0125 12:47:55.161938 891 round_trippers.go:426] Content-Type: application/json
I0125 12:47:55.161944 891 round_trippers.go:426] Date: Wed, 25 Jan 2017 17:47:55 GMT
I0125 12:47:55.161949 891 round_trippers.go:426] Content-Length: 1989
I0125 12:47:55.162014 891 request.go:991] Response Body: {"kind":"Deployment","apiVersion":"extensions/v1beta1","metadata":{"name":"nginx","namespace":"default","selfLink":"/apis/extensions/v1beta1/namespaces/default/deployments/nginx","uid":"56470d21-e326-11e6-b4f5-acbc32c1ca87","resourceVersion":"1241","generation":10,"creationTimestamp":"2017-01-25T17:47:27Z","labels":{"name":"nginx-undo"},"annotations":{"deployment.kubernetes.io/revision":"5","kubectl.kubernetes.io/last-applied-configuration":"{\"kind\":\"Deployment\",\"apiVersion\":\"extensions/v1beta1\",\"metadata\":{\"name\":\"nginx\",\"creationTimestamp\":null,\"labels\":{\"name\":\"nginx-undo\"}},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx-undo\"}},\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"name\":\"nginx-undo\"}},\"spec\":{\"containers\":[{\"name\":\"nginx\",\"image\":\"gcr.io/google-containers/nginx:1.7.9\",\"ports\":[{\"containerPort\":80}],\"resources\":{}}]}},\"strategy\":{}},\"status\":{}}\n"}},"spec":{"replicas":0,"selector":{"matchLabels":{"name":"nginx-undo"}},"template":{"metadata":{"creationTimestamp":null,"labels":{"name":"nginx-undo"}},"spec":{"containers":[{"name":"nginx","image":"gcr.io/google-containers/nginx:test-cmd","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulername":"default-scheduler"}},"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":1,"maxSurge":1}},"revisionHistoryLimit":0,"paused":true},"status":{"observedGeneration":9,"replicas":4,"updatedReplicas":2,"unavailableReplicas":4,"conditions":[{"type":"Available","status":"False","lastUpdateTime":"2017-01-25T17:47:27Z","lastTransitionTime":"2017-01-25T17:47:27Z","reason":"MinimumReplicasUnavailable","message":"Deploy
ment does not have minimum availability."}]}}
```
I've switched that to a non-cascading delete in order to re-enable the tests, but we need to figure out why the delete is now hanging, fix the hang, and re-enable the cascading delete | 2.0 | test-cmd-util.sh: `kubectl delete deployment nginx` hangs - test-cmd-util.sh tests have been failing without blocking the merge queue for several weeks (https://github.com/kubernetes/kubernetes/issues/39168)
while re-enabling the tests in https://github.com/kubernetes/kubernetes/pull/40428, a failure was detected here:
```
# Deletion of both deployments should not be blocked
kubectl delete deployment nginx2 "${kube_flags[@]}"
# Clean up
> kubectl delete deployment nginx "${kube_flags[@]}"
```
the marked line loops getting this response from the server:
```
I0125 12:47:55.160339 891 round_trippers.go:395] GET http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/default/deployments/nginx
I0125 12:47:55.160374 891 round_trippers.go:402] Request Headers:
I0125 12:47:55.160383 891 round_trippers.go:405] Accept: application/json, */*
I0125 12:47:55.160390 891 round_trippers.go:405] User-Agent: kubectl/v0.0.0 (darwin/amd64) kubernetes/$Format
I0125 12:47:55.161911 891 round_trippers.go:420] Response Status: 200 OK in 1 milliseconds
I0125 12:47:55.161931 891 round_trippers.go:423] Response Headers:
I0125 12:47:55.161938 891 round_trippers.go:426] Content-Type: application/json
I0125 12:47:55.161944 891 round_trippers.go:426] Date: Wed, 25 Jan 2017 17:47:55 GMT
I0125 12:47:55.161949 891 round_trippers.go:426] Content-Length: 1989
I0125 12:47:55.162014 891 request.go:991] Response Body: {"kind":"Deployment","apiVersion":"extensions/v1beta1","metadata":{"name":"nginx","namespace":"default","selfLink":"/apis/extensions/v1beta1/namespaces/default/deployments/nginx","uid":"56470d21-e326-11e6-b4f5-acbc32c1ca87","resourceVersion":"1241","generation":10,"creationTimestamp":"2017-01-25T17:47:27Z","labels":{"name":"nginx-undo"},"annotations":{"deployment.kubernetes.io/revision":"5","kubectl.kubernetes.io/last-applied-configuration":"{\"kind\":\"Deployment\",\"apiVersion\":\"extensions/v1beta1\",\"metadata\":{\"name\":\"nginx\",\"creationTimestamp\":null,\"labels\":{\"name\":\"nginx-undo\"}},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx-undo\"}},\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"name\":\"nginx-undo\"}},\"spec\":{\"containers\":[{\"name\":\"nginx\",\"image\":\"gcr.io/google-containers/nginx:1.7.9\",\"ports\":[{\"containerPort\":80}],\"resources\":{}}]}},\"strategy\":{}},\"status\":{}}\n"}},"spec":{"replicas":0,"selector":{"matchLabels":{"name":"nginx-undo"}},"template":{"metadata":{"creationTimestamp":null,"labels":{"name":"nginx-undo"}},"spec":{"containers":[{"name":"nginx","image":"gcr.io/google-containers/nginx:test-cmd","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulername":"default-scheduler"}},"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":1,"maxSurge":1}},"revisionHistoryLimit":0,"paused":true},"status":{"observedGeneration":9,"replicas":4,"updatedReplicas":2,"unavailableReplicas":4,"conditions":[{"type":"Available","status":"False","lastUpdateTime":"2017-01-25T17:47:27Z","lastTransitionTime":"2017-01-25T17:47:27Z","reason":"MinimumReplicasUnavailable","message":"Deploy
ment does not have minimum availability."}]}}
```
I've switched that to a non-cascading delete in order to re-enable the tests, but we need to figure out why the delete is now hanging, fix the hang, and re-enable the cascading delete | test | test cmd util sh kubectl delete deployment nginx hangs test cmd util sh tests have been failing without blocking the merge queue for several weeks while re enabling the tests in a failure was detected here deletion of both deployments should not be blocked kubectl delete deployment kube flags clean up kubectl delete deployment nginx kube flags the marked line loops getting this response from the server round trippers go get round trippers go request headers round trippers go accept application json round trippers go user agent kubectl darwin kubernetes format round trippers go response status ok in milliseconds round trippers go response headers round trippers go content type application json round trippers go date wed jan gmt round trippers go content length request go response body kind deployment apiversion extensions metadata name nginx namespace default selflink apis extensions namespaces default deployments nginx uid resourceversion generation creationtimestamp labels name nginx undo annotations deployment kubernetes io revision kubectl kubernetes io last applied configuration kind deployment apiversion extensions metadata name nginx creationtimestamp null labels name nginx undo spec replicas selector matchlabels name nginx undo template metadata creationtimestamp null labels name nginx undo spec containers resources strategy status n spec replicas selector matchlabels name nginx undo template metadata creationtimestamp null labels name nginx undo spec containers resources terminationmessagepath dev termination log terminationmessagepolicy file imagepullpolicy ifnotpresent restartpolicy always terminationgraceperiodseconds dnspolicy clusterfirst securitycontext schedulername default scheduler strategy type rollingupdate rollingupdate maxunavailable maxsurge 
revisionhistorylimit paused true status observedgeneration replicas updatedreplicas unavailablereplicas conditions i ve switched that to a non cascading delete in order to re enable the tests but we need to figure out why the delete is now hanging fix the hang and re enable the cascading delete | 1 |
43,989 | 5,579,661,591 | IssuesEvent | 2017-03-28 15:02:48 | UNFPAInnovation/GetIn_Mobile | https://api.github.com/repos/UNFPAInnovation/GetIn_Mobile | closed | Android 6.0 crash | bug Ready to test | There is a change in at least API 23 that is causing a null pointer reference to the database in the ModelContentProvider. | 1.0 | Android 6.0 crash - There is a change in at least API 23 that is causing a null pointer reference to the database in the ModelContentProvider. | test | android crash there is a change in at least api that is causing a null pointer reference to the database in the modelcontentprovider | 1 |
161,701 | 12,559,411,261 | IssuesEvent | 2020-06-07 18:52:20 | GTNewHorizons/GT-New-Horizons-Modpack | https://api.github.com/repos/GTNewHorizons/GT-New-Horizons-Modpack | closed | Repeatable quest dnt work | FixedInDev need to be tested | #### Which modpack version are you using?
2.0.8.3
#
#### If in multiplayer; On which server does this happen?
eta.gtnewhorizons.com
Repeatable quest dnt work

| 1.0 | Repeatable quest dnt work - #### Which modpack version are you using?
2.0.8.3
#
#### If in multiplayer; On which server does this happen?
eta.gtnewhorizons.com
Repeatable quest dnt work

| test | repeatable quest dnt work which modpack version are you using if in multiplayer on which server does this happen eta gtnewhorizons com repeatable quest dnt work | 1 |
56,128 | 6,501,941,018 | IssuesEvent | 2017-08-23 11:46:47 | NoFear23m/ComplianceManager | https://api.github.com/repos/NoFear23m/ComplianceManager | closed | Support für Gruppierung der Liste implementieren | feature wait for testing | Es wäre super wenn man auch die Liste (Hauptliste) gruppieren könnte. | 1.0 | Support für Gruppierung der Liste implementieren - Es wäre super wenn man auch die Liste (Hauptliste) gruppieren könnte. | test | support für gruppierung der liste implementieren es wäre super wenn man auch die liste hauptliste gruppieren könnte | 1 |
286,765 | 24,783,017,677 | IssuesEvent | 2022-10-24 07:28:33 | design-group/ignition-gateway-utilities | https://api.github.com/repos/design-group/ignition-gateway-utilities | opened | Perspective View Linting | automated-test enhancement | Currently `pylint` is being used for linting any python files provided. However it would be good if it was possible to lint the `view.json` files included as well to look for poor practices.
Examples to catch:
1. Use of the `getChild` or `getSibling` component methods.
2. Components being named `Label`, `Label0`, `Label1`, etc.
3. Long Transform scripts that should be a project script.
4. Unnecessary Coordinate containers (This one could be tricky.)
5. Incorrect view structure naming
6. The use of the phrase "Template" instead of "Embedded View"
7. Unused views? Could be tricky as well. | 1.0 | Perspective View Linting - Currently `pylint` is being used for linting any python files provided. However it would be good if it was possible to lint the `view.json` files included as well to look for poor practices.
Examples to catch:
1. Use of the `getChild` or `getSibling` component methods.
2. Components being named `Label`, `Label0`, `Label1`, etc.
3. Long Transform scripts that should be a project script.
4. Unnecessary Coordinate containers (This one could be tricky.)
5. Incorrect view structure naming
6. The use of the phrase "Template" instead of "Embedded View"
7. Unused views? Could be tricky as well. | test | perspective view linting currently pylint is being used for linting any python files provided however it would be good if it was possible to lint the view json files included as well to look for poor practices examples to catch use of the getchild or getsibling component methods components being named label etc long transform scripts that should be a project script unnecessary coordinate containers this one could be tricky incorrect view structure naming the use of the phrase template instead of embedded view unused views could be tricky as well | 1 |
126,464 | 10,423,610,227 | IssuesEvent | 2019-09-16 11:52:35 | eclipse/openj9 | https://api.github.com/repos/eclipse/openj9 | opened | openjdk8_j9_special.system_x86-64_linux | comp:jit test failure | https://ci.eclipse.org/openj9/job/Test_openjdk8_j9_special.system_x86-64_linux_Nightly/348
MauveMultiThreadLoadTest_special_26
variation: Mode612-OSRG
JVM_OPTIONS: -Xcompressedrefs -Xcompressedrefs -Xgcpolicy:gencon -Xjit:enableOSR,enableOSROnGuardFailure,count=1,disableAsyncCompilation
```
LT 03:29:27.993 - Test failed
LT Failure num. = 1
LT Test number = 1712
LT Test details = 'Mauve[gnu.testlet.java.lang.Integer.Tests15]'
LT Suite number = 0
LT Thread number = 5
```
```
03:43:30 To rebuild the failed test in a jenkins job, copy the following link and fill out the <Jenkins URL> and <FAILED test target>:
03:43:30 <Jenkins URL>/parambuild/?JDK_VERSION=8&JDK_IMPL=openj9&BUILD_LIST=system/mauveLoadTest&JenkinsFile=openjdk_x86-64_linux&TARGET=<FAILED test target>
03:43:30
03:43:30 For example, to rebuild the failed tests in <Jenkins URL>=https://ci.adoptopenjdk.net/job/Grinder, use the following links:
03:43:30 https://ci.adoptopenjdk.net/job/Grinder/parambuild/?JDK_VERSION=8&JDK_IMPL=openj9&BUILD_LIST=system/mauveLoadTest&JenkinsFile=openjdk_x86-64_linux&TARGET=MauveMultiThreadLoadTest_special_26
``` | 1.0 | openjdk8_j9_special.system_x86-64_linux - https://ci.eclipse.org/openj9/job/Test_openjdk8_j9_special.system_x86-64_linux_Nightly/348
MauveMultiThreadLoadTest_special_26
variation: Mode612-OSRG
JVM_OPTIONS: -Xcompressedrefs -Xcompressedrefs -Xgcpolicy:gencon -Xjit:enableOSR,enableOSROnGuardFailure,count=1,disableAsyncCompilation
```
LT 03:29:27.993 - Test failed
LT Failure num. = 1
LT Test number = 1712
LT Test details = 'Mauve[gnu.testlet.java.lang.Integer.Tests15]'
LT Suite number = 0
LT Thread number = 5
```
```
03:43:30 To rebuild the failed test in a jenkins job, copy the following link and fill out the <Jenkins URL> and <FAILED test target>:
03:43:30 <Jenkins URL>/parambuild/?JDK_VERSION=8&JDK_IMPL=openj9&BUILD_LIST=system/mauveLoadTest&JenkinsFile=openjdk_x86-64_linux&TARGET=<FAILED test target>
03:43:30
03:43:30 For example, to rebuild the failed tests in <Jenkins URL>=https://ci.adoptopenjdk.net/job/Grinder, use the following links:
03:43:30 https://ci.adoptopenjdk.net/job/Grinder/parambuild/?JDK_VERSION=8&JDK_IMPL=openj9&BUILD_LIST=system/mauveLoadTest&JenkinsFile=openjdk_x86-64_linux&TARGET=MauveMultiThreadLoadTest_special_26
``` | test | special system linux mauvemultithreadloadtest special variation osrg jvm options xcompressedrefs xcompressedrefs xgcpolicy gencon xjit enableosr enableosronguardfailure count disableasynccompilation lt test failed lt failure num lt test number lt test details mauve lt suite number lt thread number to rebuild the failed test in a jenkins job copy the following link and fill out the and parambuild jdk version jdk impl build list system mauveloadtest jenkinsfile openjdk linux target for example to rebuild the failed tests in use the following links | 1 |
261,728 | 22,771,286,103 | IssuesEvent | 2022-07-08 10:14:10 | Quick-Event/quickbox | https://api.github.com/repos/Quick-Event/quickbox | closed | Keep runners order draw option does not work | bug 1 testing | It seems that runners are drawn according to ID not according to their previous start time.
| 1.0 | Keep runners order draw option does not work - It seems that runners are drawn according to ID not according to their previous start time.
| test | keep runners order draw option does not work it seems that runners are drawn according to id not according to their previous start time | 1 |
307,531 | 26,538,855,374 | IssuesEvent | 2023-01-19 17:34:35 | TestIntegrations/TestForwarding | https://api.github.com/repos/TestIntegrations/TestForwarding | opened | Tests4 | tag2 tag1 forwardeddddddTest tag3 ddw | # :clipboard: Bug Details
>Tests4
key | value
--|--
Reported At | 2022-08-21 16:19:44 UTC
Email | test@instabug.com
Categories | Report a bug
Tags | tag2, tag1, forwardeddddddTest, tag3, ddw
App Version | 10.1 (4)
Session Duration | 3
Device | arm64, iOS 14.5
Display | 414x896 (@2x)
## :point_right: [View Full Bug Report on Instabug](https://dashboard.instabug.com/applications/birdy-demo-app/beta/bugs/9272?utm_source=github&utm_medium=integrations) :point_left:
___
# :warning: Looking for More Details?
1. **Session Profiler**: [**enable them**](https://docs.instabug.com/docs/ios-session-profiler?utm_source=github&utm_medium=integrations) to get the most out of your plan and see the changes in the CPU, memory, storage, connectivity, battery, and orientation of the app before the bug was reported.
2. **Network Log**: we are unable to capture your network requests automatically. If you are using AFNetworking or Alamofire, [**check the details mentioned here**](https://docs.instabug.com/docs/ios-logging?utm_source=github&utm_medium=integrations#section-requests-not-appearing-in-logs).
3. **User Steps**: [**enable them**](https://docs.instabug.com/docs/ios-logging?utm_source=github&utm_medium=integrations#section-user-steps) to get the most out of your plan and see every action users take before reporting a bug.
4. **User Events**: start capturing custom User Events to send them along with each report. [**Find all the details in the docs**](https://docs.instabug.com/docs/ios-logging?utm_source=github&utm_medium=integrations).
5. **Instabug Log**: start adding Instabug logs to see them right inside each report you receive. [**Find all the details in the docs**](https://docs.instabug.com/docs/ios-logging?utm_source=github&utm_medium=integrations).
6. **Console Log**: when enabled you will see them right inside each report you receive. [**Find all the details in the docs**](https://docs.instabug.com/docs/ios-logging?utm_source=github&utm_medium=integrations). | 1.0 | Tests4 - # :clipboard: Bug Details
>Tests4
key | value
--|--
Reported At | 2022-08-21 16:19:44 UTC
Email | test@instabug.com
Categories | Report a bug
Tags | tag2, tag1, forwardeddddddTest, tag3, ddw
App Version | 10.1 (4)
Session Duration | 3
Device | arm64, iOS 14.5
Display | 414x896 (@2x)
## :point_right: [View Full Bug Report on Instabug](https://dashboard.instabug.com/applications/birdy-demo-app/beta/bugs/9272?utm_source=github&utm_medium=integrations) :point_left:
___
# :warning: Looking for More Details?
1. **Session Profiler**: [**enable them**](https://docs.instabug.com/docs/ios-session-profiler?utm_source=github&utm_medium=integrations) to get the most out of your plan and see the changes in the CPU, memory, storage, connectivity, battery, and orientation of the app before the bug was reported.
2. **Network Log**: we are unable to capture your network requests automatically. If you are using AFNetworking or Alamofire, [**check the details mentioned here**](https://docs.instabug.com/docs/ios-logging?utm_source=github&utm_medium=integrations#section-requests-not-appearing-in-logs).
3. **User Steps**: [**enable them**](https://docs.instabug.com/docs/ios-logging?utm_source=github&utm_medium=integrations#section-user-steps) to get the most out of your plan and see every action users take before reporting a bug.
4. **User Events**: start capturing custom User Events to send them along with each report. [**Find all the details in the docs**](https://docs.instabug.com/docs/ios-logging?utm_source=github&utm_medium=integrations).
5. **Instabug Log**: start adding Instabug logs to see them right inside each report you receive. [**Find all the details in the docs**](https://docs.instabug.com/docs/ios-logging?utm_source=github&utm_medium=integrations).
6. **Console Log**: when enabled you will see them right inside each report you receive. [**Find all the details in the docs**](https://docs.instabug.com/docs/ios-logging?utm_source=github&utm_medium=integrations). | test | clipboard bug details key value reported at utc email test instabug com categories report a bug tags forwardeddddddtest ddw app version session duration device ios display point right point left warning looking for more details session profiler to get the most out of your plan and see the changes in the cpu memory storage connectivity battery and orientation of the app before the bug was reported network log we are unable to capture your network requests automatically if you are using afnetworking or alamofire user steps to get the most out of your plan and see every action users take before reporting a bug user events start capturing custom user events to send them along with each report instabug log start adding instabug logs to see them right inside each report you receive console log when enabled you will see them right inside each report you receive | 1 |
440,704 | 12,703,055,450 | IssuesEvent | 2020-06-22 21:25:41 | kubernetes-sigs/cluster-api-provider-azure | https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-azure | closed | 3 control plane E2E test is flaky | kind/bug priority/important-soon | /kind bug
**What steps did you take and what happened:**
[A clear and concise description of what the bug is.]
See https://testgrid.k8s.io/sig-cluster-lifecycle-cluster-api-provider-azure#pr-e2e for recent history. `Workoad cluster creation Create multiple controlplane cluster with machine deployments Should create a 3 node cluster` has failed multiple times recently. The test times out at 600s with 2 control planes ready out of 3.
**Environment:**
- cluster-api-provider-azure version:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
| 1.0 | 3 control plane E2E test is flaky - /kind bug
**What steps did you take and what happened:**
[A clear and concise description of what the bug is.]
See https://testgrid.k8s.io/sig-cluster-lifecycle-cluster-api-provider-azure#pr-e2e for recent history. `Workoad cluster creation Create multiple controlplane cluster with machine deployments Should create a 3 node cluster` has failed multiple times recently. The test times out at 600s with 2 control planes ready out of 3.
**Environment:**
- cluster-api-provider-azure version:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
| non_test | control plane test is flaky kind bug what steps did you take and what happened see for recent history workoad cluster creation create multiple controlplane cluster with machine deployments should create a node cluster has failed multiple times recently the test times out at with control planes ready out of environment cluster api provider azure version kubernetes version use kubectl version os e g from etc os release | 0 |
199,029 | 15,020,962,120 | IssuesEvent | 2021-02-01 15:17:33 | oasisprotocol/oasis-core | https://api.github.com/repos/oasisprotocol/oasis-core | opened | beacon: Add more test cases | c:testing golang | The beacon could use more test cases, as the ones that are present are fairly basic.
* [ ] There probably should be test cases for the node being restarted mid-protocol round.
| 1.0 | beacon: Add more test cases - The beacon could use more test cases, as the ones that are present are fairly basic.
* [ ] There probably should be test cases for the node being restarted mid-protocol round.
| test | beacon add more test cases the beacon could use more test cases as the ones that are present are fairly basic there probably should be test cases for the node being restarted mid protocol round | 1 |
98,177 | 16,361,446,377 | IssuesEvent | 2021-05-14 10:05:48 | Galaxy-Software-Service/Maven_Pom_Demo | https://api.github.com/repos/Galaxy-Software-Service/Maven_Pom_Demo | opened | CVE-2018-12022 (High) detected in jackson-databind-2.0.6.jar | security vulnerability | ## CVE-2018-12022 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.0.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: Maven_Pom_Demo/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.0.6/jackson-databind-2.0.6.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.0.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Galaxy-Software-Service/Maven_Pom_Demo/commit/69cce4bac0c1b37088c48547695b174bd6149c5c">69cce4bac0c1b37088c48547695b174bd6149c5c</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in FasterXML jackson-databind prior to 2.7.9.4, 2.8.11.2, and 2.9.6. When Default Typing is enabled (either globally or for a specific property), the service has the Jodd-db jar (for database access for the Jodd framework) in the classpath, and an attacker can provide an LDAP service to access, it is possible to make the service execute a malicious payload.
<p>Publish Date: 2019-03-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-12022>CVE-2018-12022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-12022">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-12022</a></p>
<p>Release Date: 2019-03-21</p>
<p>Fix Resolution: 2.7.9.4, 2.8.11.2, 2.9.6</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.0.6","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.0.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.9.4, 2.8.11.2, 2.9.6"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2018-12022","vulnerabilityDetails":"An issue was discovered in FasterXML jackson-databind prior to 2.7.9.4, 2.8.11.2, and 2.9.6. When Default Typing is enabled (either globally or for a specific property), the service has the Jodd-db jar (for database access for the Jodd framework) in the classpath, and an attacker can provide an LDAP service to access, it is possible to make the service execute a malicious payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-12022","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2018-12022 (High) detected in jackson-databind-2.0.6.jar - ## CVE-2018-12022 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.0.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: Maven_Pom_Demo/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.0.6/jackson-databind-2.0.6.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.0.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Galaxy-Software-Service/Maven_Pom_Demo/commit/69cce4bac0c1b37088c48547695b174bd6149c5c">69cce4bac0c1b37088c48547695b174bd6149c5c</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in FasterXML jackson-databind prior to 2.7.9.4, 2.8.11.2, and 2.9.6. When Default Typing is enabled (either globally or for a specific property), the service has the Jodd-db jar (for database access for the Jodd framework) in the classpath, and an attacker can provide an LDAP service to access, it is possible to make the service execute a malicious payload.
<p>Publish Date: 2019-03-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-12022>CVE-2018-12022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-12022">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-12022</a></p>
<p>Release Date: 2019-03-21</p>
<p>Fix Resolution: 2.7.9.4, 2.8.11.2, 2.9.6</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.0.6","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.0.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.9.4, 2.8.11.2, 2.9.6"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2018-12022","vulnerabilityDetails":"An issue was discovered in FasterXML jackson-databind prior to 2.7.9.4, 2.8.11.2, and 2.9.6. When Default Typing is enabled (either globally or for a specific property), the service has the Jodd-db jar (for database access for the Jodd framework) in the classpath, and an attacker can provide an LDAP service to access, it is possible to make the service execute a malicious payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-12022","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_test | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api path to dependency file maven pom demo pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch main vulnerability details an issue was discovered in fasterxml jackson databind prior to and when default typing is enabled either globally or for a specific property the service has the jodd db jar for database access for the jodd framework in the classpath and an attacker can provide an ldap service to access it is possible 
to make the service execute a malicious payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails an issue was discovered in fasterxml jackson databind prior to and when default typing is enabled either globally or for a specific property the service has the jodd db jar for database access for the jodd framework in the classpath and an attacker can provide an ldap service to access it is possible to make the service execute a malicious payload vulnerabilityurl | 0 |
42,774 | 7,000,706,833 | IssuesEvent | 2017-12-18 06:55:16 | wso2/product-is | https://api.github.com/repos/wso2/product-is | opened | When clicked on " UserProfileMgtService admin service" it's redirected to the management console | Affected/5.4.0-Update1 Priority/Highest Type/Documentation | In [1] (section Password Reset via Recovery Email) when clicked on "UserProfileMgtService admin service" it's redirected to the management console. Please correct this as here document that explains the relevant admin service with available operations should load.
[1] https://docs.wso2.com/display/IS540/Forced+Password+Reset#ForcedPasswordReset-PasswordResetviaRecoveryEmail
| 1.0 | When clicked on " UserProfileMgtService admin service" it's redirected to the management console - In [1] (section Password Reset via Recovery Email) when clicked on "UserProfileMgtService admin service" it's redirected to the management console. Please correct this as here document that explains the relevant admin service with available operations should load.
[1] https://docs.wso2.com/display/IS540/Forced+Password+Reset#ForcedPasswordReset-PasswordResetviaRecoveryEmail
| non_test | when clicked on userprofilemgtservice admin service it s redirected to the management console in section password reset via recovery email when clicked on userprofilemgtservice admin service it s redirected to the management console please correct this as here document that explains the relevant admin service with available operations should load | 0 |
338,558 | 30,305,200,548 | IssuesEvent | 2023-07-10 09:01:10 | milvus-io/milvus | https://api.github.com/repos/milvus-io/milvus | opened | [Bug]: [benchmark][multi-replicas-loadbalance] The group NQ has a large merger at the beginning of search | kind/bug needs-triage test/benchmark | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version:master-20230707-b7740835
- Deployment mode(standalone or cluster):cluster
- MQ type(rocksmq, pulsar or kafka):pulsar
- SDK version(e.g. pymilvus v2.0.0rc2):pymilvus-2.2.13
- OS(Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
```
### Current Behavior
### 1. At the beginning of the search, the group NQ has a large merger, which makes the search very slow;
### 2. but the maxRT of the client is smaller than the maxRT of the proxy
server argo task: fouramf-4t5bt
client argo task: vector-db-concurrent-778vj
server:
```
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
lb-helm-multi-proxy-etcd-0 1/1 Running 0 3d2h 10.104.19.46 4am-node28 <none> <none>
lb-helm-multi-proxy-etcd-1 1/1 Running 0 3d2h 10.104.22.211 4am-node26 <none> <none>
lb-helm-multi-proxy-etcd-2 1/1 Running 0 3d2h 10.104.16.124 4am-node21 <none> <none>
lb-helm-multi-proxy-milvus-datacoord-7dc68d5546-lvwq2 1/1 Running 0 3d2h 10.104.9.21 4am-node14 <none> <none>
lb-helm-multi-proxy-milvus-datanode-5fc4db6887-zz7pq 1/1 Running 0 3d2h 10.104.19.44 4am-node28 <none> <none>
lb-helm-multi-proxy-milvus-indexcoord-64b4bdd64f-mkmt9 1/1 Running 0 3d2h 10.104.22.207 4am-node26 <none> <none>
lb-helm-multi-proxy-milvus-indexnode-758748d697-lq67m 1/1 Running 0 3d2h 10.104.20.78 4am-node22 <none> <none>
lb-helm-multi-proxy-milvus-proxy-9976677db-2f466 1/1 Running 0 3d2h 10.104.23.169 4am-node27 <none> <none>
lb-helm-multi-proxy-milvus-proxy-9976677db-6dffw 1/1 Running 0 3d2h 10.104.24.63 4am-node29 <none> <none>
lb-helm-multi-proxy-milvus-proxy-9976677db-6m278 1/1 Running 0 3d2h 10.104.19.43 4am-node28 <none> <none>
lb-helm-multi-proxy-milvus-proxy-9976677db-lfwn7 1/1 Running 0 3d2h 10.104.20.75 4am-node22 <none> <none>
lb-helm-multi-proxy-milvus-proxy-9976677db-vpj26 1/1 Running 0 3d2h 10.104.22.208 4am-node26 <none> <none>
lb-helm-multi-proxy-milvus-querycoord-59d9f9765d-zl4zz 1/1 Running 0 3d2h 10.104.24.64 4am-node29 <none> <none>
lb-helm-multi-proxy-milvus-querynode-5b8bccd6b6-4886l 1/1 Running 0 3d2h 10.104.21.225 4am-node24 <none> <none>
lb-helm-multi-proxy-milvus-querynode-5b8bccd6b6-w488j 1/1 Running 0 3d2h 10.104.15.219 4am-node20 <none> <none>
lb-helm-multi-proxy-milvus-rootcoord-5d6cc775f8-dcxll 1/1 Running 0 3d2h 10.104.20.74 4am-node22 <none> <none>
lb-helm-multi-proxy-minio-0 1/1 Running 0 3d2h 10.104.16.119 4am-node21 <none> <none>
lb-helm-multi-proxy-minio-1 1/1 Running 0 3d2h 10.104.22.213 4am-node26 <none> <none>
lb-helm-multi-proxy-minio-2 1/1 Running 0 3d2h 10.104.1.185 4am-node10 <none> <none>
lb-helm-multi-proxy-minio-3 1/1 Running 0 3d2h 10.104.9.24 4am-node14 <none> <none>
lb-helm-multi-proxy-pulsar-bookie-0 1/1 Running 0 3d2h 10.104.16.128 4am-node21 <none> <none>
lb-helm-multi-proxy-pulsar-bookie-1 1/1 Running 0 3d2h 10.104.22.215 4am-node26 <none> <none>
lb-helm-multi-proxy-pulsar-bookie-2 1/1 Running 0 3d2h 10.104.4.42 4am-node11 <none> <none>
lb-helm-multi-proxy-pulsar-bookie-init-ffzj2 0/1 Completed 0 3d2h 10.104.24.61 4am-node29 <none> <none>
lb-helm-multi-proxy-pulsar-broker-0 1/1 Running 0 3d2h 10.104.9.22 4am-node14 <none> <none>
lb-helm-multi-proxy-pulsar-proxy-0 1/1 Running 0 3d2h 10.104.24.62 4am-node29 <none> <none>
lb-helm-multi-proxy-pulsar-pulsar-init-x99kb 0/1 Completed 0 3d2h 10.104.21.224 4am-node24 <none> <none>
lb-helm-multi-proxy-pulsar-recovery-0 1/1 Running 0 3d2h 10.104.16.114 4am-node21 <none> <none>
lb-helm-multi-proxy-pulsar-zookeeper-0 1/1 Running 0 3d2h 10.104.16.121 4am-node21 <none> <none>
lb-helm-multi-proxy-pulsar-zookeeper-1 1/1 Running 0 3d2h 10.104.18.86 4am-node25 <none> <none>
lb-helm-multi-proxy-pulsar-zookeeper-2 1/1 Running 0 3d2h 10.104.15.221 4am-node20 <none> <none>
```
<img width="1867" alt="Screenshot 2023-07-10 16 55 50" src="https://github.com/milvus-io/milvus/assets/26307815/51bbc69f-3d60-4570-80a7-c0b5c161ba53">
<img width="1866" alt="Screenshot 2023-07-10 16 56 38" src="https://github.com/milvus-io/milvus/assets/26307815/f059db40-2be3-4307-889c-ba7da55eb6db">
<img width="1864" alt="Screenshot 2023-07-10 16 56 55" src="https://github.com/milvus-io/milvus/assets/26307815/1c5285d4-6850-4773-80b4-42d5dd287e44">
<img width="1865" alt="Screenshot 2023-07-10 16 57 15" src="https://github.com/milvus-io/milvus/assets/26307815/72b0017a-8501-4f75-91c7-6c613504d481">
client:
**Python multi-process concurrency**
```
[2023-07-10 03:56:29,257] - INFO: ---------------------------------------------- Concurrency task started! ----------------------------------------------- (util_log.py:25)
[2023-07-10 03:56:29,299] - INFO: [MultiProcessConcurrent] Parameters used:
{'database_params': {'metric_type': 'L2', 'dim': 128, 'max_length': 256},
'insert_params': {},
'connection_params': {'secure': False, 'port': 8443},
'collection_params': {'collection_name': 'fouram_VbFuv6W7'},
'index_params': {},
'load_params': {},
'search_params': {},
'concurrent_params': {'concurrent_number': 20,
'during_time': 1800,
'warm_time': 0,
'interval': 20},
'concurrent_tasks': [{'type': 'query',
'weight': 0,
'params': {'expr': 'id in [1, 10, 100, 1000]',
'timeout': 60}},
{'type': 'search',
'weight': 1,
'params': {'nq': 10,
'top_k': 10,
'search_param': {'nprobe': 64},
'timeout': 3600}}]} (util_log.py:25)
[2023-07-10 03:56:29,299] - INFO: [MultiProcessConcurrent] Get multiprocessing start method: forkserver (util_log.py:25)
[2023-07-10 03:56:29,299] - INFO: [MultiProcessConcurrent] Start initializing the concurrent pool (util_log.py:25)
[2023-07-10 03:56:51,305] - INFO: [MultiProcessConcurrent] Start waiting for 20 processes ready: 4.0s (util_log.py:25)
[2023-07-10 03:56:55,309] - INFO: [MultiProcessConcurrent] Start concurrent pool (util_log.py:25)
[2023-07-10 03:56:55,310] - INFO: [ParserResult] Starting sync report, interval:20s, intermediate state results are available for reference (util_log.py:25)
[2023-07-10 03:56:55,332] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:56:55,332] - INFO: --------------------------------------------------------------------------------------------------------------------- (util_log.py:25)
[2023-07-10 03:57:15,376] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:57:15,378] - INFO: search 1000 0(0.00%) | 397 309 479 399 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 03:57:35,431] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:57:35,432] - INFO: search 2000 0(0.00%) | 399 317 481 400 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 03:57:55,483] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:57:55,484] - INFO: search 3000 0(0.00%) | 399 316 484 400 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 03:58:15,521] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:58:15,522] - INFO: search 4000 0(0.00%) | 399 316 480 399 477 | 50.00 0.00 (util_log.py:25)
[2023-07-10 03:58:35,557] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:58:35,558] - INFO: search 5000 0(0.00%) | 399 321 476 400 474 | 50.00 0.00 (util_log.py:25)
[2023-07-10 03:58:55,606] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:58:55,608] - INFO: search 6000 0(0.00%) | 399 321 477 399 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 03:59:15,642] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:59:15,643] - INFO: search 7000 0(0.00%) | 399 318 477 400 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 03:59:35,693] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:59:35,693] - INFO: search 8020 0(0.00%) | 399 316 481 399 477 | 51.00 0.00 (util_log.py:25)
[2023-07-10 03:59:55,737] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:59:55,738] - INFO: search 9020 0(0.00%) | 398 319 477 398 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:00:15,771] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:00:15,772] - INFO: search 10020 0(0.00%) | 399 322 477 400 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:00:35,809] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:00:35,810] - INFO: search 11020 0(0.00%) | 399 322 480 400 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:00:55,857] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:00:55,858] - INFO: search 12020 0(0.00%) | 398 320 478 400 477 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:01:15,896] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:01:15,897] - INFO: search 13000 0(0.00%) | 407 316 745 399 725 | 49.00 0.00 (util_log.py:25)
[2023-07-10 04:01:35,936] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:01:35,937] - INFO: search 14000 0(0.00%) | 399 318 480 400 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:01:55,972] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:01:55,973] - INFO: search 15000 0(0.00%) | 399 318 484 400 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:02:16,008] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:02:16,009] - INFO: search 16000 0(0.00%) | 399 314 480 400 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:02:36,052] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:02:36,053] - INFO: search 17000 0(0.00%) | 399 315 481 399 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:02:56,093] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:02:56,094] - INFO: search 18020 0(0.00%) | 399 319 477 400 475 | 51.00 0.00 (util_log.py:25)
[2023-07-10 04:03:16,133] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:03:16,134] - INFO: search 19020 0(0.00%) | 398 318 482 400 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:03:36,164] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:03:36,164] - INFO: search 20020 0(0.00%) | 399 318 478 400 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:03:56,211] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:03:56,213] - INFO: search 21020 0(0.00%) | 399 317 477 400 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:04:16,265] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:04:16,266] - INFO: search 22020 0(0.00%) | 399 315 482 399 466 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:04:36,314] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:04:36,315] - INFO: search 23020 0(0.00%) | 399 316 479 399 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:04:56,354] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:04:56,355] - INFO: search 24020 0(0.00%) | 399 320 479 400 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:05:16,404] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:05:16,405] - INFO: search 25020 0(0.00%) | 399 318 478 399 472 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:05:36,455] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:05:36,456] - INFO: search 26020 0(0.00%) | 398 320 476 399 472 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:05:56,491] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:05:56,492] - INFO: search 27023 0(0.00%) | 398 320 478 399 474 | 50.15 0.00 (util_log.py:25)
[2023-07-10 04:06:16,541] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:06:16,542] - INFO: search 28040 0(0.00%) | 399 317 479 400 471 | 50.85 0.00 (util_log.py:25)
[2023-07-10 04:06:36,589] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:06:36,591] - INFO: search 29040 0(0.00%) | 400 323 477 399 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:06:56,624] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:06:56,624] - INFO: search 30028 0(0.00%) | 400 274 518 399 511 | 49.40 0.00 (util_log.py:25)
[2023-07-10 04:07:16,678] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:07:16,680] - INFO: search 31020 0(0.00%) | 401 375 560 399 553 | 49.60 0.00 (util_log.py:25)
[2023-07-10 04:07:36,728] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:07:36,729] - INFO: search 31988 0(0.00%) | 417 282 1224 399 1211 | 48.40 0.00 (util_log.py:25)
[2023-07-10 04:07:56,780] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:07:56,781] - INFO: search 32988 0(0.00%) | 399 374 419 398 416 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:08:16,815] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:08:16,816] - INFO: search 33988 0(0.00%) | 398 381 417 399 413 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:08:36,853] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:08:36,854] - INFO: search 34988 0(0.00%) | 399 379 422 399 416 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:08:56,904] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:08:56,905] - INFO: search 36000 0(0.00%) | 399 382 416 399 413 | 50.60 0.00 (util_log.py:25)
[2023-07-10 04:09:16,953] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:09:16,954] - INFO: search 37000 0(0.00%) | 399 377 419 399 414 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:09:36,988] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:09:36,989] - INFO: search 38000 0(0.00%) | 398 367 432 399 416 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:09:57,020] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:09:57,021] - INFO: search 39000 0(0.00%) | 399 377 421 399 416 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:10:17,061] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:10:17,062] - INFO: search 40000 0(0.00%) | 399 377 417 399 414 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:10:37,095] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:10:37,096] - INFO: search 41008 0(0.00%) | 399 380 420 399 414 | 50.40 0.00 (util_log.py:25)
[2023-07-10 04:10:57,129] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:10:57,130] - INFO: search 42008 0(0.00%) | 399 382 415 399 411 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:11:17,182] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:11:17,183] - INFO: search 43008 0(0.00%) | 399 374 423 399 416 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:11:37,228] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:11:37,229] - INFO: search 44008 0(0.00%) | 399 378 415 399 412 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:11:57,264] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:11:57,265] - INFO: search 45008 0(0.00%) | 399 378 421 399 415 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:12:17,315] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:12:17,316] - INFO: search 46020 0(0.00%) | 399 380 424 399 412 | 50.60 0.00 (util_log.py:25)
[2023-07-10 04:12:37,367] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:12:37,368] - INFO: search 47020 0(0.00%) | 399 375 423 399 418 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:12:57,406] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:12:57,407] - INFO: search 48020 0(0.00%) | 399 376 422 399 415 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:13:17,456] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:13:17,457] - INFO: search 49020 0(0.00%) | 400 330 552 399 463 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:13:37,501] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:13:37,501] - INFO: search 50020 0(0.00%) | 403 323 535 400 515 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:13:57,533] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:13:57,534] - INFO: search 51020 0(0.00%) | 399 319 481 399 473 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:14:17,566] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:14:17,567] - INFO: search 52020 0(0.00%) | 399 324 475 398 467 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:14:37,600] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:14:37,601] - INFO: search 53020 0(0.00%) | 399 322 473 400 470 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:14:57,631] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:14:57,632] - INFO: search 54020 0(0.00%) | 398 317 477 399 471 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:15:17,670] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:15:17,672] - INFO: search 55020 0(0.00%) | 399 317 479 399 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:15:37,711] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:15:37,712] - INFO: search 56020 0(0.00%) | 399 322 476 399 465 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:15:57,745] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:15:57,745] - INFO: search 57020 0(0.00%) | 399 325 475 399 465 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:16:17,795] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:16:17,796] - INFO: search 58020 0(0.00%) | 399 322 480 399 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:16:37,841] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:16:37,842] - INFO: search 59020 0(0.00%) | 399 326 479 398 463 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:16:57,895] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:16:57,896] - INFO: search 60031 0(0.00%) | 399 320 477 399 468 | 50.55 0.00 (util_log.py:25)
[2023-07-10 04:17:17,946] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:17:17,947] - INFO: search 61040 0(0.00%) | 399 323 477 398 466 | 50.45 0.00 (util_log.py:25)
[2023-07-10 04:17:37,988] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:17:37,990] - INFO: search 62040 0(0.00%) | 399 316 477 399 473 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:17:58,023] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:17:58,024] - INFO: search 63040 0(0.00%) | 398 321 476 399 470 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:18:18,076] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:18:18,078] - INFO: search 64040 0(0.00%) | 399 323 476 399 462 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:18:38,123] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:18:38,125] - INFO: search 65040 0(0.00%) | 398 323 476 400 466 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:18:58,170] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:18:58,171] - INFO: search 66040 0(0.00%) | 400 324 475 399 468 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:19:18,222] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:19:18,223] - INFO: search 67040 0(0.00%) | 398 324 475 400 463 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:19:38,268] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:19:38,270] - INFO: search 68041 0(0.00%) | 399 296 508 398 497 | 50.05 0.00 (util_log.py:25)
[2023-07-10 04:19:58,314] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:19:58,315] - INFO: search 69060 0(0.00%) | 398 319 477 399 474 | 50.95 0.00 (util_log.py:25)
[2023-07-10 04:20:18,364] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:20:18,365] - INFO: search 70060 0(0.00%) | 399 315 478 400 462 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:20:38,407] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:20:38,408] - INFO: search 71060 0(0.00%) | 399 319 479 400 473 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:20:58,450] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:20:58,451] - INFO: search 72060 0(0.00%) | 399 320 477 400 473 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:21:18,487] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:21:18,488] - INFO: search 73060 0(0.00%) | 399 327 476 399 469 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:21:38,535] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:21:38,536] - INFO: search 74060 0(0.00%) | 398 320 476 399 474 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:21:58,580] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:21:58,581] - INFO: search 75060 0(0.00%) | 398 323 476 398 472 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:22:18,620] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:22:18,621] - INFO: search 76060 0(0.00%) | 399 317 484 400 466 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:22:38,668] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:22:38,669] - INFO: search 77060 0(0.00%) | 399 322 479 399 472 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:22:58,715] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:22:58,717] - INFO: search 78080 0(0.00%) | 399 320 476 400 471 | 51.00 0.00 (util_log.py:25)
[2023-07-10 04:23:18,753] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:23:18,754] - INFO: search 79080 0(0.00%) | 398 323 476 399 457 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:23:38,782] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:23:38,783] - INFO: search 80080 0(0.00%) | 399 322 476 400 467 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:23:58,809] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:23:58,810] - INFO: search 81080 0(0.00%) | 399 324 476 399 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:24:18,843] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:24:18,844] - INFO: search 82080 0(0.00%) | 399 326 475 399 460 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:24:38,894] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:24:38,895] - INFO: search 83080 0(0.00%) | 399 321 477 400 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:24:59,150] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:24:59,151] - INFO: search 84080 0(0.00%) | 402 323 602 398 594 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:25:19,185] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:25:19,187] - INFO: search 85060 0(0.00%) | 407 321 740 400 726 | 49.00 0.00 (util_log.py:25)
[2023-07-10 04:25:39,219] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:25:39,221] - INFO: search 86060 0(0.00%) | 399 326 475 399 459 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:25:59,275] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:25:59,276] - INFO: search 87060 0(0.00%) | 399 321 476 399 469 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:26:19,318] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:26:19,320] - INFO: search 88080 0(0.00%) | 399 325 475 399 457 | 51.00 0.00 (util_log.py:25)
[2023-07-10 04:26:39,350] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:26:39,351] - INFO: search 89080 0(0.00%) | 399 320 478 400 472 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:26:55,668] - INFO: [MultiProcessConcurrent] End concurrent pool (util_log.py:25)
[2023-07-10 04:26:55,707] - INFO: ------------------------------------------------- Print final status ------------------------------------------------ (util_log.py:25)
[2023-07-10 04:26:55,707] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:26:55,724] - INFO: search 89900 0(0.00%) | 399 274 1224 399 474 | 49.94 0.00 (util_log.py:25)
[2023-07-10 04:26:55,724] - INFO: ------------------------ Print the status without start and end warmup time:0s as a reference ----------------------- (util_log.py:25)
[2023-07-10 04:26:55,725] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:26:55,739] - INFO: search 89900 0(0.00%) | 399 274 1224 399 474 | 49.94 0.00 (util_log.py:25)
[2023-07-10 04:26:55,739] - INFO: [ParserResult] Completed sync report (util_log.py:25)
[2023-07-10 04:26:55,950] - INFO: [MultiProcessConcurrent] Summary of overall results:
{'reqs': 89900,
'rps': 49.9351,
'total_time_s': 1800.3351,
'avg_time_ms': 399.6068,
'min_time_ms': 274.7896,
'max_time_ms': 1224.6462,
'median_time_ms': 399.6312,
'p95_time_ms': 457.3961,
'p99_time_ms': 474.754,
'sum_all_rt_ms': 35924648.072} (util_log.py:25)
[2023-07-10 04:26:57,871] - INFO: ---------------------------------------------- Concurrency task finished! ---------------------------------------------- (util_log.py:25)
```
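The final summary (reqs, rps, avg/median/p99) can be reproduced directly from the per-request latencies. A minimal sketch of that aggregation, using nearest-rank percentiles — illustrative only, not the benchmark's exact code:

```python
import math
import statistics

def summarize(latencies_ms, total_time_s):
    """Reproduce the summary block: request count, rps, avg/median/p99 latency."""
    n = len(latencies_ms)
    ordered = sorted(latencies_ms)

    def pct(p):
        # nearest-rank percentile over the sorted latencies
        return ordered[min(n - 1, math.ceil(p * n) - 1)]

    return {
        "reqs": n,
        "rps": round(n / total_time_s, 4),
        "avg_time_ms": round(statistics.mean(ordered), 4),
        "median_time_ms": statistics.median(ordered),
        "p99_time_ms": pct(0.99),
        "sum_all_rt_ms": sum(ordered),
    }

# toy data: 100 requests taking 1..100 ms over a 10 s window
print(summarize(list(range(1, 101)), 10.0))
```

Feeding this the 89900 recorded latencies over 1800.3351 s would yield the rps and latency figures in the summary block (modulo the exact percentile method the tool uses).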
### Expected Behavior
_No response_
### Steps To Reproduce
```markdown
1. deploy a Milvus cluster with 5 proxies
2. prepare 5m of data
3. run concurrent searches via vectordb-benchmark <- Python multi-process concurrency
```
### Milvus Log
_No response_
### Anything else?
fouramf-server-replicas-lb-2qn-5proxy:
```
extraConfigFiles:
  user.yaml: |+
    queryNode:
      grouping:
        maxNQ: 1000
proxy:
  resources:
    limits:
      cpu: '4.0'
      memory: 4Gi
    requests:
      cpu: '2.0'
      memory: 2Gi
  replicas: 5
queryNode:
  resources:
    limits:
      cpu: '8.0'
      memory: 8Gi
    requests:
      cpu: '4.0'
      memory: 4Gi
  replicas: 2
indexNode:
  resources:
    limits:
      cpu: '4.0'
      memory: 4Gi
    requests:
      cpu: '3.0'
      memory: 3Gi
  replicas: 1
dataNode:
  resources:
    limits:
      cpu: '2.0'
      memory: 2Gi
    requests:
      cpu: '2.0'
      memory: 2Gi
```
fouramf-client-sift-ivfsq8-replica2-search:
```
load_params:
  replica_number: 2
dataset_params:
  dim: 128
  dataset_name: sift
  dataset_size: 5m
  ni_per: 50000
  metric_type: L2
index_params:
  index_type: IVF_SQ8
  index_param:
    nlist: 2048
concurrent_params:
  concurrent_number: 50
  during_time: 2h
  interval: 20
concurrent_tasks:
  - type: search
    weight: 1
    params:
      nq: 1000
      top_k: 10
      search_param:
        nprobe: 64
      timeout: 3600
      random_data: true
```

[Bug]: [benchmark][multi-replicas-loadbalance] The group NQ has a large merger at the beginning of search

### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version: master-20230707-b7740835
- Deployment mode (standalone or cluster): cluster
- MQ type (rocksmq, pulsar or kafka): pulsar
- SDK version (e.g. pymilvus v2.0.0rc2): pymilvus-2.2.13
- OS (Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
```
### Current Behavior
### 1. At the beginning of the search, the grouped NQ is merged into very large batches, which makes the search very slow
### 2. However, the max RT observed at the client is smaller than the max RT reported by the proxy
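For context, the server-side mechanism involved is query-node request grouping: with `queryNode.grouping.maxNQ: 1000`, many small concurrent requests (nq=10 each in this run) can be merged into one large executed group. A rough sketch of such a merge policy — illustrative only, not Milvus's actual implementation:

```python
def group_requests(nqs, max_nq=1000):
    """Merge a stream of per-request nq values into groups capped at max_nq."""
    groups, current, total = [], [], 0
    for nq in nqs:
        if current and total + nq > max_nq:
            groups.append(current)
            current, total = [], 0
        current.append(nq)
        total += nq
    if current:
        groups.append(current)
    return groups

# 200 concurrent requests with nq=10 collapse into 2 groups of total NQ 1000 each
groups = group_requests([10] * 200, max_nq=1000)
print([sum(g) for g in groups])
```

A single large merged group takes longer to execute, and every request folded into it sees that inflated latency — which is the suspected source of the RT spike at the start of the search.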
server argo task: fouramf-4t5bt
client argo task: vector-db-concurrent-778vj
server:
```
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
lb-helm-multi-proxy-etcd-0 1/1 Running 0 3d2h 10.104.19.46 4am-node28 <none> <none>
lb-helm-multi-proxy-etcd-1 1/1 Running 0 3d2h 10.104.22.211 4am-node26 <none> <none>
lb-helm-multi-proxy-etcd-2 1/1 Running 0 3d2h 10.104.16.124 4am-node21 <none> <none>
lb-helm-multi-proxy-milvus-datacoord-7dc68d5546-lvwq2 1/1 Running 0 3d2h 10.104.9.21 4am-node14 <none> <none>
lb-helm-multi-proxy-milvus-datanode-5fc4db6887-zz7pq 1/1 Running 0 3d2h 10.104.19.44 4am-node28 <none> <none>
lb-helm-multi-proxy-milvus-indexcoord-64b4bdd64f-mkmt9 1/1 Running 0 3d2h 10.104.22.207 4am-node26 <none> <none>
lb-helm-multi-proxy-milvus-indexnode-758748d697-lq67m 1/1 Running 0 3d2h 10.104.20.78 4am-node22 <none> <none>
lb-helm-multi-proxy-milvus-proxy-9976677db-2f466 1/1 Running 0 3d2h 10.104.23.169 4am-node27 <none> <none>
lb-helm-multi-proxy-milvus-proxy-9976677db-6dffw 1/1 Running 0 3d2h 10.104.24.63 4am-node29 <none> <none>
lb-helm-multi-proxy-milvus-proxy-9976677db-6m278 1/1 Running 0 3d2h 10.104.19.43 4am-node28 <none> <none>
lb-helm-multi-proxy-milvus-proxy-9976677db-lfwn7 1/1 Running 0 3d2h 10.104.20.75 4am-node22 <none> <none>
lb-helm-multi-proxy-milvus-proxy-9976677db-vpj26 1/1 Running 0 3d2h 10.104.22.208 4am-node26 <none> <none>
lb-helm-multi-proxy-milvus-querycoord-59d9f9765d-zl4zz 1/1 Running 0 3d2h 10.104.24.64 4am-node29 <none> <none>
lb-helm-multi-proxy-milvus-querynode-5b8bccd6b6-4886l 1/1 Running 0 3d2h 10.104.21.225 4am-node24 <none> <none>
lb-helm-multi-proxy-milvus-querynode-5b8bccd6b6-w488j 1/1 Running 0 3d2h 10.104.15.219 4am-node20 <none> <none>
lb-helm-multi-proxy-milvus-rootcoord-5d6cc775f8-dcxll 1/1 Running 0 3d2h 10.104.20.74 4am-node22 <none> <none>
lb-helm-multi-proxy-minio-0 1/1 Running 0 3d2h 10.104.16.119 4am-node21 <none> <none>
lb-helm-multi-proxy-minio-1 1/1 Running 0 3d2h 10.104.22.213 4am-node26 <none> <none>
lb-helm-multi-proxy-minio-2 1/1 Running 0 3d2h 10.104.1.185 4am-node10 <none> <none>
lb-helm-multi-proxy-minio-3 1/1 Running 0 3d2h 10.104.9.24 4am-node14 <none> <none>
lb-helm-multi-proxy-pulsar-bookie-0 1/1 Running 0 3d2h 10.104.16.128 4am-node21 <none> <none>
lb-helm-multi-proxy-pulsar-bookie-1 1/1 Running 0 3d2h 10.104.22.215 4am-node26 <none> <none>
lb-helm-multi-proxy-pulsar-bookie-2 1/1 Running 0 3d2h 10.104.4.42 4am-node11 <none> <none>
lb-helm-multi-proxy-pulsar-bookie-init-ffzj2 0/1 Completed 0 3d2h 10.104.24.61 4am-node29 <none> <none>
lb-helm-multi-proxy-pulsar-broker-0 1/1 Running 0 3d2h 10.104.9.22 4am-node14 <none> <none>
lb-helm-multi-proxy-pulsar-proxy-0 1/1 Running 0 3d2h 10.104.24.62 4am-node29 <none> <none>
lb-helm-multi-proxy-pulsar-pulsar-init-x99kb 0/1 Completed 0 3d2h 10.104.21.224 4am-node24 <none> <none>
lb-helm-multi-proxy-pulsar-recovery-0 1/1 Running 0 3d2h 10.104.16.114 4am-node21 <none> <none>
lb-helm-multi-proxy-pulsar-zookeeper-0 1/1 Running 0 3d2h 10.104.16.121 4am-node21 <none> <none>
lb-helm-multi-proxy-pulsar-zookeeper-1 1/1 Running 0 3d2h 10.104.18.86 4am-node25 <none> <none>
lb-helm-multi-proxy-pulsar-zookeeper-2 1/1 Running 0 3d2h 10.104.15.221 4am-node20 <none> <none>
```
<img width="1867" alt="Screenshot 2023-07-10 16 55 50" src="https://github.com/milvus-io/milvus/assets/26307815/51bbc69f-3d60-4570-80a7-c0b5c161ba53">
<img width="1866" alt="Screenshot 2023-07-10 16 56 38" src="https://github.com/milvus-io/milvus/assets/26307815/f059db40-2be3-4307-889c-ba7da55eb6db">
<img width="1864" alt="Screenshot 2023-07-10 16 56 55" src="https://github.com/milvus-io/milvus/assets/26307815/1c5285d4-6850-4773-80b4-42d5dd287e44">
<img width="1865" alt="Screenshot 2023-07-10 16 57 15" src="https://github.com/milvus-io/milvus/assets/26307815/72b0017a-8501-4f75-91c7-6c613504d481">
client:
**Python multi-process concurrency**
```
[2023-07-10 03:56:29,257] - INFO: ---------------------------------------------- Concurrency task started! ----------------------------------------------- (util_log.py:25)
[2023-07-10 03:56:29,299] - INFO: [MultiProcessConcurrent] Parameters used:
{'database_params': {'metric_type': 'L2', 'dim': 128, 'max_length': 256},
'insert_params': {},
'connection_params': {'secure': False, 'port': 8443},
'collection_params': {'collection_name': 'fouram_VbFuv6W7'},
'index_params': {},
'load_params': {},
'search_params': {},
'concurrent_params': {'concurrent_number': 20,
'during_time': 1800,
'warm_time': 0,
'interval': 20},
'concurrent_tasks': [{'type': 'query',
'weight': 0,
'params': {'expr': 'id in [1, 10, 100, 1000]',
'timeout': 60}},
{'type': 'search',
'weight': 1,
'params': {'nq': 10,
'top_k': 10,
'search_param': {'nprobe': 64},
'timeout': 3600}}]} (util_log.py:25)
[2023-07-10 03:56:29,299] - INFO: [MultiProcessConcurrent] Get multiprocessing start method: forkserver (util_log.py:25)
[2023-07-10 03:56:29,299] - INFO: [MultiProcessConcurrent] Start initializing the concurrent pool (util_log.py:25)
[2023-07-10 03:56:51,305] - INFO: [MultiProcessConcurrent] Start waiting for 20 processes ready: 4.0s (util_log.py:25)
[2023-07-10 03:56:55,309] - INFO: [MultiProcessConcurrent] Start concurrent pool (util_log.py:25)
[2023-07-10 03:56:55,310] - INFO: [ParserResult] Starting sync report, interval:20s, intermediate state results are available for reference (util_log.py:25)
[2023-07-10 03:56:55,332] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:56:55,332] - INFO: --------------------------------------------------------------------------------------------------------------------- (util_log.py:25)
[2023-07-10 03:57:15,376] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:57:15,378] - INFO: search 1000 0(0.00%) | 397 309 479 399 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 03:57:35,431] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:57:35,432] - INFO: search 2000 0(0.00%) | 399 317 481 400 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 03:57:55,483] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:57:55,484] - INFO: search 3000 0(0.00%) | 399 316 484 400 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 03:58:15,521] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:58:15,522] - INFO: search 4000 0(0.00%) | 399 316 480 399 477 | 50.00 0.00 (util_log.py:25)
[2023-07-10 03:58:35,557] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:58:35,558] - INFO: search 5000 0(0.00%) | 399 321 476 400 474 | 50.00 0.00 (util_log.py:25)
[2023-07-10 03:58:55,606] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:58:55,608] - INFO: search 6000 0(0.00%) | 399 321 477 399 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 03:59:15,642] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:59:15,643] - INFO: search 7000 0(0.00%) | 399 318 477 400 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 03:59:35,693] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:59:35,693] - INFO: search 8020 0(0.00%) | 399 316 481 399 477 | 51.00 0.00 (util_log.py:25)
[2023-07-10 03:59:55,737] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 03:59:55,738] - INFO: search 9020 0(0.00%) | 398 319 477 398 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:00:15,771] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:00:15,772] - INFO: search 10020 0(0.00%) | 399 322 477 400 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:00:35,809] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:00:35,810] - INFO: search 11020 0(0.00%) | 399 322 480 400 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:00:55,857] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:00:55,858] - INFO: search 12020 0(0.00%) | 398 320 478 400 477 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:01:15,896] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:01:15,897] - INFO: search 13000 0(0.00%) | 407 316 745 399 725 | 49.00 0.00 (util_log.py:25)
[2023-07-10 04:01:35,936] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:01:35,937] - INFO: search 14000 0(0.00%) | 399 318 480 400 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:01:55,972] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:01:55,973] - INFO: search 15000 0(0.00%) | 399 318 484 400 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:02:16,008] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:02:16,009] - INFO: search 16000 0(0.00%) | 399 314 480 400 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:02:36,052] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:02:36,053] - INFO: search 17000 0(0.00%) | 399 315 481 399 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:02:56,093] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:02:56,094] - INFO: search 18020 0(0.00%) | 399 319 477 400 475 | 51.00 0.00 (util_log.py:25)
[2023-07-10 04:03:16,133] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:03:16,134] - INFO: search 19020 0(0.00%) | 398 318 482 400 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:03:36,164] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:03:36,164] - INFO: search 20020 0(0.00%) | 399 318 478 400 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:03:56,211] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:03:56,213] - INFO: search 21020 0(0.00%) | 399 317 477 400 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:04:16,265] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:04:16,266] - INFO: search 22020 0(0.00%) | 399 315 482 399 466 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:04:36,314] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:04:36,315] - INFO: search 23020 0(0.00%) | 399 316 479 399 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:04:56,354] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:04:56,355] - INFO: search 24020 0(0.00%) | 399 320 479 400 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:05:16,404] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:05:16,405] - INFO: search 25020 0(0.00%) | 399 318 478 399 472 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:05:36,455] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:05:36,456] - INFO: search 26020 0(0.00%) | 398 320 476 399 472 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:05:56,491] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:05:56,492] - INFO: search 27023 0(0.00%) | 398 320 478 399 474 | 50.15 0.00 (util_log.py:25)
[2023-07-10 04:06:16,541] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:06:16,542] - INFO: search 28040 0(0.00%) | 399 317 479 400 471 | 50.85 0.00 (util_log.py:25)
[2023-07-10 04:06:36,589] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:06:36,591] - INFO: search 29040 0(0.00%) | 400 323 477 399 476 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:06:56,624] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:06:56,624] - INFO: search 30028 0(0.00%) | 400 274 518 399 511 | 49.40 0.00 (util_log.py:25)
[2023-07-10 04:07:16,678] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:07:16,680] - INFO: search 31020 0(0.00%) | 401 375 560 399 553 | 49.60 0.00 (util_log.py:25)
[2023-07-10 04:07:36,728] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:07:36,729] - INFO: search 31988 0(0.00%) | 417 282 1224 399 1211 | 48.40 0.00 (util_log.py:25)
[2023-07-10 04:07:56,780] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:07:56,781] - INFO: search 32988 0(0.00%) | 399 374 419 398 416 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:08:16,815] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:08:16,816] - INFO: search 33988 0(0.00%) | 398 381 417 399 413 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:08:36,853] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:08:36,854] - INFO: search 34988 0(0.00%) | 399 379 422 399 416 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:08:56,904] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:08:56,905] - INFO: search 36000 0(0.00%) | 399 382 416 399 413 | 50.60 0.00 (util_log.py:25)
[2023-07-10 04:09:16,953] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:09:16,954] - INFO: search 37000 0(0.00%) | 399 377 419 399 414 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:09:36,988] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:09:36,989] - INFO: search 38000 0(0.00%) | 398 367 432 399 416 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:09:57,020] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:09:57,021] - INFO: search 39000 0(0.00%) | 399 377 421 399 416 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:10:17,061] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:10:17,062] - INFO: search 40000 0(0.00%) | 399 377 417 399 414 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:10:37,095] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:10:37,096] - INFO: search 41008 0(0.00%) | 399 380 420 399 414 | 50.40 0.00 (util_log.py:25)
[2023-07-10 04:10:57,129] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:10:57,130] - INFO: search 42008 0(0.00%) | 399 382 415 399 411 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:11:17,182] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:11:17,183] - INFO: search 43008 0(0.00%) | 399 374 423 399 416 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:11:37,228] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:11:37,229] - INFO: search 44008 0(0.00%) | 399 378 415 399 412 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:11:57,264] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:11:57,265] - INFO: search 45008 0(0.00%) | 399 378 421 399 415 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:12:17,315] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:12:17,316] - INFO: search 46020 0(0.00%) | 399 380 424 399 412 | 50.60 0.00 (util_log.py:25)
[2023-07-10 04:12:37,367] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:12:37,368] - INFO: search 47020 0(0.00%) | 399 375 423 399 418 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:12:57,406] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:12:57,407] - INFO: search 48020 0(0.00%) | 399 376 422 399 415 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:13:17,456] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:13:17,457] - INFO: search 49020 0(0.00%) | 400 330 552 399 463 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:13:37,501] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:13:37,501] - INFO: search 50020 0(0.00%) | 403 323 535 400 515 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:13:57,533] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:13:57,534] - INFO: search 51020 0(0.00%) | 399 319 481 399 473 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:14:17,566] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:14:17,567] - INFO: search 52020 0(0.00%) | 399 324 475 398 467 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:14:37,600] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:14:37,601] - INFO: search 53020 0(0.00%) | 399 322 473 400 470 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:14:57,631] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:14:57,632] - INFO: search 54020 0(0.00%) | 398 317 477 399 471 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:15:17,670] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:15:17,672] - INFO: search 55020 0(0.00%) | 399 317 479 399 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:15:37,711] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:15:37,712] - INFO: search 56020 0(0.00%) | 399 322 476 399 465 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:15:57,745] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:15:57,745] - INFO: search 57020 0(0.00%) | 399 325 475 399 465 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:16:17,795] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:16:17,796] - INFO: search 58020 0(0.00%) | 399 322 480 399 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:16:37,841] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:16:37,842] - INFO: search 59020 0(0.00%) | 399 326 479 398 463 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:16:57,895] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:16:57,896] - INFO: search 60031 0(0.00%) | 399 320 477 399 468 | 50.55 0.00 (util_log.py:25)
[2023-07-10 04:17:17,946] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:17:17,947] - INFO: search 61040 0(0.00%) | 399 323 477 398 466 | 50.45 0.00 (util_log.py:25)
[2023-07-10 04:17:37,988] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:17:37,990] - INFO: search 62040 0(0.00%) | 399 316 477 399 473 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:17:58,023] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:17:58,024] - INFO: search 63040 0(0.00%) | 398 321 476 399 470 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:18:18,076] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:18:18,078] - INFO: search 64040 0(0.00%) | 399 323 476 399 462 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:18:38,123] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:18:38,125] - INFO: search 65040 0(0.00%) | 398 323 476 400 466 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:18:58,170] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:18:58,171] - INFO: search 66040 0(0.00%) | 400 324 475 399 468 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:19:18,222] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:19:18,223] - INFO: search 67040 0(0.00%) | 398 324 475 400 463 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:19:38,268] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:19:38,270] - INFO: search 68041 0(0.00%) | 399 296 508 398 497 | 50.05 0.00 (util_log.py:25)
[2023-07-10 04:19:58,314] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:19:58,315] - INFO: search 69060 0(0.00%) | 398 319 477 399 474 | 50.95 0.00 (util_log.py:25)
[2023-07-10 04:20:18,364] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:20:18,365] - INFO: search 70060 0(0.00%) | 399 315 478 400 462 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:20:38,407] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:20:38,408] - INFO: search 71060 0(0.00%) | 399 319 479 400 473 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:20:58,450] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:20:58,451] - INFO: search 72060 0(0.00%) | 399 320 477 400 473 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:21:18,487] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:21:18,488] - INFO: search 73060 0(0.00%) | 399 327 476 399 469 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:21:38,535] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:21:38,536] - INFO: search 74060 0(0.00%) | 398 320 476 399 474 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:21:58,580] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:21:58,581] - INFO: search 75060 0(0.00%) | 398 323 476 398 472 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:22:18,620] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:22:18,621] - INFO: search 76060 0(0.00%) | 399 317 484 400 466 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:22:38,668] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:22:38,669] - INFO: search 77060 0(0.00%) | 399 322 479 399 472 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:22:58,715] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:22:58,717] - INFO: search 78080 0(0.00%) | 399 320 476 400 471 | 51.00 0.00 (util_log.py:25)
[2023-07-10 04:23:18,753] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:23:18,754] - INFO: search 79080 0(0.00%) | 398 323 476 399 457 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:23:38,782] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:23:38,783] - INFO: search 80080 0(0.00%) | 399 322 476 400 467 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:23:58,809] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:23:58,810] - INFO: search 81080 0(0.00%) | 399 324 476 399 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:24:18,843] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:24:18,844] - INFO: search 82080 0(0.00%) | 399 326 475 399 460 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:24:38,894] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:24:38,895] - INFO: search 83080 0(0.00%) | 399 321 477 400 475 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:24:59,150] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:24:59,151] - INFO: search 84080 0(0.00%) | 402 323 602 398 594 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:25:19,185] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:25:19,187] - INFO: search 85060 0(0.00%) | 407 321 740 400 726 | 49.00 0.00 (util_log.py:25)
[2023-07-10 04:25:39,219] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:25:39,221] - INFO: search 86060 0(0.00%) | 399 326 475 399 459 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:25:59,275] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:25:59,276] - INFO: search 87060 0(0.00%) | 399 321 476 399 469 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:26:19,318] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:26:19,320] - INFO: search 88080 0(0.00%) | 399 325 475 399 457 | 51.00 0.00 (util_log.py:25)
[2023-07-10 04:26:39,350] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:26:39,351] - INFO: search 89080 0(0.00%) | 399 320 478 400 472 | 50.00 0.00 (util_log.py:25)
[2023-07-10 04:26:55,668] - INFO: [MultiProcessConcurrent] End concurrent pool (util_log.py:25)
[2023-07-10 04:26:55,707] - INFO: ------------------------------------------------- Print final status ------------------------------------------------ (util_log.py:25)
[2023-07-10 04:26:55,707] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:26:55,724] - INFO: search 89900 0(0.00%) | 399 274 1224 399 474 | 49.94 0.00 (util_log.py:25)
[2023-07-10 04:26:55,724] - INFO: ------------------------ Print the status without start and end warmup time:0s as a reference ----------------------- (util_log.py:25)
[2023-07-10 04:26:55,725] - INFO: Name # reqs # fails | Avg Min Max Median TP99 | req/s failures/s (util_log.py:25)
[2023-07-10 04:26:55,739] - INFO: search 89900 0(0.00%) | 399 274 1224 399 474 | 49.94 0.00 (util_log.py:25)
[2023-07-10 04:26:55,739] - INFO: [ParserResult] Completed sync report (util_log.py:25)
[2023-07-10 04:26:55,950] - INFO: [MultiProcessConcurrent] Summary of overall results:
{'reqs': 89900,
'rps': 49.9351,
'total_time_s': 1800.3351,
'avg_time_ms': 399.6068,
'min_time_ms': 274.7896,
'max_time_ms': 1224.6462,
'median_time_ms': 399.6312,
'p95_time_ms': 457.3961,
'p99_time_ms': 474.754,
'sum_all_rt_ms': 35924648.072} (util_log.py:25)
[2023-07-10 04:26:57,871] - INFO: ---------------------------------------------- Concurrency task finished! ---------------------------------------------- (util_log.py:25)
```
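The final summary above reports reqs/avg/min/max/median/p95/p99 over the per-request latencies. As a rough sketch of how such a summary can be derived from raw latencies (a hypothetical helper using nearest-rank percentiles, not the benchmark tool's own code):

```python
def summarize(latencies_ms):
    """Aggregate raw per-request latencies into the stats the report prints."""
    xs = sorted(latencies_ms)

    def pct(p):
        # Nearest-rank percentile over the sorted sample.
        rank = max(1, min(len(xs), round(p / 100 * len(xs))))
        return xs[rank - 1]

    return {
        "reqs": len(xs),
        "avg_ms": sum(xs) / len(xs),
        "min_ms": xs[0],
        "max_ms": xs[-1],
        "median_ms": pct(50),
        "p95_ms": pct(95),
        "p99_ms": pct(99),
    }
```

With 89900 samples this reproduces the shape of the `Summary of overall results` block, though the exact percentile convention used by the tool may differ.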
### Expected Behavior
_No response_
### Steps To Reproduce
```markdown
1. deploy a Milvus cluster with 5 proxies
2. prepare 5m data
3. concurrent search via vectordb-benchmark <- python multi-process concurrency
```
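The multi-process concurrency driver referenced in step 3 follows roughly this shape; `do_search` below is a stand-in stub (a real driver would issue a Milvus search with nq=1000, nprobe=64 here), and the worker/request counts are illustrative only:

```python
import multiprocessing as mp
import time

def do_search(_i):
    # Stub for one search request; a real driver would call the Milvus client
    # here and return the measured round-trip latency.
    t0 = time.perf_counter()
    time.sleep(0.001)
    return (time.perf_counter() - t0) * 1000.0  # latency in ms

def worker(n_requests, out_queue):
    # Each process issues its requests sequentially and reports all latencies.
    out_queue.put([do_search(i) for i in range(n_requests)])

def run(concurrency=4, requests_per_worker=10):
    q = mp.Queue()
    procs = [mp.Process(target=worker, args=(requests_per_worker, q))
             for _ in range(concurrency)]
    for p in procs:
        p.start()
    latencies = [ms for _ in procs for ms in q.get()]
    for p in procs:
        p.join()
    return latencies
```

The periodic `Name # reqs # fails | ...` lines in the log correspond to an interval report over the latencies each worker has produced so far.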
### Milvus Log
_No response_
### Anything else?
fouramf-server-replicas-lb-2qn-5proxy:
```
extraConfigFiles:
  user.yaml: |+
    queryNode:
      grouping:
        maxNQ: 1000
proxy:
  resources:
    limits:
      cpu: '4.0'
      memory: 4Gi
    requests:
      cpu: '2.0'
      memory: 2Gi
  replicas: 5
queryNode:
  resources:
    limits:
      cpu: '8.0'
      memory: 8Gi
    requests:
      cpu: '4.0'
      memory: 4Gi
  replicas: 2
indexNode:
  resources:
    limits:
      cpu: '4.0'
      memory: 4Gi
    requests:
      cpu: '3.0'
      memory: 3Gi
  replicas: 1
dataNode:
  resources:
    limits:
      cpu: '2.0'
      memory: 2Gi
    requests:
      cpu: '2.0'
      memory: 2Gi
```
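The `queryNode.grouping.maxNQ: 1000` setting above bounds how much total nq may be merged into one grouped search on the query node. As a toy illustration of that kind of greedy merging (an assumption for exposition only — not Milvus's actual grouping code):

```python
def group_requests(nqs, max_nq=1000):
    """Greedily pack consecutive request nq sizes into groups whose total nq <= max_nq."""
    groups, current, total = [], [], 0
    for nq in nqs:
        if current and total + nq > max_nq:
            groups.append(current)
            current, total = [], 0
        current.append(nq)
        total += nq
    if current:
        groups.append(current)
    return groups
```

Under this sketch, a burst of queued requests at the start of a run merges into larger groups than a steady stream does, which is consistent with the large merge observed at the beginning of the search.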
fouramf-client-sift-ivfsq8-replica2-search:
```
load_params:
  replica_number: 2
dataset_params:
  dim: 128
  dataset_name: sift
  dataset_size: 5m
  ni_per: 50000
  metric_type: L2
index_params:
  index_type: IVF_SQ8
  index_param:
    nlist: 2048
concurrent_params:
  concurrent_number: 50
  during_time: 2h
  interval: 20
concurrent_tasks:
  - type: search
    weight: 1
    params:
      nq: 1000
      top_k: 10
      search_param:
        nprobe: 64
      timeout: 3600
      random_data: true
``` | test | the group nq has a large merger at the beginning of search is there an existing issue for this i have searched the existing issues environment markdown milvus version master deployment mode standalone or cluster cluster mq type rocksmq pulsar or kafka pulsar sdk version e g pymilvus pymilvus os ubuntu or centos cpu memory gpu others current behavior at the beginning of the search the group nq has a large merger which makes the search very slow but the maxrt of the client is smaller than the maxrt of the proxy server argo task fouramf client argo task vector db concurrent server name ready status restarts age ip node nominated node readiness gates lb helm multi proxy etcd running lb helm multi proxy etcd running lb helm multi proxy etcd running lb helm multi proxy milvus datacoord running lb helm multi proxy milvus datanode running lb helm multi proxy milvus indexcoord running lb helm multi proxy milvus indexnode running lb helm multi proxy milvus proxy running lb helm multi proxy milvus proxy running lb helm multi proxy milvus proxy running lb helm multi proxy milvus proxy running lb helm multi proxy milvus proxy running lb helm multi proxy milvus querycoord running lb helm multi proxy milvus querynode running lb helm multi proxy milvus querynode running lb helm multi proxy milvus rootcoord dcxll running lb helm multi proxy minio running lb helm multi proxy minio running lb helm multi proxy minio running lb helm multi proxy minio running lb helm multi proxy pulsar bookie running lb helm multi proxy pulsar bookie running lb helm multi proxy pulsar bookie running lb helm multi proxy pulsar bookie init completed lb helm multi proxy pulsar broker running lb helm multi proxy pulsar proxy running lb helm multi proxy pulsar pulsar init completed lb helm multi proxy pulsar recovery running lb helm multi proxy pulsar zookeeper running lb helm multi proxy pulsar zookeeper running lb helm multi proxy pulsar zookeeper running img width alt src img width alt src 
img width alt src img width alt src client python multi process concurrency info concurrency task started util log py info parameters used database params metric type dim max length insert params connection params secure false port collection params collection name fouram index params load params search params concurrent params concurrent number during time warm time interval concurrent tasks type query weight params expr id in timeout type search weight params nq top k search param nprobe timeout util log py info get multiprocessing start method forkserver util log py info start initializing the concurrent pool util log py info start waiting for processes ready util log py info start concurrent pool util log py info starting sync report interval intermediate state results are available for reference util log py info name reqs fails avg min max median req s failures s util log py info util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log 
py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs 
fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util 
log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs 
fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info name reqs fails avg min max median req s failures s util log py info search util log py info end concurrent pool util log py info print final status util log py info name reqs fails avg min max median req s failures s util log py info search util log py info print the status without start and end warmup time as a reference util log py info name reqs fails avg min max median req s failures s util log py info search util log py info completed sync report util log py info summary of overall results reqs rps total time s avg time ms min time ms max time ms median time ms time ms time ms sum all rt ms util log py info concurrency task finished util log py expected behavior no response steps to reproduce markdown 、deploy cluster milvus with proxys 、prepare data 、concurrent search by vectordb benchmark 《 python multi process concurrency milvus log no response anything else fouramf server 
replicas lb extraconfigfiles user yaml querynode grouping maxnq proxy resources limits cpu memory requests cpu memory replicas querynode resources limits cpu memory requests cpu memory replicas indexnode resources limits cpu memory requests cpu memory replicas datanode resources limits cpu memory requests cpu memory fouramf client sift search load params replica number dataset params dim dataset name sift dataset size ni per metric type index params index type ivf index param nlist concurrent params concurrent number during time interval concurrent tasks type search weight params nq top k search param nprobe timeout random data true | 1 |
184,255 | 14,973,342,194 | IssuesEvent | 2021-01-28 00:51:08 | linz/building-detection | https://api.github.com/repos/linz/building-detection | closed | Setup repo issue templates | awaiting review documentation | #### Task
To ensure all issues (epics, sub-tasks and bugs) are created in consistent and complete manner issue templates must be created
#### Definition of Done
once issue templates are created for:
- [ ] Epics
- [ ] Sub-tasks
- [ ] Bugs
#### Out of Scope
only the above templates are in scope
#### Discussion required
Who is going to review these templates?
To get around the review issue for this task previous projects templates (gazetteer repo?) that have been previously reviewed could be used for this project
However review will be an ongoing project going forward.
I will open an issue for peer review as well as raise at the next topo-data standup
<!-- Add an _Assignee_, _Epics_, _Estimate_ and any relevant _Labels_ -->
| 1.0 | Setup repo issue templates - #### Task
To ensure all issues (epics, sub-tasks and bugs) are created in consistent and complete manner issue templates must be created
#### Definition of Done
once issue templates are created for:
- [ ] Epics
- [ ] Sub-tasks
- [ ] Bugs
#### Out of Scope
only the above templates are in scope
#### Discussion required
Who is going to review these templates?
To get around the review issue for this task previous projects templates (gazetteer repo?) that have been previously reviewed could be used for this project
However review will be an ongoing project going forward.
I will open an issue for peer review as well as raise at the next topo-data standup
<!-- Add an _Assignee_, _Epics_, _Estimate_ and any relevant _Labels_ -->
| non_test | setup repo issue templates task to ensure all issues epics sub tasks and bugs are created in consistent and complete manner issue templates must be created definition of done once issue templates are created for epics sub tasks bugs out of scope only the above templates are in scope discussion required who is going to review these templates to get around the review issue for this task previous projects templates gazetteer repo that have been previously reviewed could be used for this project however review will be an ongoing project going forward i will open an issue for peer review as well as raise at the next topo data standup | 0 |
165,248 | 6,265,939,527 | IssuesEvent | 2017-07-16 21:37:35 | loomio/loomio | https://api.github.com/repos/loomio/loomio | closed | Can't get to front page | Priority: Severe | I've signed out (which seemed to take a couple of tries). Now when I visit loomio.org I'm redirected to the dashboard with a sign-in modal:

| 1.0 | Can't get to front page - I've signed out (which seemed to take a couple of tries). Now when I visit loomio.org I'm redirected to the dashboard with a sign-in modal:

| non_test | can t get to front page i ve signed out which seemed to take a couple of tries now when i visit loomio org i m redirected to the dashboard with a sign in modal | 0 |
305,744 | 26,408,463,187 | IssuesEvent | 2023-01-13 10:07:49 | paradigmxyz/reth | https://api.github.com/repos/paradigmxyz/reth | closed | Tracking: Eth chain tests | C-tracking-issue C-test |
Run all chain tests from eth/test: https://github.com/ethereum/tests/tree/develop/BlockchainTests
This is one of two ways to check if a client is consistent with ethereum. (Second one is running on mainnet)
# Run test
- [x] Parse json files: https://github.com/foundry-rs/reth/pull/38
- [ ] Transfer json models to `primitives` and load pre-state to mocked database.
- [ ] Run on stages (maybe modify it if needed) and prepare checks (roots,hashes) that are going to be optional/disabled at first.
# Mocking
For running tests in stages we wouldn't need inmemory database that would allow us to do that.
- [ ] Define and do cleanup on talked db abstraction.
- [ ] Integrate abstraction inside stages.
- [ ] Mock Database/Transaction interface with `BTreeMap`. | 1.0 | Tracking: Eth chain tests -
Run all chain tests from eth/test: https://github.com/ethereum/tests/tree/develop/BlockchainTests
This is one of two ways to check if a client is consistent with ethereum. (Second one is running on mainnet)
# Run test
- [x] Parse json files: https://github.com/foundry-rs/reth/pull/38
- [ ] Transfer json models to `primitives` and load pre-state to mocked database.
- [ ] Run on stages (maybe modify it if needed) and prepare checks (roots,hashes) that are going to be optional/disabled at first.
# Mocking
For running tests in stages we wouldn't need inmemory database that would allow us to do that.
- [ ] Define and do cleanup on talked db abstraction.
- [ ] Integrate abstraction inside stages.
- [ ] Mock Database/Transaction interface with `BTreeMap`. | test | tracking eth chain tests run all chain tests from eth test this is one of two ways to check if a client is consistent with ethereum second one is running on mainnet run test parse json files transfer json models to primitives and load pre state to mocked database run on stages maybe modify it if needed and prepare checks roots hashes that are going to be optional disabled at first mocking for running tests in stages we wouldn t need inmemory database that would allow us to do that define and do cleanup on talked db abstraction integrate abstraction inside stages mock database transaction interface with btreemap | 1
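The last bullet of the row above — mocking the Database/Transaction interface with a `BTreeMap` — is easy to sketch. reth is a Rust codebase, where `BTreeMap` is the standard ordered map; purely as an illustration, the same idea in Python uses a dict-backed in-memory store behind a minimal interface. All names here (`Database`, `MockDatabase`, `put`/`get`/`delete`) are hypothetical stand-ins, not reth's actual API.

```python
# Illustration only: reth would mock its database trait with a Rust
# BTreeMap; this sketch shows the same idea with a dict-backed in-memory
# store behind a minimal key-value interface that a "stage" could depend on.
from abc import ABC, abstractmethod


class Database(ABC):
    """Minimal key-value database interface (hypothetical)."""

    @abstractmethod
    def put(self, key, value): ...

    @abstractmethod
    def get(self, key): ...

    @abstractmethod
    def delete(self, key): ...


class MockDatabase(Database):
    """In-memory stand-in: tests run stages against this, not a real DB."""

    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

    def delete(self, key):
        self._store.pop(key, None)

    def keys_sorted(self):
        # A BTreeMap iterates in key order; a dict needs an explicit sort.
        return sorted(self._store)
```

Code that only talks to the `Database` interface can then be exercised against the mock, which is the point of the tracking issue's mocking section.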
145,123 | 19,319,949,855 | IssuesEvent | 2021-12-14 03:32:47 | Sprinkle42/jenkins | https://api.github.com/repos/Sprinkle42/jenkins | opened | CVE-2021-3807 (High) detected in ansi-regex-5.0.0.tgz | security vulnerability | ## CVE-2021-3807 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-regex-5.0.0.tgz</b></p></summary>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz</a></p>
<p>Path to dependency file: jenkins/war/package.json</p>
<p>Path to vulnerable library: jenkins/war/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.1.0.tgz (Root Library)
- strip-ansi-6.0.0.tgz
- :x: **ansi-regex-5.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Sprinkle42/jenkins/commit/9a9fb6059028eaf0b29dacacd5d944a4af38d15c">9a9fb6059028eaf0b29dacacd5d944a4af38d15c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ansi-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-3807 (High) detected in ansi-regex-5.0.0.tgz - ## CVE-2021-3807 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-regex-5.0.0.tgz</b></p></summary>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz</a></p>
<p>Path to dependency file: jenkins/war/package.json</p>
<p>Path to vulnerable library: jenkins/war/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.1.0.tgz (Root Library)
- strip-ansi-6.0.0.tgz
- :x: **ansi-regex-5.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Sprinkle42/jenkins/commit/9a9fb6059028eaf0b29dacacd5d944a4af38d15c">9a9fb6059028eaf0b29dacacd5d944a4af38d15c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ansi-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in ansi regex tgz cve high severity vulnerability vulnerable library ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file jenkins war package json path to vulnerable library jenkins war node modules ansi regex package json dependency hierarchy eslint tgz root library strip ansi tgz x ansi regex tgz vulnerable library found in head commit a href found in base branch master vulnerability details ansi regex is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ansi regex step up your open source security game with whitesource | 0 |
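The report above lists the CVSS 3 base metrics (AV:N / AC:L / PR:N / UI:N / S:U / C:N / I:N / A:H) next to its 7.5 score, and the score follows mechanically from the published CVSS v3.0 base-score formula. A sketch of that arithmetic, for the scope-unchanged case only:

```python
# Reproducing the 7.5 base score above from the listed metrics, using the
# CVSS v3.0 base-score formula for Scope: Unchanged. The metric weights in
# the comment are the published CVSS v3.0 constants.
import math


def roundup(x):
    # CVSS rounds scores up to one decimal place.
    return math.ceil(x * 10) / 10


def cvss3_base(av, ac, pr, ui, c, i, a):
    """CVSS v3.0 base score, Scope: Unchanged only."""
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))


# Weights: AV Network=0.85, AC Low=0.77, PR None=0.85, UI None=0.85,
# C/I/A: None=0.0, Low=0.22, High=0.56.
score = cvss3_base(av=0.85, ac=0.77, pr=0.85, ui=0.85,
                   c=0.0, i=0.0, a=0.56)  # → 7.5
```

The same function reproduces the 9.4 that appears further down in this dump (C:H / I:H / A:L with the same exploitability metrics).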
825,142 | 31,274,929,431 | IssuesEvent | 2023-08-22 05:15:10 | CodeWithAloha/Hawaii-Zoning-Atlas | https://api.github.com/repos/CodeWithAloha/Hawaii-Zoning-Atlas | opened | 💡 [REQUEST] - Minify geojson files | enhancement :magic_wand: good first issue :handshake: dev :gorilla: high priority 🫡 perf :runner: | ### Start Date
8/21/23
### Implementation PR
_No response_
### Reference Issues
_No response_
### Summary
Several of the geojson files we use are needlessly detailed and slowing down performance. There are two basic ways to reduce a spatial file:
1) [Dissolve](https://geopandas.org/en/stable/docs/user_guide/aggregation_with_dissolve.html) many polygons into fewer polygons (sometimes just one), eliminating internal lines.
2) [Simplify lines](https://bost.ocks.org/mike/simplify/) by removing points that don't make a discernable difference at our map's scale. In some cases that can be 90% or more!
In the examples section I list several geojson files that could benefit from one or both of these treatments. For dissolve, you can use built-in geopandas (Python, see link above). For simplifying lines, I recommend Visvalingam’s algorithm, which is not implemented in geopandas, but is in this handy online interface: https://mapshaper.org/
### Basic Example
- [ ] data/counties.geojson (4 MB) - simplify lines
- [ ] data/dhhl-land-min.geojson (4 MB) - dissolve to 1 polygon and simplify lines
- [ ] data/hydro.min.geojson (11 MB) - dissolve to 1 polygon and simplify lines
- [ ] data/rail-transit-line.geojson (2.6 MB) - dissolve to 1 polygon and simplify lines
I think it's possible to reduce files <= 4 MB down to 250 KB or less, and the hydro file to < 1 MB with minimal loss of visual detail.
### Drawbacks
We may lose a bit of visual detail, but in theory the loading time and lagginess would be cut by 40%--well worth the tradeoff.
### Unresolved questions
We are trying to preserve final.geojson at maximum detail, but it is the biggest file (23 MB). If performance problems persist, we may need to reduce that as well. | 1.0 | 💡 [REQUEST] - Minify geojson files - ### Start Date
8/21/23
### Implementation PR
_No response_
### Reference Issues
_No response_
### Summary
Several of the geojson files we use are needlessly detailed and slowing down performance. There are two basic ways to reduce a spatial file:
1) [Dissolve](https://geopandas.org/en/stable/docs/user_guide/aggregation_with_dissolve.html) many polygons into fewer polygons (sometimes just one), eliminating internal lines.
2) [Simplify lines](https://bost.ocks.org/mike/simplify/) by removing points that don't make a discernable difference at our map's scale. In some cases that can be 90% or more!
In the examples section I list several geojson files that could benefit from one or both of these treatments. For dissolve, you can use built-in geopandas (Python, see link above). For simplifying lines, I recommend Visvalingam’s algorithm, which is not implemented in geopandas, but is in this handy online interface: https://mapshaper.org/
### Basic Example
- [ ] data/counties.geojson (4 MB) - simplify lines
- [ ] data/dhhl-land-min.geojson (4 MB) - dissolve to 1 polygon and simplify lines
- [ ] data/hydro.min.geojson (11 MB) - dissolve to 1 polygon and simplify lines
- [ ] data/rail-transit-line.geojson (2.6 MB) - dissolve to 1 polygon and simplify lines
I think it's possible to reduce files <= 4 MB down to 250 KB or less, and the hydro file to < 1 MB with minimal loss of visual detail.
### Drawbacks
We may lose a bit of visual detail, but in theory the loading time and lagginess would be cut by 40%--well worth the tradeoff.
### Unresolved questions
We are trying to preserve final.geojson at maximum detail, but it is the biggest file (23 MB). If performance problems persist, we may need to reduce that as well. | non_test | 💡 minify geojson files start date implementation pr no response reference issues no response summary several of the geojson files we use are needlessly detailed and slowing down performance there are two basic ways to reduce a spatial file many polygons into fewer polygons sometimes just one eliminating internal lines by removing points that don t make a discernable difference at our map s scale in some cases that can be or more in the examples section i list several geojson files that could benefit from one or both of these treatments for dissolve you can use built in geopandas python see link above for simplifying lines i recommend visvalingam’s algorithm which is not implemented in geopandas but is in this handy online interface basic example data counties geojson mb simplify lines data dhhl land min geojson mb dissolve to polygon and simplify lines data hydro min geojson mb dissolve to polygon and simplify lines data rail transit line geojson mb dissolve to polygon and simplify lines i think it s possible to reduce files mb down to kb or less and the hydro file to mb with minimal loss of visual detail drawbacks we may lose a bit of visual detail but in theory the loading time and lagginess would be cut by well worth the tradeoff unresolved questions we are trying to preserve final geojson at maximum detail but it is the biggest file mb if performance problems persist we may need to reduce that as well | 0 |
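The issue above recommends Visvalingam's algorithm (what mapshaper uses) for the line-simplification half of the work. In practice you would run mapshaper or geopandas' `simplify()` on the GeoJSON itself; the dependency-free sketch below only illustrates the core idea — repeatedly delete the interior vertex whose triangle with its two neighbours has the smallest area:

```python
# Minimal Visvalingam sketch: drop the interior vertex contributing the
# least "effective area" until only `keep` vertices remain. This is the
# O(n^2) naive form; real tools use a heap and recompute neighbours only.


def triangle_area(a, b, c):
    # Twice the signed area via the cross product, halved and made positive.
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2


def visvalingam(points, keep):
    """Simplify an open polyline down to `keep` vertices (endpoints kept)."""
    pts = list(points)
    while len(pts) > max(keep, 2):
        areas = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        smallest = areas.index(min(areas)) + 1
        del pts[smallest]
    return pts
```

Vertices on near-straight segments (tiny triangles) go first, which is why the algorithm can often remove most points of an over-detailed boundary with little visible change at map scale.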
179,350 | 21,566,745,470 | IssuesEvent | 2022-05-02 00:03:12 | drakeg/udemy_django_vue | https://api.github.com/repos/drakeg/udemy_django_vue | closed | CVE-2021-23343 (High) detected in path-parse-1.0.6.tgz - autoclosed | security vulnerability | ## CVE-2021-23343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>path-parse-1.0.6.tgz</b></p></summary>
<p>Node.js path.parse() ponyfill</p>
<p>Library home page: <a href="https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz">https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz</a></p>
<p>
Dependency Hierarchy:
<p>Found in HEAD commit: <a href="https://github.com/drakeg/udemy_django_vue/commit/4ec6f55e03b63785c5b7b8e39eba942b3c9f2ae8">4ec6f55e03b63785c5b7b8e39eba942b3c9f2ae8</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package path-parse are vulnerable to Regular Expression Denial of Service (ReDoS) via splitDeviceRe, splitTailRe, and splitPathRe regular expressions. ReDoS exhibits polynomial worst-case time complexity.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23343>CVE-2021-23343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jbgutierrez/path-parse/issues/8">https://github.com/jbgutierrez/path-parse/issues/8</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: 1.0.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-23343 (High) detected in path-parse-1.0.6.tgz - autoclosed - ## CVE-2021-23343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>path-parse-1.0.6.tgz</b></p></summary>
<p>Node.js path.parse() ponyfill</p>
<p>Library home page: <a href="https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz">https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz</a></p>
<p>
Dependency Hierarchy:
<p>Found in HEAD commit: <a href="https://github.com/drakeg/udemy_django_vue/commit/4ec6f55e03b63785c5b7b8e39eba942b3c9f2ae8">4ec6f55e03b63785c5b7b8e39eba942b3c9f2ae8</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package path-parse are vulnerable to Regular Expression Denial of Service (ReDoS) via splitDeviceRe, splitTailRe, and splitPathRe regular expressions. ReDoS exhibits polynomial worst-case time complexity.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23343>CVE-2021-23343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jbgutierrez/path-parse/issues/8">https://github.com/jbgutierrez/path-parse/issues/8</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: 1.0.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in path parse tgz autoclosed cve high severity vulnerability vulnerable library path parse tgz node js path parse ponyfill library home page a href dependency hierarchy found in head commit a href found in base branch master vulnerability details all versions of package path parse are vulnerable to regular expression denial of service redos via splitdevicere splittailre and splitpathre regular expressions redos exhibits polynomial worst case time complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
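Both regex advisories in this dump (ansi-regex above and this path-parse one) are the same class of bug: a pattern whose quantified groups can match the same text in several different ways, so a failing input forces super-linear backtracking. The usual fix is an unambiguous (or bounded) rewrite — ansi-regex 5.0.1, for instance, bounds its repetitions. The patterns below are illustrative, not the actual path-parse or ansi-regex sources:

```python
# Contrast an ambiguous pattern with an unambiguous rewrite that matches
# the same strings. On a long failing input like "aaaa...b", the ambiguous
# one must try every way of carving the run of "a"s into 1s and 2s
# (exponentially many), while the rewrite fails in linear time.
# Keep inputs tiny here; do not feed the ambiguous pattern long inputs.
import re

ambiguous = re.compile(r"^(?:a|aa)+$")
unambiguous = re.compile(r"^a+$")


def same_verdict(s):
    """Both patterns accept/reject the same strings; only speed differs."""
    return bool(ambiguous.match(s)) == bool(unambiguous.match(s))
```

The takeaway for ReDoS fixes is that you can usually change a pattern's worst-case complexity without changing the language it matches.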
22,757 | 11,782,567,738 | IssuesEvent | 2020-03-17 02:24:34 | mozilla/bugbug | https://api.github.com/repos/mozilla/bugbug | closed | Perform HTTP service's workers setup steps in parallel | enhancement http_service | We currently clone mozilla-central and download one DB after the other, we can probably parallelize some of these steps to make the setup faster (see also https://github.com/mozilla/bugbug/pull/1320#discussion_r385211797). | 1.0 | Perform HTTP service's workers setup steps in parallel - We currently clone mozilla-central and download one DB after the other, we can probably parallelize some of these steps to make the setup faster (see also https://github.com/mozilla/bugbug/pull/1320#discussion_r385211797). | non_test | perform http service s workers setup steps in parallel we currently clone mozilla central and download one db after the other we can probably parallelize some of these steps to make the setup faster see also | 0 |
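The bugbug row above asks for the clone and the DB downloads to run concurrently instead of one after the other; bugbug is a Python project, so the standard library covers this. A minimal sketch — the step functions and DB names below are placeholders, not bugbug's real API:

```python
# Sketch of parallelizing independent, I/O-bound setup steps with a thread
# pool. clone_mozilla_central() and download_db() are stand-ins for the
# real clone/download work; the DB names are made up for illustration.
from concurrent.futures import ThreadPoolExecutor


def clone_mozilla_central():
    return "mozilla-central ready"


def download_db(name):
    return f"{name} downloaded"


def setup():
    dbs = ["commits", "bugs", "test_scheduling"]
    with ThreadPoolExecutor() as pool:
        clone = pool.submit(clone_mozilla_central)
        downloads = [pool.submit(download_db, db) for db in dbs]
        # I/O-bound steps now overlap instead of serializing; .result()
        # gathers them in submit order once all futures complete.
        return [clone.result()] + [f.result() for f in downloads]
```

Steps with dependencies between them would still need ordering (e.g. by chaining futures), which is presumably why only *some* of the setup can be parallelized.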
177,903 | 13,752,566,640 | IssuesEvent | 2020-10-06 14:39:50 | CityofToronto/bdit_flashcrow | https://api.github.com/repos/CityofToronto/bdit_flashcrow | opened | Improve unit, DB, and API test coverage | dev feature tests | **Description**
[CI / CD pipeline 170](https://gitlab.bdit.intra.prod-toronto.ca/move-team/bdit_flashcrow/-/jobs/170) failed due to not meeting coverage targets. This task aims to improve test coverage in advance of launch.
**Acceptance Criteria**
- [ ] branch coverage is above 62%;
- [ ] `lib/controller` coverage targets are set to at least 50%, and are met;
- [ ] `lib/db` coverage is extended where possible given remaining time.
**Additional Notes**
This will likely be done in the course of completing other tasks, with a larger sweep towards the end of pre-beta work.
| 1.0 | Improve unit, DB, and API test coverage - **Description**
[CI / CD pipeline 170](https://gitlab.bdit.intra.prod-toronto.ca/move-team/bdit_flashcrow/-/jobs/170) failed due to not meeting coverage targets. This task aims to improve test coverage in advance of launch.
**Acceptance Criteria**
- [ ] branch coverage is above 62%;
- [ ] `lib/controller` coverage targets are set to at least 50%, and are met;
- [ ] `lib/db` coverage is extended where possible given remaining time.
**Additional Notes**
This will likely be done in the course of completing other tasks, with a larger sweep towards the end of pre-beta work.
| test | improve unit db and api test coverage description failed due to not meeting coverage targets this task aims to improve test coverage in advance of launch acceptance criteria branch coverage is above lib controller coverage targets are set to at least and are met lib db coverage is extended where possible given remaining time additional notes this will likely be done in the course of completing other tasks with a larger sweep towards the end of pre beta work | 1 |
352,614 | 10,544,152,525 | IssuesEvent | 2019-10-02 16:19:00 | AY1920S1-CS2103T-T11-4/main | https://api.github.com/repos/AY1920S1-CS2103T-T11-4/main | closed | Update About Us Page | priority.High type.Task | From TP week 7 :
About Us page: This page is used for module admin purposes. Please follow the format closely or else our scripts will not be able to give credit for your work.
Replace info of SE-EDU developers with info of your team. Include a suitable photo as described here.
- Including the name/photo of the supervisor/lecturer is optional.
- The filename of the profile photo (even a placeholder image) should be doc/images/githbub_username_in_lower_case.png e.g. docs/images/damithc.png. If you photo is in jpg format, name the file as .png anyway.
- Indicate the different roles played and responsibilities held by each team member. You can reassign these roles and responsibilities (as explained in Admin Project Scope) later in the project, if necessary. | 1.0 | Update About Us Page - From TP week 7 :
About Us page: This page is used for module admin purposes. Please follow the format closely or else our scripts will not be able to give credit for your work.
Replace info of SE-EDU developers with info of your team. Include a suitable photo as described here.
- Including the name/photo of the supervisor/lecturer is optional.
- The filename of the profile photo (even a placeholder image) should be doc/images/githbub_username_in_lower_case.png e.g. docs/images/damithc.png. If you photo is in jpg format, name the file as .png anyway.
- Indicate the different roles played and responsibilities held by each team member. You can reassign these roles and responsibilities (as explained in Admin Project Scope) later in the project, if necessary. | non_test | update about us page from tp week about us page this page is used for module admin purposes please follow the format closely or else our scripts will not be able to give credit for your work replace info of se edu developers with info of your team include a suitable photo as described here including the name photo of the supervisor lecturer is optional the filename of the profile photo even a placeholder image should be doc images githbub username in lower case png e g docs images damithc png if you photo is in jpg format name the file as png anyway indicate the different roles played and responsibilities held by each team member you can reassign these roles and responsibilities as explained in admin project scope later in the project if necessary | 0 |
160,134 | 20,099,611,464 | IssuesEvent | 2022-02-07 01:14:40 | Rossb0b/Custom_Agile_ToDo_frontend | https://api.github.com/repos/Rossb0b/Custom_Agile_ToDo_frontend | closed | CVE-2021-31597 (High) detected in xmlhttprequest-ssl-1.5.5.tgz - autoclosed | security vulnerability | ## CVE-2021-31597 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- karma-4.1.0.tgz (Root Library)
- socket.io-2.1.1.tgz
- socket.io-client-2.1.1.tgz
- engine.io-client-3.2.1.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The xmlhttprequest-ssl package before 1.6.1 for Node.js disables SSL certificate validation by default, because rejectUnauthorized (when the property exists but is undefined) is considered to be false within the https.request function of Node.js. In other words, no certificate is ever rejected.
<p>Publish Date: 2021-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31597>CVE-2021-31597</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597</a></p>
<p>Release Date: 2021-04-23</p>
<p>Fix Resolution: xmlhttprequest-ssl - 1.6.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-31597 (High) detected in xmlhttprequest-ssl-1.5.5.tgz - autoclosed - ## CVE-2021-31597 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- karma-4.1.0.tgz (Root Library)
- socket.io-2.1.1.tgz
- socket.io-client-2.1.1.tgz
- engine.io-client-3.2.1.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The xmlhttprequest-ssl package before 1.6.1 for Node.js disables SSL certificate validation by default, because rejectUnauthorized (when the property exists but is undefined) is considered to be false within the https.request function of Node.js. In other words, no certificate is ever rejected.
<p>Publish Date: 2021-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31597>CVE-2021-31597</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597</a></p>
<p>Release Date: 2021-04-23</p>
<p>Fix Resolution: xmlhttprequest-ssl - 1.6.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in xmlhttprequest ssl tgz autoclosed cve high severity vulnerability vulnerable library xmlhttprequest ssl tgz xmlhttprequest for node library home page a href path to dependency file package json path to vulnerable library node modules xmlhttprequest ssl package json dependency hierarchy karma tgz root library socket io tgz socket io client tgz engine io client tgz x xmlhttprequest ssl tgz vulnerable library vulnerability details the xmlhttprequest ssl package before for node js disables ssl certificate validation by default because rejectunauthorized when the property exists but is undefined is considered to be false within the https request function of node js in other words no certificate is ever rejected publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution xmlhttprequest ssl step up your open source security game with whitesource | 0 |
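The xmlhttprequest-ssl bug above is a default-handling pitfall: an *absent* `rejectUnauthorized` option was treated as false, silently disabling certificate checks. The Python sketch below shows the pattern and the shape of the 1.6.1-style fix — default a security option to the safe value and only disable on an explicit opt-out (function names are illustrative):

```python
# CVE-2021-31597 in miniature: truthiness of a missing option vs. an
# explicit safe default. Neither function talks to a network; they only
# model how the option should be interpreted.


def verify_tls_buggy(options):
    # Mirrors the bug: missing key -> None -> falsy -> verification off.
    return bool(options.get("rejectUnauthorized"))


def verify_tls_fixed(options):
    # The fix: verification stays on unless the caller explicitly opts out.
    return options.get("rejectUnauthorized", True) is not False
```

The general lesson is that security toggles need fail-safe defaults: an unset option must mean "secure", never "off".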
104,747 | 9,007,711,853 | IssuesEvent | 2019-02-05 00:07:18 | Automattic/themes | https://api.github.com/repos/Automattic/themes | closed | Syntax: Custom background image does not render in Chrome on (Samsung) Android devices | [Pri] Low bug needs testing support wontfix | ## Steps to replicate
The user reported that their custom background image (set in **Customizer** > **Colors & Backgrounds**) does not display when they load their site on their Android devices, specifically their Galaxy S6 and Galaxy Tab A devices.
I saw from the site's source that the background image loads from here, around line 130:
```html
<style type="text/css" id="custom-background-css">
body.custom-background { background-image: url("https://redacted.files.wordpress.com/2018/09/redacted-image-name.png"); background-position: left top; background-size: auto; background-repeat: repeat; background-attachment: scroll; }
</style>
```
I didn't see any CSS rules that should block the background image from loading on mobile devices (I ran keyword searches to check for rules).
I loaded the user's site in Chrome on my Samsung Galaxy A7, running Android 8.0.0, and was able to replicate the user's issue:

By contrast, when I loaded the user's site in Firefox on the same device, the background image rendered:

## Result
The custom background image _didn't_ load when I loaded the site in Chrome for Android, on my Samsung Galaxy A7 device, running Android 8.0.0. It _did_, however, load when I used Firefox.
## Expected
The custom background image should load consistently in Chrome and Firefox browsers.
* User report: 1386233-zen
* User's site: https://speakwritetruth.wordpress.com/
| 1.0 | test | 1
126,415 | 4,995,173,348 | IssuesEvent | 2016-12-09 09:11:11 | Jumpscale/jscockpit | https://api.github.com/repos/Jumpscale/jscockpit | opened | Cockpit in private network | priority_major type_feature | It is currently not possible to deploy a cockpit in a non public environment due to the caddy / letsencrypt setup.
This is not acceptable.
Customers will not always want to host their cockpits on public reachable networks.
Also for our own validation, we need to be able to deploy cockpits on internal networks. | 1.0 | non_test | 0
339,872 | 30,481,920,357 | IssuesEvent | 2023-07-17 21:06:14 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | Engine has very limited testing | a: tests team engine P3 team-engine | We should test more things on the engine bots, such as (in order of how easy they would be to add, easiest first):
- [x] all possible combinations of builds, including e.g. `android_profile_unopt` which just broke without the bots noticing (https://github.com/dart-lang/sdk/issues/27862)
- [x] we should verify that the host test binary runs without crashing
- [x] test some basic framework tests (can't test anything too complicated, because otherwise we would not be able to change APIs since it would break the engine tests where they got out of sync)
- [x] Skia pixel tests for parts of the Canvas API | 1.0 | test | 1
202,019 | 15,251,786,413 | IssuesEvent | 2021-02-20 00:26:25 | backend-br/vagas | https://api.github.com/repos/backend-br/vagas | closed | [REMOTE] – Senior Software Engineer @Kenoby (Position for Women) | AWS CLT JavaScript Stale Testes automatizados | For every 5 IT professionals, 1 is a woman, and only 12% of leadership positions in the sector are held by women, according to the CIO 2019 survey conducted by KPMG together with Harvey Nash.
Kenoby encourages the entry of women into the technology market and is a company that has been revolutionizing the recruitment market. It was chosen by Endeavor as one of the 16 best SaaS scale-ups and recently received a 20 million investment from the Astela fund, making it one of the fastest-growing startups in Brazil.
We are looking for women who have purpose, are proactive and hands-on, and want to learn and grow exponentially.
This opportunity requires knowledge of JavaScript, AWS, automated testing, continuous integration, among others.
CLT employment contract
Like the opportunity and up for the challenge? Sign up at the link below:
https://byintera.in/1s6
| 1.0 | test | 1
106,449 | 9,134,785,527 | IssuesEvent | 2019-02-26 01:19:25 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | Backend should not allow us to create MC app with multiple of same projectid | area/api kind/bug-qa status/reopened status/resolved status/to-test status/working team/ca version/2.0 | API for #18241
Backend should not allow us to create a MC app with multiple targets of same projectid | 1.0 | test | 1