Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
29,909 | 5,705,517,578 | IssuesEvent | 2017-04-18 08:45:49 | orbisgis/orbisgis | https://api.github.com/repos/orbisgis/orbisgis | closed | Wiki for the WPS process creation | Documentation Orbistoolbox WPS | Explain in the wiki how to build its own WPS script with the annotation, example ...
| 1.0 | Wiki for the WPS process creation - Explain in the wiki how to build its own WPS script with the annotation, example ...
| non_test | wiki for the wps process creation explain in the wiki how to build its own wps script with the annotation example | 0 |
24,344 | 11,031,190,574 | IssuesEvent | 2019-12-06 17:12:33 | brunobuzzi/BpmFlow | https://api.github.com/repos/brunobuzzi/BpmFlow | closed | Improve script analyzer for BpmScripts | enhancement security | The problem now is that a script can send message to any class then at runtime the execution of a script will be only limited by the security policy at GemStone level.
There should be a way to define valid and invalid classes to have more control of scripts execution.
Maybe also valid and invalid selectors. | True | Improve script analyzer for BpmScripts - The problem now is that a script can send message to any class then at runtime the execution of a script will be only limited by the security policy at GemStone level.
There should be a way to define valid and invalid classes to have more control of scripts execution.
Maybe also valid and invalid selectors. | non_test | improve script analyzer for bpmscripts the problem now is that a script can send message to any class then at runtime the execution of a script will be only limited by the security policy at gemstone level there should be a way to define valid and invalid classes to have more control of scripts execution maybe also valid and invalid selectors | 0 |
326,355 | 27,986,153,379 | IssuesEvent | 2023-03-26 18:13:17 | Sars9588/mywebclass-simulation | https://api.github.com/repos/Sars9588/mywebclass-simulation | closed | Testing clicking Privacy Policy Button in navigation bar | Test | Name of Test Developer: Meet
Test Name: clicking Privacy Policy Button in the navigation bar on the home page
Test Type: Button Click
| 1.0 | Testing clicking Privacy Policy Button in navigation bar - Name of Test Developer: Meet
Test Name: clicking Privacy Policy Button in the navigation bar on the home page
Test Type: Button Click
| test | testing clicking privacy policy button in navigation bar name of test developer meet test name clicking privacy policy button in the navigation bar on the home page test type button click | 1 |
197,067 | 14,907,005,596 | IssuesEvent | 2021-01-22 02:02:22 | gladiatorsprogramming1591/InfiniteRecharge2020-PBot | https://api.github.com/repos/gladiatorsprogramming1591/InfiniteRecharge2020-PBot | closed | Make sure migration retained all robot functionality. | testing | Test the code from e6c5af7af73cc670943f71f7a5f838027e767a31 to make sure it is the same as 15419c0fc4bee5ee55ce0a58bc204429e94149d4 (last commit on 2020 WPILib). | 1.0 | Make sure migration retained all robot functionality. - Test the code from e6c5af7af73cc670943f71f7a5f838027e767a31 to make sure it is the same as 15419c0fc4bee5ee55ce0a58bc204429e94149d4 (last commit on 2020 WPILib). | test | make sure migration retained all robot functionality test the code from to make sure it is the same as last commit on wpilib | 1 |
113,089 | 11,787,522,438 | IssuesEvent | 2020-03-17 14:10:28 | nest/nest-simulator | https://api.github.com/repos/nest/nest-simulator | opened | Create template for documentation issues | C: Documentation I: No breaking change P: Pending S: Low T: Enhancement | **Is your feature request related to a problem? Please describe.**
On https://github.com/nest/nest-simulator/issues/new/choose we currently have templates for bugs, feature requests and vulnerabilities, but not for errors or weaknesses related to documentation. This is problematic, since the other templates do not fit documentation very well.
**Describe the solution you'd like**
Create a template tailored to documentation-related issues.
| 1.0 | Create template for documentation issues - **Is your feature request related to a problem? Please describe.**
On https://github.com/nest/nest-simulator/issues/new/choose we currently have templates for bugs, feature requests and vulnerabilities, but not for errors or weaknesses related to documentation. This is problematic, since the other templates do not fit documentation very well.
**Describe the solution you'd like**
Create a template tailored to documentation-related issues.
| non_test | create template for documentation issues is your feature request related to a problem please describe on we currently have templates for bugs feature requests and vulnerabilities but not for errors or weaknesses related to documentation this is problematic since the other templates do not fit documentation very well describe the solution you d like create a template tailored to documentation related issues | 0 |
11,324 | 16,984,358,566 | IssuesEvent | 2021-06-30 12:51:14 | streamlink/streamlink | https://api.github.com/repos/streamlink/streamlink | closed | Streams kept buffering and ending on VLC for no reason | does not meet requirements more info required question | <!--
Thanks for reporting a bug!
USE THE TEMPLATE. Otherwise your bug report may be rejected.
First, see the contribution guidelines:
https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink
Bugs are the result of broken functionality within Streamlink's main code base. Use the plugin issue template if your report is about a broken plugin.
Also check the list of open and closed bug reports:
https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22
Please see the text preview to avoid unnecessary formatting errors.
-->
## Bug Report
<!-- Replace the space character between the square brackets with an x in order to check the boxes -->
- [ ] This is a bug report and I have read the contribution guidelines.
- [ ] I am using the latest development version from the master branch.
### Description
Most of the streams I'm trying to watch through VLC kept lagging and closing for no reason, and this isn't just with the plugin, but also in the Twitch GUI.
### Expected / Actual behavior
I expect the streams to play normally in VLC. But they've been lagging a lot and then they stopped playing and ended, and it's very annoying to no end.
### Reproduction steps / Explicit stream URLs to test
<!-- How can we reproduce this? Please note the exact steps below using the list format supplied. If you need more steps please add them. -->
1. ...
2. ...
3. ...
### Log output
<!--
DEBUG LOG OUTPUT IS REQUIRED for a bug report!
INCLUDE THE ENTIRE COMMAND LINE and make sure to **remove usernames and passwords**
Use the `--loglevel debug` parameter and avoid using parameters which suppress log output.
Debug log includes important details about your platform. Don't remove it.
https://streamlink.github.io/latest/cli.html#cmdoption-loglevel
You can copy the output to https://gist.github.com/ or paste it below.
Don't post screenshots of the log output and instead copy the text from your terminal application.
-->
```
REPLACE THIS TEXT WITH THE LOG OUTPUT
All log output should go between two blocks of triple backticks (grave accents) for proper formatting.
```
### Additional comments, etc.
[Love Streamlink? Please consider supporting our collective. Thanks!](https://opencollective.com/streamlink/donate)
| 1.0 | Streams kept buffering and ending on VLC for no reason - <!--
Thanks for reporting a bug!
USE THE TEMPLATE. Otherwise your bug report may be rejected.
First, see the contribution guidelines:
https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink
Bugs are the result of broken functionality within Streamlink's main code base. Use the plugin issue template if your report is about a broken plugin.
Also check the list of open and closed bug reports:
https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22
Please see the text preview to avoid unnecessary formatting errors.
-->
## Bug Report
<!-- Replace the space character between the square brackets with an x in order to check the boxes -->
- [ ] This is a bug report and I have read the contribution guidelines.
- [ ] I am using the latest development version from the master branch.
### Description
Most of the streams I'm trying to watch through VLC kept lagging and closing for no reason, and this isn't just with the plugin, but also in the Twitch GUI.
### Expected / Actual behavior
I expect the streams to play normally in VLC. But they've been lagging a lot and then they stopped playing and ended, and it's very annoying to no end.
### Reproduction steps / Explicit stream URLs to test
<!-- How can we reproduce this? Please note the exact steps below using the list format supplied. If you need more steps please add them. -->
1. ...
2. ...
3. ...
### Log output
<!--
DEBUG LOG OUTPUT IS REQUIRED for a bug report!
INCLUDE THE ENTIRE COMMAND LINE and make sure to **remove usernames and passwords**
Use the `--loglevel debug` parameter and avoid using parameters which suppress log output.
Debug log includes important details about your platform. Don't remove it.
https://streamlink.github.io/latest/cli.html#cmdoption-loglevel
You can copy the output to https://gist.github.com/ or paste it below.
Don't post screenshots of the log output and instead copy the text from your terminal application.
-->
```
REPLACE THIS TEXT WITH THE LOG OUTPUT
All log output should go between two blocks of triple backticks (grave accents) for proper formatting.
```
### Additional comments, etc.
[Love Streamlink? Please consider supporting our collective. Thanks!](https://opencollective.com/streamlink/donate)
| non_test | streams kept buffering and ending on vlc for no reason thanks for reporting a bug use the template otherwise your bug report may be rejected first see the contribution guidelines bugs are the result of broken functionality within streamlink s main code base use the plugin issue template if your report is about a broken plugin also check the list of open and closed bug reports please see the text preview to avoid unnecessary formatting errors bug report this is a bug report and i have read the contribution guidelines i am using the latest development version from the master branch description most of the streams i m trying to watch through vlc kept lagging and closing for no reason and this isn t just with the plugin but also in the twitch gui expected actual behavior i expect the streams to play normally in vlc but they ve been lagging a lot and then they stopped playing and ended and it s very annoying to no end reproduction steps explicit stream urls to test log output debug log output is required for a bug report include the entire command line and make sure to remove usernames and passwords use the loglevel debug parameter and avoid using parameters which suppress log output debug log includes important details about your platform don t remove it you can copy the output to or paste it below don t post screenshots of the log output and instead copy the text from your terminal application replace this text with the log output all log output should go between two blocks of triple backticks grave accents for proper formatting additional comments etc | 0 |
28,334 | 4,387,805,826 | IssuesEvent | 2016-08-08 16:53:08 | red/red | https://api.github.com/repos/red/red | closed | "Internal error: stack overflow" when parsing big string data | status.built status.tested type.bug | Using xml parser from https://github.com/Zamlox/red-tools/blob/master/xml/xml.red on file http://filebin.ca/2qrYaePtAywP/test.xml will give following error:
```
*** Internal Error: stack overflow
*** Where: copy
```
Sequence to reproduce:
```
content: read %test.xml
probe xml/to-block content
``` | 1.0 | "Internal error: stack overflow" when parsing big string data - Using xml parser from https://github.com/Zamlox/red-tools/blob/master/xml/xml.red on file http://filebin.ca/2qrYaePtAywP/test.xml will give following error:
```
*** Internal Error: stack overflow
*** Where: copy
```
Sequence to reproduce:
```
content: read %test.xml
probe xml/to-block content
``` | test | internal error stack overflow when parsing big string data using xml parser from on file will give following error internal error stack overflow where copy sequence to reproduce content read test xml probe xml to block content | 1 |
139,811 | 11,286,359,089 | IssuesEvent | 2020-01-16 00:22:00 | eventespresso/event-espresso-core | https://api.github.com/repos/eventespresso/event-espresso-core | closed | Write Tests for Application Hooks | EDTR Prototype category:unit-tests | - [ ] application/hooks/useIfMounted
- [ ] application/hooks/useTimeZoneTime | 1.0 | Write Tests for Application Hooks - - [ ] application/hooks/useIfMounted
- [ ] application/hooks/useTimeZoneTime | test | write tests for application hooks application hooks useifmounted application hooks usetimezonetime | 1 |
23,441 | 7,329,998,637 | IssuesEvent | 2018-03-05 08:17:17 | moment/moment | https://api.github.com/repos/moment/moment | closed | Missing files when installing with JSPM 0.17.0-beta.14 | Build/Release | When installing Moment.js via JSPM v0.17.0-beta.14 by `jspm install moment` I miss the following files in the downloaded package:
```
moment.d.ts
package.json
```
and the `min` folder's empty.
I suggest to add these files to `package.json`:
```
"jspm": {
"buildConfig": {
"uglify": true
},
"files": [
"package.json",
"moment.js",
"moment.d.ts",
"locale",
"min"
],
"map": {
"moment": "./moment"
}
},
```
Thanks!
P.S.: PR?
| 1.0 | Missing files when installing with JSPM 0.17.0-beta.14 - When installing Moment.js via JSPM v0.17.0-beta.14 by `jspm install moment` I miss the following files in the downloaded package:
```
moment.d.ts
package.json
```
and the `min` folder's empty.
I suggest to add these files to `package.json`:
```
"jspm": {
"buildConfig": {
"uglify": true
},
"files": [
"package.json",
"moment.js",
"moment.d.ts",
"locale",
"min"
],
"map": {
"moment": "./moment"
}
},
```
Thanks!
P.S.: PR?
| non_test | missing files when installing with jspm beta when installing moment js via jspm beta by jspm install moment i miss the following files in the downloaded package moment d ts package json and the min folder s empty i suggest to add these files to package json jspm buildconfig uglify true files package json moment js moment d ts locale min map moment moment thanks p s pr | 0 |
224,053 | 17,657,824,304 | IssuesEvent | 2021-08-21 00:05:31 | nasa/cFE | https://api.github.com/repos/nasa/cFE | closed | SB coverage test - need to verify MsgSendErrorCounter increments when msg too big | unit-test | **Is your feature request related to a problem? Please describe.**
Missing verification of MsgSendErrorCounter increment on message too big (requirement)
**Describe the solution you'd like**
Add verification
**Describe alternatives you've considered**
None
**Additional context**
None
**Requester Info**
Jacob Hageman - NASA/GSFC
| 1.0 | SB coverage test - need to verify MsgSendErrorCounter increments when msg too big - **Is your feature request related to a problem? Please describe.**
Missing verification of MsgSendErrorCounter increment on message too big (requirement)
**Describe the solution you'd like**
Add verification
**Describe alternatives you've considered**
None
**Additional context**
None
**Requester Info**
Jacob Hageman - NASA/GSFC
| test | sb coverage test need to verify msgsenderrorcounter increments when msg too big is your feature request related to a problem please describe missing verification of msgsenderrorcounter increment on message too big requirement describe the solution you d like add verification describe alternatives you ve considered none additional context none requester info jacob hageman nasa gsfc | 1 |
2,653 | 5,430,470,388 | IssuesEvent | 2017-03-03 21:20:29 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Ubuntu 16.04 outerloop debug - System.Diagnostics.Tests.ProcessWaitingTests.WaitChain failed with "Xunit.Sdk.EqualException" | area-System.Diagnostics.Process test bug test-run-core | Failed test: System.Diagnostics.Tests.ProcessWaitingTests.WaitChain
Detail: https://ci.dot.net/job/dotnet_corefx/job/master/job/outerloop_ubuntu16.04_debug/92/consoleText
Message:
~~~
System.Diagnostics.Tests.ProcessWaitingTests.WaitChain [FAIL]
Assert.Equal() Failure
Expected: 42
Actual: 145
~~~
Stack Trace:
~~~
/mnt/j/workspace/dotnet_corefx/master/outerloop_ubuntu16.04_debug/src/System.Diagnostics.Process/tests/ProcessWaitingTests.cs(190,0): at System.Diagnostics.Tests.ProcessWaitingTests.WaitChain()
~~~
Configuration:
OuterLoop_Ubuntu16.04_debug (build#92) | 1.0 | Ubuntu 16.04 outerloop debug - System.Diagnostics.Tests.ProcessWaitingTests.WaitChain failed with "Xunit.Sdk.EqualException" - Failed test: System.Diagnostics.Tests.ProcessWaitingTests.WaitChain
Detail: https://ci.dot.net/job/dotnet_corefx/job/master/job/outerloop_ubuntu16.04_debug/92/consoleText
Message:
~~~
System.Diagnostics.Tests.ProcessWaitingTests.WaitChain [FAIL]
Assert.Equal() Failure
Expected: 42
Actual: 145
~~~
Stack Trace:
~~~
/mnt/j/workspace/dotnet_corefx/master/outerloop_ubuntu16.04_debug/src/System.Diagnostics.Process/tests/ProcessWaitingTests.cs(190,0): at System.Diagnostics.Tests.ProcessWaitingTests.WaitChain()
~~~
Configuration:
OuterLoop_Ubuntu16.04_debug (build#92) | non_test | ubuntu outerloop debug system diagnostics tests processwaitingtests waitchain failed with xunit sdk equalexception failed test system diagnostics tests processwaitingtests waitchain detail message system diagnostics tests processwaitingtests waitchain assert equal failure expected actual stack trace mnt j workspace dotnet corefx master outerloop debug src system diagnostics process tests processwaitingtests cs at system diagnostics tests processwaitingtests waitchain configuration outerloop debug build | 0 |
690,878 | 23,675,767,170 | IssuesEvent | 2022-08-28 03:36:26 | angelside/zebra-password-changer-cli-py | https://api.github.com/repos/angelside/zebra-password-changer-cli-py | closed | 🔖 Invalid password message enhancement | priority: low status: pending type: enhancement | From
`"Please enter a 4 digit number!"`
to
`"Password is invalid! Please enter a 4 digit number."` | 1.0 | 🔖 Invalid password message enhancement - From
`"Please enter a 4 digit number!"`
to
`"Password is invalid! Please enter a 4 digit number."` | non_test | 🔖 invalid password message enhancement from please enter a digit number to password is invalid please enter a digit number | 0 |
75,764 | 15,495,568,909 | IssuesEvent | 2021-03-11 01:05:38 | ignatandrei/SimpleBookRental | https://api.github.com/repos/ignatandrei/SimpleBookRental | opened | CVE-2020-7693 (Medium) detected in sockjs-0.3.19.tgz | security vulnerability | ## CVE-2020-7693 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sockjs-0.3.19.tgz</b></p></summary>
<p>SockJS-node is a server counterpart of SockJS-client a JavaScript library that provides a WebSocket-like object in the browser. SockJS gives you a coherent, cross-browser, Javascript API which creates a low latency, full duplex, cross-domain communication</p>
<p>Library home page: <a href="https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz">https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz</a></p>
<p>Path to dependency file: SimpleBookRental/src/WebDashboard/book-dashboard/package.json</p>
<p>Path to vulnerable library: SimpleBookRental/src/WebDashboardNG/book-dash/node_modules/sockjs/package.json,SimpleBookRental/src/WebDashboardNG/book-dash/node_modules/sockjs/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.3.0.tgz (Root Library)
- webpack-dev-server-3.9.0.tgz
- :x: **sockjs-0.3.19.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Incorrect handling of Upgrade header with the value websocket leads in crashing of containers hosting sockjs apps. This affects the package sockjs before 0.3.20.
<p>Publish Date: 2020-07-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7693>CVE-2020-7693</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sockjs/sockjs-node/pull/265">https://github.com/sockjs/sockjs-node/pull/265</a></p>
<p>Release Date: 2020-07-09</p>
<p>Fix Resolution: sockjs - 0.3.20</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7693 (Medium) detected in sockjs-0.3.19.tgz - ## CVE-2020-7693 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sockjs-0.3.19.tgz</b></p></summary>
<p>SockJS-node is a server counterpart of SockJS-client a JavaScript library that provides a WebSocket-like object in the browser. SockJS gives you a coherent, cross-browser, Javascript API which creates a low latency, full duplex, cross-domain communication</p>
<p>Library home page: <a href="https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz">https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz</a></p>
<p>Path to dependency file: SimpleBookRental/src/WebDashboard/book-dashboard/package.json</p>
<p>Path to vulnerable library: SimpleBookRental/src/WebDashboardNG/book-dash/node_modules/sockjs/package.json,SimpleBookRental/src/WebDashboardNG/book-dash/node_modules/sockjs/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.3.0.tgz (Root Library)
- webpack-dev-server-3.9.0.tgz
- :x: **sockjs-0.3.19.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Incorrect handling of Upgrade header with the value websocket leads in crashing of containers hosting sockjs apps. This affects the package sockjs before 0.3.20.
<p>Publish Date: 2020-07-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7693>CVE-2020-7693</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sockjs/sockjs-node/pull/265">https://github.com/sockjs/sockjs-node/pull/265</a></p>
<p>Release Date: 2020-07-09</p>
<p>Fix Resolution: sockjs - 0.3.20</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in sockjs tgz cve medium severity vulnerability vulnerable library sockjs tgz sockjs node is a server counterpart of sockjs client a javascript library that provides a websocket like object in the browser sockjs gives you a coherent cross browser javascript api which creates a low latency full duplex cross domain communication library home page a href path to dependency file simplebookrental src webdashboard book dashboard package json path to vulnerable library simplebookrental src webdashboardng book dash node modules sockjs package json simplebookrental src webdashboardng book dash node modules sockjs package json dependency hierarchy react scripts tgz root library webpack dev server tgz x sockjs tgz vulnerable library vulnerability details incorrect handling of upgrade header with the value websocket leads in crashing of containers hosting sockjs apps this affects the package sockjs before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution sockjs step up your open source security game with whitesource | 0 |
87,580 | 17,332,103,223 | IssuesEvent | 2021-07-28 04:50:39 | BeccaLyria/discord-bot | https://api.github.com/repos/BeccaLyria/discord-bot | closed | [OTHER] - D-API V8 | 💻 aspect: code 🚧 status: blocked | # Other Issue
## Describe the issue
`discord.js` uses `v6` of the Discord API. This is fine, for the time being. However, at some point next year `v6` will be deprecated and the bot will need to transition to `v8`. (There's a story behind v7 being skipped...)
For now, **NO ACTION IS NEEDED** on this issue. I'm creating this as a reminder to keep an eye out for the next `major` release of `discord.js`, as this will be an upgrade to `v8`.
An important thing to keep an eye on will be the mandatory `intents`, which need to be declared in the connection string.
## Additional information
<!--Add any other context about the problem here.--> | 1.0 | [OTHER] - D-API V8 - # Other Issue
## Describe the issue
`discord.js` uses `v6` of the Discord API. This is fine, for the time being. However, at some point next year `v6` will be deprecated and the bot will need to transition to `v8`. (There's a story behind v7 being skipped...)
For now, **NO ACTION IS NEEDED** on this issue. I'm creating this as a reminder to keep an eye out for the next `major` release of `discord.js`, as this will be an upgrade to `v8`.
An important thing to keep an eye on will be the mandatory `intents`, which need to be declared in the connection string.
## Additional information
<!--Add any other context about the problem here.--> | non_test | d api other issue describe the issue discord js uses of the discord api this is fine for the time being however at some point next year will be deprecated and the bot will need to transition to there s a story behind being skipped for now no action is needed on this issue i m creating this as a reminder to keep an eye out for the next major release of discord js as this will be an upgrade to an important thing to keep an eye on will be the mandatory intents which need to be declared in the connection string additional information | 0 |
44,327 | 5,796,453,793 | IssuesEvent | 2017-05-02 19:30:28 | phetsims/states-of-matter | https://api.github.com/repos/phetsims/states-of-matter | closed | Items to Include in Teacher Tips | design:teaching-resources priority:2-high | From the Java Tips (some rewording necessary)
- [x] For solid water, the sim simplifies the model emphasizing that there is space between the
molecules. A resource for the most common visual for ice structure is
http://www1.lsbu.ac.uk/water/hexagonal_ice.html
- [x] The phase diagrams are shown qualitatively in the sim, to help students get a general
understanding of phase diagrams. Quantitative phase diagrams are shown for water, neon,
argon and oxygen on page 2 of these Tips. (Elaborate on this to address https://github.com/phetsims/states-of-matter/issues/168#issuecomment-264053333.)
- [x] The sim is not designed to be used as a comprehensive tool for learning about phase
diagrams, instead the focus is on phases of matter. The small number of particles shown and
the simplicity of the underlying models makes it difficult to map accurately the exact phase to
the correct regions of the phase diagram. However, we felt there would be some benefit to
students being exposed to a simplified phase diagram. In the sim, the diagram marker remains
on the coexistence line between liquid/gas or solid/gas (and is extrapolated into the critical
region). If this approximation does not fit your specific learning goals, and you are concerned
this might cause confusion, you can encourage your students to keep the phase diagram
closed.
Additional model simplifications
- [x] Pressure will be zero at 0K because zero motion = zero momentum transfers between the particles and the container walls = zero pressure #154
- [x] It is possible to reach absolute zero, but the rate of temperature change slows down quite substantially as 0K is approached. This is intentional, since it is very difficult to make a system of molecules this cold. True absolute zero is impossible to achieve, so this should be thought of as rounding down from anything below 0.5K.
- [x] A note that we do not include Plasma on purpose (even though it is considered a state of matter).
- [x] Some amount of gravity is simulated, but it is minimal - just enough to keep the solid forms of the substances on the floor of the container. For this reason, substances in their liquid form don't always spread out along the bottom of the container, as, say, water does in a glass.
- [x] The model works best when there are at least (roughly) 15 particles in the container. It is possible to create situations where there are only a few particles in the container and, in these situations, users may observe some odd behaviors. One example is occasional visible changes to the velocity of individual particles. If students observe such things, they should be told that this is a due to the limitations of the model, and doesn't represent "real world" phenomena.
- [x] Equilibrium states -- https://github.com/phetsims/states-of-matter/issues/168#issuecomment-264053333
- [ ] Limits of quantitative comparisons -- https://github.com/phetsims/states-of-matter/issues/168#issuecomment-264053333
- [x] Latent heat is not really being addressed (but could discuss the breaking of the crystal lattice order or such) -- https://github.com/phetsims/states-of-matter/issues/168#issuecomment-264054515
- [x] Temperature is allowed to change when particles are injected, based on the velocity of the particles -- #182
Other things to note
- [x] Bicycle pump indicator bars #167
- [x] A bit of explanation of "Return Lid" behavior, might be useful since it might not be completely obvious. basically that it captures the particles in the container at the time and the pump is "refilled" to a level that allows the max number of particles (I believe)
- [x] For younger students, it may be important to explain that the hand and the container are not at all to scale, since in the real world they too are made of atoms and molecules. | 1.0 | Items to Include in Teacher Tips - From the Java Tips (some rewording necessary)
- [x] For solid water, the sim simplifies the model emphasizing that there is space between the
molecules. A resource for the most common visual for ice structure is
http://www1.lsbu.ac.uk/water/hexagonal_ice.html
- [x] The phase diagrams are shown qualitatively in the sim, to help students get a general
understanding of phase diagrams. Quantitative phase diagrams are shown for water, neon,
argon and oxygen on page 2 of these Tips. (Elaborate on this to address https://github.com/phetsims/states-of-matter/issues/168#issuecomment-264053333.)
- [x] The sim is not designed to be used as a comprehensive tool for learning about phase
diagrams, instead the focus is on phases of matter. The small number of particles shown and
the simplicity of the underlying models makes it difficult to map accurately the exact phase to
the correct regions of the phase diagram. However, we felt there would be some benefit to
students being exposed to a simplified phase diagram. In the sim, the diagram marker remains
on the coexistence line between liquid/gas or solid/gas (and is extrapolated into the critical
region). If this approximation does not fit your specific learning goals, and you are concerned
this might cause confusion, you can encourage your students to keep the phase diagram
closed.
Additional model simplifications
- [x] Pressure will be zero at 0K because zero motion = zero momentum transfers between the particles and the container walls = zero pressure #154
- [x] It is possible to reach absolute zero, but the rate of temperature change slows down quite substantially as 0K is approached. This is intentional, since it is very difficult to make a system of molecules this cold. True absolute zero is impossible to achieve, so this should be thought of as rounding down from anything below 0.5K.
- [x] A note that we do not include Plasma on purpose (even though it is considered a state of matter).
- [x] Some amount of gravity is simulated, but it is minimal - just enough to keep the solid forms of the substances on the floor of the container. For this reason, substances in their liquid form don't always spread out along the bottom of the container, as, say, water does in a glass.
- [x] The model works best when there are at least (roughly) 15 particles in the container. It is possible to create situations where there are only a few particles in the container and, in these situations, users may observe some odd behaviors. One example is occasional visible changes to the velocity of individual particles. If students observe such things, they should be told that this is due to the limitations of the model, and doesn't represent "real world" phenomena.
- [x] Equilibrium states -- https://github.com/phetsims/states-of-matter/issues/168#issuecomment-264053333
- [ ] Limits of quantitative comparisons -- https://github.com/phetsims/states-of-matter/issues/168#issuecomment-264053333
- [x] Latent heat is not really being addressed (but could discuss the breaking of the crystal lattice order or such) -- https://github.com/phetsims/states-of-matter/issues/168#issuecomment-264054515
- [x] Temperature is allowed to change when particles are injected, based on the velocity of the particles -- #182
Other things to note
- [x] Bicycle pump indicator bars #167
- [x] A bit of explanation of "Return Lid" behavior might be useful, since it might not be completely obvious: basically, it captures the particles in the container at the time, and the pump is "refilled" to a level that allows the max number of particles (I believe)
- [x] For younger students, it may be important to explain that the hand and the container are not at all to scale, since in the real world they too are made of atoms and molecules. | non_test | items to include in teacher tips from the java tips some rewording necessary for solid water the sim simplifies the model emphasizing that there is space between the molecules a resource for the most common visual for ice structure is the phase diagrams are shown qualitatively in the sim to help students get a general understanding of phase diagrams quantitative phase diagrams are shown for water neon argon and oxygen on page of these tips elaborate on this to address the sim is not designed to be used as a comprehensive tool for learning about phase diagrams instead the focus is on phases of matter the small number of particles shown and the simplicity of the underlying models makes it difficult to map accurately the exact phase to the correct regions of the phase diagram however we felt there would be some benefit to students being exposed to a simplified phase diagram in the sim the diagram marker remains on the coexistence line between liquid gas or solid gas and is extrapolated into the critical region if this approximation does not fit your specific learning goals and you are concerned this might cause confusion you can encourage your students to keep the phase diagram closed additional model simplifications pressure will be zero at because zero motion zero momentum transfers between the particles and the container walls zero pressure it is possible to reach absolute zero but the rate of temperature change slows down quite substantially as is approached this is intentional since it is very difficult to make a system of molecules this cold true absolute zero is impossible to achieve so this should be thought of as rounding down from anything below a note that we do not include plasma on purpose even though it is considered a state of matter some amount of gravity is 
simulated but it is minimal just enough to keep the solid forms of the substances on the floor of the container for this reason substances in their liquid form don t always spread out along the bottom of the container as say water does in a glass the model works best when there are at least roughly particles in the container it is possible to create situations where there are only a few particles in the container and in these situations users may observe some odd behaviors one example is occasional visible changes to the velocity of individual particles if students observe such things they should be told that this is a due to the limitations of the model and doesn t represent real world phenomena equilibrium states limits of quantitative comparisons latent heat is not really being addressed but could discuss the breaking of the crystal lattice order or such temperature is allowed to change when particles are injected based on the velocity of the particles other things to note bicycle pump indicator bars a bit of explanation of return lid behavior might be useful since it might not be completely obvious basically that it captures the particles in the container at the time and the pump is refilled to a level that allows the max number of particles i believe for younger students it may be important to explain that the hand and the container are not at all to scale since in the real world they too are made of atoms and molecules | 0 |
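The zero-pressure claim in the record above (zero motion means zero momentum transfer to the container walls) can be illustrated with a toy one-dimensional hard-wall gas. This is only a sketch in reduced units (k_B = m = 1), not the sim's actual Lennard-Jones model:

```python
import random

def wall_pressure(temperature, n_particles=200, steps=2000, box=1.0, dt=1e-3, seed=0):
    """Toy 1-D hard-wall gas: average momentum delivered to the walls
    per unit time. Thermal speeds scale like sqrt(T), so zero
    temperature means zero motion and therefore zero pressure."""
    rng = random.Random(seed)
    sigma = temperature ** 0.5                   # thermal speed scale
    xs = [rng.random() * box for _ in range(n_particles)]
    vs = [rng.gauss(0.0, sigma) for _ in range(n_particles)]
    impulse = 0.0
    for _ in range(steps):
        for i in range(n_particles):
            xs[i] += vs[i] * dt
            if xs[i] < 0.0 or xs[i] > box:       # elastic bounce off a wall
                xs[i] = min(max(xs[i], 0.0), box)
                impulse += 2.0 * abs(vs[i])      # |delta p| given to the wall
                vs[i] = -vs[i]
    return impulse / (steps * dt)

print(wall_pressure(0.0))  # no motion at absolute zero -> 0.0
```

With the same seed, the sampled speeds scale like sqrt(T), so the measured wall pressure vanishes exactly at T = 0 and grows with temperature.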
2,844 | 8,392,143,210 | IssuesEvent | 2018-10-09 16:46:19 | qTox/qTox | https://api.github.com/repos/qTox/qTox | opened | Store per profile settings in the database | I-architecture | Currently we have a per profile *.ini file as well as a sqlite database. To reduce complexity we should move everything stored in the *.ini file to the database.
The switch has the following advantages:
- fewer files that the user has to back up
- probably easier settings upgrades
Disadvantages:
- some work
- Need to find a good db schema that allows future upgrades and allows us to change datatypes later
@Diadlo @sphaerophoria what do you think? | 1.0 | Store per profile settings in the database - Currently we have a per profile *.ini file as well as a sqlite database. To reduce complexity we should move everything stored in the *.ini file to the database.
The switch has the following advantages:
- fewer files that the user has to back up
- probably easier settings upgrades
Disadvantages:
- some work
- Need to find a good db schema that allows future upgrades and allows us to change datatypes later
@Diadlo @sphaerophoria what do you think? | non_test | store per profile settings in the database currently we have a per profile ini file as well as a sqlite database to reduce complexity we should move everything stored in the ini file to the database the switch has the following advantages less files that the user has to backup probably easier settings upgrades disadvantages some work need to find a good db schema that allows future upgrades and allows us to change datatypes later diadlo sphaerophoria what do you think | 0 |
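One possible shape for the per-profile store discussed above is a single key/value table plus SQLite's `user_version` pragma as the hook for future schema upgrades. This is only an illustration using Python's stdlib `sqlite3`; the table and function names are hypothetical, not qTox's actual schema:

```python
import sqlite3

SCHEMA_VERSION = 1

def open_settings(path=":memory:"):
    """Open (or create) a per-profile settings store.

    A plain key/value table keeps datatypes flexible (values are stored
    as text and parsed by the caller), and PRAGMA user_version records
    the schema revision for stepwise migrations in later releases."""
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT)")
    (version,) = db.execute("PRAGMA user_version").fetchone()
    if version < SCHEMA_VERSION:
        # future upgrades would apply migrations here, one version at a time
        db.execute("PRAGMA user_version = %d" % SCHEMA_VERSION)
        db.commit()
    return db

def set_setting(db, key, value):
    db.execute("INSERT OR REPLACE INTO settings (key, value) VALUES (?, ?)",
               (key, str(value)))
    db.commit()

def get_setting(db, key, default=None):
    row = db.execute("SELECT value FROM settings WHERE key = ?", (key,)).fetchone()
    return row[0] if row else default
```

With a store like this, a profile's old *.ini keys could be imported once on first open and the file dropped afterwards.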
723,833 | 24,908,908,228 | IssuesEvent | 2022-10-29 16:08:23 | AY2223S1-CS2103T-W08-1/tp | https://api.github.com/repos/AY2223S1-CS2103T-W08-1/tp | closed | [PE-D][Tester F] User guide has a feature in the top overview that is unimplemented | low priority | - Select bill unimplemented

<!--session: 1666945444121-db087f50-d06e-4bb8-900f-4a9e5dacf5ba--><!--Version: Web v3.4.4-->
-------------
Labels: `type.DocumentationBug` `severity.Medium`
original: yunruu/ped#7 | 1.0 | [PE-D][Tester F] User guide has a feature in the top overview that is unimplemented - - Select bill unimplemented

<!--session: 1666945444121-db087f50-d06e-4bb8-900f-4a9e5dacf5ba--><!--Version: Web v3.4.4-->
-------------
Labels: `type.DocumentationBug` `severity.Medium`
original: yunruu/ped#7 | non_test | user guide has a feature in the top overview that is unimplemented select bill unimplemented labels type documentationbug severity medium original yunruu ped | 0 |
745,002 | 25,965,250,032 | IssuesEvent | 2022-12-19 06:04:25 | steedos/steedos-platform | https://api.github.com/repos/steedos/steedos-platform | opened | template project: package data initialization errors when initializing a new environment / switching to a new database | priority: High | - [ ] Initializing a new environment

- [ ] Switching to a new database
import seal error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429612, 19). Collection minimum is Timestamp(1671429613, 92)
import finance_receive error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429618, 4). Collection minimum is Timestamp(1671429618, 49)
import measurement_unit error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429618, 56). Collection minimum is Timestamp(1671429618, 74)
import okr_objective error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429619, 17). Collection minimum is Timestamp(1671429619, 53)
import contract_types error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429619, 117). Collection minimum is Timestamp(1671429621, 15)
import contract_types error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429621, 17). Collection minimum is Timestamp(1671429621, 22)
import meeting error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429621, 54). Collection minimum is Timestamp(1671429622, 9)
import project_log error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429622, 28). Collection minimum is Timestamp(1671429622, 70)
import cost_business_reimburse_detail error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429623, 2). Collection minimum is Timestamp(1671429623, 15)
import finance_payment error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429623, 40). Collection minimum is Timestamp(1671429623, 65)
import events error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429623, 89). Collection minimum is Timestamp(1671429624, 31)
import cost_itinerary_information error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429624, 38). Collection minimum is Timestamp(1671429624, 60)
import tax_rates error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429624, 76). Collection minimum is Timestamp(1671429624, 101)
import currency error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429625, 60). Collection minimum is Timestamp(1671429626, 13)
import currency error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429626, 15). Collection minimum is Timestamp(1671429626, 16)
import cost_schedule_reimburse_detail error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429626, 56). Collection minimum is Timestamp(1671429626, 71)
import project_task error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429626, 96). Collection minimum is Timestamp(1671429627, 92)
This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). The promise rejected with the reason:
Error: [not-authorized]
at MethodInvocation.apply (meteor://💻app/packages/steedos_base/server/methods/object_workflows.coffee:8:19)
at maybeAuditArgumentChecks (meteor://💻app/packages/ddp-server/livedata_server.js:1771:12)
at meteor://💻app/packages/ddp-server/livedata_server.js:1689:15
at Meteor.EnvironmentVariable.withValue (packages/meteor.js:1234:12)
at meteor://💻app/packages/ddp-server/livedata_server.js:1687:36
at new Promise (<anonymous>)
at Server.applyAsync (meteor://💻app/packages/ddp-server/livedata_server.js:1686:12)
at Server.apply (meteor://💻app/packages/ddp-server/livedata_server.js:1625:26)
at Server.call (meteor://💻app/packages/ddp-server/livedata_server.js:1607:17)
at /workspace/steedos-project-template/node_modules/@steedos/service-ui/main/default/routes/bootstrap.router.js:159:50
at Generator.next (<anonymous>)
at fulfilled (/workspace/steedos-project-template/node_modules/tslib/tslib.js:115:62)
at /workspace/steedos-project-template/node_modules/meteor-promise/fiber_pool.js:43:39
=> awaited here:
at Promise.await (/workspace/steedos-project-template/node_modules/meteor-promise/promise_server.js:60:12)
at Server.apply (meteor://💻app/packages/ddp-server/livedata_server.js:1638:22)
at Server.call (meteor://💻app/packages/ddp-server/livedata_server.js:1607:17)
at /workspace/steedos-project-template/node_modules/@steedos/service-ui/main/default/routes/bootstrap.router.js:159:50
at Generator.next (<anonymous>)
at fulfilled (/workspace/steedos-project-template/node_modules/tslib/tslib.js:115:62)
at /workspace/steedos-project-template/node_modules/meteor-promise/fiber_pool.js:43:39
^C
gitpod /workspace/steedos-project-template (master) $ [SIGINT]Service stopped: namespace: steedos, nodeID: ws-4fb57439-602a-4c70-a089-05d18628d988-2564 | 1.0 | template project: package data initialization errors when initializing a new environment / switching to a new database - - [ ] Initializing a new environment

- [ ] Switching to a new database
import seal error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429612, 19). Collection minimum is Timestamp(1671429613, 92)
import finance_receive error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429618, 4). Collection minimum is Timestamp(1671429618, 49)
import measurement_unit error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429618, 56). Collection minimum is Timestamp(1671429618, 74)
import okr_objective error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429619, 17). Collection minimum is Timestamp(1671429619, 53)
import contract_types error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429619, 117). Collection minimum is Timestamp(1671429621, 15)
import contract_types error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429621, 17). Collection minimum is Timestamp(1671429621, 22)
import meeting error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429621, 54). Collection minimum is Timestamp(1671429622, 9)
import project_log error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429622, 28). Collection minimum is Timestamp(1671429622, 70)
import cost_business_reimburse_detail error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429623, 2). Collection minimum is Timestamp(1671429623, 15)
import finance_payment error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429623, 40). Collection minimum is Timestamp(1671429623, 65)
import events error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429623, 89). Collection minimum is Timestamp(1671429624, 31)
import cost_itinerary_information error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429624, 38). Collection minimum is Timestamp(1671429624, 60)
import tax_rates error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429624, 76). Collection minimum is Timestamp(1671429624, 101)
import currency error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429625, 60). Collection minimum is Timestamp(1671429626, 13)
import currency error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429626, 15). Collection minimum is Timestamp(1671429626, 16)
import cost_schedule_reimburse_detail error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429626, 56). Collection minimum is Timestamp(1671429626, 71)
import project_task error Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1671429626, 96). Collection minimum is Timestamp(1671429627, 92)
This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). The promise rejected with the reason:
Error: [not-authorized]
at MethodInvocation.apply (meteor://💻app/packages/steedos_base/server/methods/object_workflows.coffee:8:19)
at maybeAuditArgumentChecks (meteor://💻app/packages/ddp-server/livedata_server.js:1771:12)
at meteor://💻app/packages/ddp-server/livedata_server.js:1689:15
at Meteor.EnvironmentVariable.withValue (packages/meteor.js:1234:12)
at meteor://💻app/packages/ddp-server/livedata_server.js:1687:36
at new Promise (<anonymous>)
at Server.applyAsync (meteor://💻app/packages/ddp-server/livedata_server.js:1686:12)
at Server.apply (meteor://💻app/packages/ddp-server/livedata_server.js:1625:26)
at Server.call (meteor://💻app/packages/ddp-server/livedata_server.js:1607:17)
at /workspace/steedos-project-template/node_modules/@steedos/service-ui/main/default/routes/bootstrap.router.js:159:50
at Generator.next (<anonymous>)
at fulfilled (/workspace/steedos-project-template/node_modules/tslib/tslib.js:115:62)
at /workspace/steedos-project-template/node_modules/meteor-promise/fiber_pool.js:43:39
=> awaited here:
at Promise.await (/workspace/steedos-project-template/node_modules/meteor-promise/promise_server.js:60:12)
at Server.apply (meteor://💻app/packages/ddp-server/livedata_server.js:1638:22)
at Server.call (meteor://💻app/packages/ddp-server/livedata_server.js:1607:17)
at /workspace/steedos-project-template/node_modules/@steedos/service-ui/main/default/routes/bootstrap.router.js:159:50
at Generator.next (<anonymous>)
at fulfilled (/workspace/steedos-project-template/node_modules/tslib/tslib.js:115:62)
at /workspace/steedos-project-template/node_modules/meteor-promise/fiber_pool.js:43:39
^C
gitpod /workspace/steedos-project-template (master) $ [SIGINT]服务已停止: namespace: steedos, nodeID: ws-4fb57439-602a-4c70-a089-05d18628d988-2564 | non_test | template模板项目环境初始化 换新库软件包数据初始化报错 新开环境初始化 换新库 import seal error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import finance receive error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import measurement unit error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import okr objective error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import contract types error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import contract types error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import meeting error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import project log error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import cost business reimburse detail error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import finance payment error unable to read from a snapshot due to pending 
collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import events error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import cost itinerary information error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import tax rates error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import currency error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import currency error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import cost schedule reimburse detail error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp import project task error unable to read from a snapshot due to pending collection catalog changes please retry the operation snapshot timestamp is timestamp collection minimum is timestamp this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch the promise rejected with the reason error at methodinvocation apply meteor 💻app packages steedos base server methods object workflows coffee at maybeauditargumentchecks meteor 💻app packages ddp server livedata server js at meteor 💻app packages ddp server livedata server js at meteor environmentvariable withvalue packages meteor js at meteor 💻app packages ddp server livedata server 
js at new promise at server applyasync meteor 💻app packages ddp server livedata server js at server apply meteor 💻app packages ddp server livedata server js at server call meteor 💻app packages ddp server livedata server js at workspace steedos project template node modules steedos service ui main default routes bootstrap router js at generator next at fulfilled workspace steedos project template node modules tslib tslib js at workspace steedos project template node modules meteor promise fiber pool js awaited here at promise await workspace steedos project template node modules meteor promise promise server js at server apply meteor 💻app packages ddp server livedata server js at server call meteor 💻app packages ddp server livedata server js at workspace steedos project template node modules steedos service ui main default routes bootstrap router js at generator next at fulfilled workspace steedos project template node modules tslib tslib js at workspace steedos project template node modules meteor promise fiber pool js c gitpod workspace steedos project template master 服务已停止 namespace steedos nodeid ws | 0 |
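The repeated "Unable to read from a snapshot ... please retry the operation" messages in the record above are transient errors; the driver explicitly asks the caller to retry. A generic retry-with-backoff helper, sketched in plain Python (not steedos' actual import code; the exception class here is a stand-in for the driver error):

```python
import time

class TransientError(Exception):
    """Stand-in for a driver error whose message says 'please retry'."""

def with_retries(operation, attempts=5, base_delay=0.05, sleep=time.sleep):
    """Run `operation`, retrying with exponential backoff on transient
    failures; re-raise once the attempt budget is exhausted."""
    for attempt in range(attempts):
        try:
            return operation()
        except TransientError:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 0.05s, 0.1s, 0.2s, ...

# a fake seed-data import that only succeeds on the third call
calls = {"n": 0}
def flaky_import():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("please retry the operation")
    return "imported"

result = with_retries(flaky_import, sleep=lambda s: None)  # -> "imported"
```

Wrapping each per-collection import this way would turn the burst of snapshot errors into a few silent retries instead of failed records.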
162,452 | 6,153,597,911 | IssuesEvent | 2017-06-28 10:21:39 | BinPar/PRM | https://api.github.com/repos/BinPar/PRM | opened | PRM VD : EUROS TOTALES EN JUEGO ¿A QUÉ SE REFIERE? | Priority: Low | 
The amount in euros should have this format: ##.###,## €
@CristianBinpar @minigoBinpar @Al3xBinpar | 1.0 | PRM VD : EUROS TOTALES EN JUEGO ¿A QUÉ SE REFIERE? - 
The amount in euros should have this format: ##.###,## €
@CristianBinpar @minigoBinpar @Al3xBinpar | non_test | prm vd euros totales en juego ¿a qué se refiere el importe en euros debiera tener este formato € cristianbinpar minigobinpar | 0 |
59,059 | 6,627,438,611 | IssuesEvent | 2017-09-23 02:43:37 | istio/istio | https://api.github.com/repos/istio/istio | reopened | ingress error in shared cluster | bug flaky-test help wanted oncall test-failure | W0913 05:50:45.201] E0913 05:50:45.199935 892 framework.go:197] Failed to complete Init. Error unable to find ingress ip
happens sporadically (probably related to a test finishing when one is starting)
e.g.
https://k8s-gubernator.appspot.com/build/istio-prow/pull/istio_istio/739/e2e-suite-rbac-auth/524/ | 2.0 | ingress error in shared cluster - W0913 05:50:45.201] E0913 05:50:45.199935 892 framework.go:197] Failed to complete Init. Error unable to find ingress ip
happens sporadically (probably related to a test finishing when one is starting)
e.g.
https://k8s-gubernator.appspot.com/build/istio-prow/pull/istio_istio/739/e2e-suite-rbac-auth/524/ | test | ingress error in shared cluster framework go failed to complete init error unable to find ingress ip happens sporadically probably related to a test finishing when one is starting e g | 1 |
26,639 | 4,236,987,361 | IssuesEvent | 2016-07-05 20:16:23 | bireme/bvs-noticias | https://api.github.com/repos/bireme/bvs-noticias | reopened | When level 2 is not used, the content/text should occupy the whole page | task testing / validating | - To make better use of the space.
- Improves navigation and viewing of the content;
- Level 2 should be kept in case the user wants to use it.

| 1.0 | When level 2 is not used, the content/text should occupy the whole page - - To make better use of the space.
- Improves navigation and viewing of the content;
- Level 2 should be kept in case the user wants to use it.

| test | when level is not used the content text should occupy the whole page to make better use of the space improves navigation and viewing of the content level should be kept in case the user wants to use it | 1
623,820 | 19,680,736,971 | IssuesEvent | 2022-01-11 16:30:57 | input-output-hk/cardano-node | https://api.github.com/repos/input-output-hk/cardano-node | closed | [FR] - Add CLI command to display the leadership schedule | enhancement priority medium cli revision API&CLI-Backlog | **Describe the feature you'd like**
I would like to know my leadership schedule, at least the nearest upcoming leadership event, so I can plan my node maintenance and/or KES rotation. | 1.0 | [FR] - Add CLI command to display the leadership schedule - **Describe the feature you'd like**
I would like to know my leadership schedule, at least the nearest upcoming leadership event, so I can plan my node maintenance and/or KES rotation. | non_test | add cli command to display the leadership schedule describe the feature you d like i would like to know my leadership schedule at least the nearest upcoming leadership event so i can plan my node maintenance and or kes rotation | 0
186,542 | 15,075,602,314 | IssuesEvent | 2021-02-05 02:29:25 | Solobrosco/site-update | https://api.github.com/repos/Solobrosco/site-update | closed | Fix tailwind dependencies | bug documentation | Tailwind deprecated some of their features.
Update the application and start it with no warnings
npm may need some updates too... | 1.0 | Fix tailwind dependencies - Tailwind deprecated some of their features.
Update the application and start it with no warnings
npm may need some updates too... | non_test | fix tailwind dependencies tailwind deprecated some of their features update the application and start it with no warnings npm may need some updates too | 0
82,024 | 31,858,990,869 | IssuesEvent | 2023-09-15 09:31:26 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | Fix DefaultRecordMapper Javadoc to reflect actual behaviour | T: Defect C: Documentation P: Medium E: All Editions | A few "recent" changes aren't reflected correctly in the [`DefaultRecordMapper` Javadoc](https://www.jooq.org/javadoc/latest/org.jooq/org/jooq/impl/DefaultRecordMapper.html):
- `ValueTypeMapper` isn't based on whether the user type `E` is a built-in type, but on whether the `ConverterProvider` can convert between `T1` (from `Record1<T1>`) and `E`
- `RecordToRecordMapper` has a higher priority than `ValueTypeMapper` (see also: https://github.com/jOOQ/jOOQ/issues/11148)
- `ProxyMapper` is listed way too far down
----
See also:
https://groups.google.com/g/jooq-user/c/XMD2Rc0kEqQ | 1.0 | Fix DefaultRecordMapper Javadoc to reflect actual behaviour - A few "recent" changes aren't reflected correctly in the [`DefaultRecordMapper` Javadoc](https://www.jooq.org/javadoc/latest/org.jooq/org/jooq/impl/DefaultRecordMapper.html):
- `ValueTypeMapper` isn't based on whether the user type `E` is a built-in type, but on whether the `ConverterProvider` can convert between `T1` (from `Record1<T1>`) and `E`
- `RecordToRecordMapper` has a higher priority than `ValueTypeMapper` (see also: https://github.com/jOOQ/jOOQ/issues/11148)
- `ProxyMapper` is listed way too far down
----
See also:
https://groups.google.com/g/jooq-user/c/XMD2Rc0kEqQ | non_test | fix defaultrecordmapper javadoc to reflect actual behaviour a few recent changes aren t reflected correctly in the valuetypemapper isn t based on whether the user type e is a built in type but on whether the converterprovider can convert between from and e recordtorecordmapper has a higher priority than valuetypemapper see also proxymapper is listed way too far down see also | 0 |
180,184 | 13,925,153,826 | IssuesEvent | 2020-10-21 16:27:40 | EnterpriseDB/edb-ansible | https://api.github.com/repos/EnterpriseDB/edb-ansible | closed | Move with_dict servers in the roles to manage and iteration of roles' tasks | In_Testing REL_1_0_1 enhancement | Currently, users need to add with_dict as given below in the playbook to use the roles.
```
- name: Iterate through repo role with items from hosts file
  include_role:
    name: setup_repo
  with_dict: "{{ servers }}"
```
It will be simpler if we can skip using with_dict in the playbook. Something like given below:
```
roles:
  - setup_repo
``` | 1.0 | Move with_dict servers in the roles to manage and iteration of roles' tasks - Currently, users need to add with_dict as given below in the playbook to use the roles.
```
- name: Iterate through repo role with items from hosts file
  include_role:
    name: setup_repo
  with_dict: "{{ servers }}"
```
It will be simpler if we can skip using with_dict in the playbook. Something like given below:
```
roles:
  - setup_repo
``` | test | move with dict servers in the roles to manage and iteration of roles tasks currently users need to add with dict as given below in the playbook to use the roles name iterate through repo role with items from hosts file include role name setup repo with dict servers it will be simpler if we can skip using with dict in the playbook something like given below roles setup repo | 1 |
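As a rough model of what the edb-ansible change above amounts to — an illustrative Python sketch, not Ansible internals, with made-up task and server names — the `with_dict` loop can sit with the caller (today) or inside the role (proposed); either way every task still runs once per server entry:

```python
# Caller-side loop, as `with_dict: "{{ servers }}"` does today.
def run_role_with_dict(role_tasks, servers):
    return [(task, name) for name, _cfg in servers.items() for task in role_tasks]

# Role-side loop: the playbook just lists the role once.
def run_role_self_iterating(role_tasks, servers):
    return [(task, name) for task in role_tasks for name in servers]

servers = {"pg1": {"port": 5432}, "pg2": {"port": 5433}}
tasks = ["install", "configure"]

# Same (task, server) pairs are executed either way; the change is
# ergonomic for playbook authors, not behavioural.
assert sorted(run_role_with_dict(tasks, servers)) == \
       sorted(run_role_self_iterating(tasks, servers))
print("same tasks either way")
```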
132,188 | 12,500,389,738 | IssuesEvent | 2020-06-01 22:09:42 | ibm-garage-cloud/planning | https://api.github.com/repos/ibm-garage-cloud/planning | closed | Rename account administrator to account manager | Persona: SysAdmin documentation enhancement | One of the three access groups is called account administrator, created by the `terraform/scripts/acp-acct` script. Rename this role from account administrator to account manager so as to better distinguish it from the environment administrator role and to align it with the name of the main IAM policy role for this user, the `account-management` role.
Requires:
- [ ] Update docs to refer to new name
- [ ] Rename script to new name
| 1.0 | Rename account administrator to account manager - One of the three access groups is called account administrator, created by the `terraform/scripts/acp-acct` script. Rename this role from account administrator to account manager so as to better distinguish it from the environment administrator role and to align it with the name of the main IAM policy role for this user, the `account-management` role.
Requires:
- [ ] Update docs to refer to new name
- [ ] Rename script to new name
| non_test | rename account administrator to account manager one of the three access groups is called account administrator created by the terraform scripts acp acct script rename this role from account administrator to account manager so as to better distinguish it from the environment administrator role and to align it with the name of the main iam policy role for this user the account management role requires update docs to refer to new name rename script to new name | 0 |
114,442 | 17,209,458,700 | IssuesEvent | 2021-07-19 00:12:57 | turkdevops/javascript-sdk | https://api.github.com/repos/turkdevops/javascript-sdk | opened | WS-2019-0183 (Medium) detected in lodash.defaultsdeep-4.3.2.tgz | security vulnerability | ## WS-2019-0183 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash.defaultsdeep-4.3.2.tgz</b></p></summary>
<p>The lodash method `_.defaultsDeep` exported as a module.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash.defaultsdeep/-/lodash.defaultsdeep-4.3.2.tgz">https://registry.npmjs.org/lodash.defaultsdeep/-/lodash.defaultsdeep-4.3.2.tgz</a></p>
<p>Path to dependency file: javascript-sdk/package.json</p>
<p>Path to vulnerable library: javascript-sdk/node_modules/lodash.defaultsdeep/package.json</p>
<p>
Dependency Hierarchy:
- nightwatch-0.9.21.tgz (Root Library)
- :x: **lodash.defaultsdeep-4.3.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/javascript-sdk/commit/2ed96566365ee89d8a9b1250ccd7c049281ed09c">2ed96566365ee89d8a9b1250ccd7c049281ed09c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash.defaultsdeep before 4.6.1 is vulnerable to prototype pollution. The function mergeWith() may allow a malicious user to modify the prototype of Object via {constructor: {prototype: {...}}} causing the addition or modification of an existing property that will exist on all objects.
<p>Publish Date: 2019-08-14
<p>URL: <a href=https://github.com/lodash/lodash/compare/4.6.0...4.6.1>WS-2019-0183</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1068">https://www.npmjs.com/advisories/1068</a></p>
<p>Release Date: 2019-08-14</p>
<p>Fix Resolution: 4.6.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2019-0183 (Medium) detected in lodash.defaultsdeep-4.3.2.tgz - ## WS-2019-0183 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash.defaultsdeep-4.3.2.tgz</b></p></summary>
<p>The lodash method `_.defaultsDeep` exported as a module.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash.defaultsdeep/-/lodash.defaultsdeep-4.3.2.tgz">https://registry.npmjs.org/lodash.defaultsdeep/-/lodash.defaultsdeep-4.3.2.tgz</a></p>
<p>Path to dependency file: javascript-sdk/package.json</p>
<p>Path to vulnerable library: javascript-sdk/node_modules/lodash.defaultsdeep/package.json</p>
<p>
Dependency Hierarchy:
- nightwatch-0.9.21.tgz (Root Library)
- :x: **lodash.defaultsdeep-4.3.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/javascript-sdk/commit/2ed96566365ee89d8a9b1250ccd7c049281ed09c">2ed96566365ee89d8a9b1250ccd7c049281ed09c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash.defaultsdeep before 4.6.1 is vulnerable to prototype pollution. The function mergeWith() may allow a malicious user to modify the prototype of Object via {constructor: {prototype: {...}}} causing the addition or modification of an existing property that will exist on all objects.
<p>Publish Date: 2019-08-14
<p>URL: <a href=https://github.com/lodash/lodash/compare/4.6.0...4.6.1>WS-2019-0183</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1068">https://www.npmjs.com/advisories/1068</a></p>
<p>Release Date: 2019-08-14</p>
<p>Fix Resolution: 4.6.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | ws medium detected in lodash defaultsdeep tgz ws medium severity vulnerability vulnerable library lodash defaultsdeep tgz the lodash method defaultsdeep exported as a module library home page a href path to dependency file javascript sdk package json path to vulnerable library javascript sdk node modules lodash defaultsdeep package json dependency hierarchy nightwatch tgz root library x lodash defaultsdeep tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash defaultsdeep before is vulnerable to prototype pollution the function mergewith may allow a malicious user to modify the prototype of object via constructor prototype causing the addition or modification of an existing property that will exist on all objects publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
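The record above concerns prototype pollution in lodash's `_.defaultsDeep`; the 4.6.1 fix filters merge keys that would reach the prototype chain. A hedged illustration of that guard pattern follows — Python has no prototype chain, so this is only an analog, and the function and key names are mine, not lodash's:

```python
# Keys that, in JavaScript, let a deep merge walk onto shared/prototype state.
UNSAFE_KEYS = {"__proto__", "constructor", "prototype"}

def defaults_deep(target, source):
    """Recursively copy entries from source into target where target lacks
    them, skipping keys that are known pollution vectors."""
    for key, value in source.items():
        if key in UNSAFE_KEYS:
            continue  # drop the pollution vector instead of merging it
        if isinstance(value, dict) and isinstance(target.get(key), dict):
            defaults_deep(target[key], value)
        elif key not in target:
            target[key] = value
    return target

# A payload shaped like the lodash exploit: the dangerous key is ignored.
payload = {"__proto__": {"isAdmin": True}, "a": {"b": 1}}
merged = defaults_deep({"a": {"c": 2}}, payload)
print(merged)  # {'a': {'c': 2, 'b': 1}} — no '__proto__' key merged
```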
198,862 | 6,978,497,817 | IssuesEvent | 2017-12-12 17:42:46 | JKGDevs/JediKnightGalaxies | https://api.github.com/repos/JKGDevs/JediKnightGalaxies | opened | Animation freezes after being kicked | bug priority:medium | 
If another player kicks you, after getting up your animation will freeze and you'll be unable to move or aim. However, you can get out of the "freeze" by switching to melee and then performing a kick. This will resume the animation and unstuck you. | 1.0 | Animation freezes after being kicked - 
If another player kicks you, after getting up your animation will freeze and you'll be unable to move or aim. However, you can get out of the "freeze" by switching to melee and then performing a kick. This will resume the animation and unstuck you. | non_test | animation freezes after being kicked if another player kicks you after getting up your animation will freeze and you ll be unable to move or aim however you can get out of the freeze by switching to melee and then performing a kick this will resume the animation and unstuck you | 0 |
156,238 | 12,302,254,744 | IssuesEvent | 2020-05-11 16:40:12 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | [ Latest Posts ] Max. number of words control doesn't react to keyboard input | [Block] Latest Posts [Status] In Progress [Type] Bug | **Describe the bug**
The default value for excerpt display is 55 words. There are three methods available, it seems, to modify that number in the sidebar
1) Slider
2) up/down 1
3) Type in a new value.
The third method, typing in a new value, doesn't work.
**To reproduce**
Steps to reproduce the behavior:
1. Insert Latest Posts block
2. Open the Block settings sidebar
3. Enable Post Content
4. Check Excerpt
5. Try modify the number of words displayed with the keyboard
**Expected behavior**
I would have expected that I would be able to modify the number by typing in the right value.
**Screenshots**

**Desktop (please complete the following information):**
- OS: [e.g. iOS] 10.15.3
- Browser [e.g. chrome, safari] Chrome
- Version [e.g. 22] 80.0.3987.163
**Additional context**
Gutenberg 7.8.1
WordPress 5.4
| 1.0 | [ Latest Posts ] Max. number of words control doesn't react to keyboard input - **Describe the bug**
The default value for excerpt display is 55 words. There are three methods available, it seems, to modify that number in the sidebar
1) Slider
2) up/down 1
3) Type in a new value.
The third method, typing in a new value, doesn't work.
**To reproduce**
Steps to reproduce the behavior:
1. Insert Latest Posts block
2. Open the Block settings sidebar
3. Enable Post Content
4. Check Excerpt
5. Try modify the number of words displayed with the keyboard
**Expected behavior**
I would have expected that I would be able to modify the number by typing in the right value.
**Screenshots**

**Desktop (please complete the following information):**
- OS: [e.g. iOS] 10.15.3
- Browser [e.g. chrome, safari] Chrome
- Version [e.g. 22] 80.0.3987.163
**Additional context**
Gutenberg 7.8.1
WordPress 5.4
| test | max number of words control doesn t react to keyboard input describe the bug the default value for excerpt display is words there are three methods available it seems to modify that number in the sidebar slider up down type in a new value the third method typing in a new value doesn t work to reproduce steps to reproduce the behavior insert latest posts block open the block settings sidebar enable post content check excerpt try modify the number of words displayed with the keyboard expected behavior i would have expected that i would be able to modify the number by typing in the right value screenshots desktop please complete the following information os browser chrome version additional context gutenberg wordpress | 1 |
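The intended keyboard behaviour for the range control in the record above can be sketched as a small pure function — the name, the 10–100 bounds, and the fall-back-to-previous rule are illustrative assumptions, not the actual Gutenberg `RangeControl` API:

```python
def apply_typed_value(text, previous, minimum=10, maximum=100):
    """Parse a typed value for a numeric range field: accept digits,
    clamp to [minimum, maximum], keep the old value on bad input."""
    try:
        value = int(text)
    except ValueError:
        return previous  # e.g. user typed letters: don't change anything
    return max(minimum, min(maximum, value))

print(apply_typed_value("30", 55))    # 30 — typed value accepted
print(apply_typed_value("999", 55))   # 100 — clamped to max
print(apply_typed_value("abc", 55))   # 55 — invalid input keeps old value
```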
4,747 | 3,882,125,754 | IssuesEvent | 2016-04-13 08:39:02 | picnicss/picnic | https://api.github.com/repos/picnicss/picnic | closed | Update the documentation | usability | Some plugin's docs are old, not updated or broken. Update it and:
- Change the button "Plugins" for "Documentation" which has a brief intro and a couple of links to Github.
- Change "Documentation" for "Getting Started" (which links to the section in the README in github)
- Link to the CONTRIBUTING.md guide or an appropriate place in github (as developers use github nowadays anyway) within the new "Documentation". | True | Update the documentation - Some plugin's docs are old, not updated or broken. Update it and:
- Change the button "Plugins" for "Documentation" which has a brief intro and a couple of links to Github.
- Change "Documentation" for "Getting Started" (which links to the section in the README in github)
- Link to the CONTRIBUTING.md guide or an appropriate place in github (as developers use github nowadays anyway) within the new "Documentation". | non_test | update the documentation some plugin s docs are old not updated or broken update it and change the button plugins for documentation which has a brief intro and a couple of links to github change documentation for getting started which links to the section in the readme in github link to the contributing md guide or an appropriate place in github as developers use github nowadays anyway within the new documentation | 0 |
487,264 | 14,021,306,218 | IssuesEvent | 2020-10-29 21:03:56 | opendifferentialprivacy/smartnoise-sdk | https://api.github.com/repos/opendifferentialprivacy/smartnoise-sdk | closed | The SQL thresholding math seems off: tau uses a Laplace formula, but the noise is Gaussian | Priority 1: High bug | Looking at `private_reader.py`, it seems that you're using [Gaussian](https://github.com/opendifferentialprivacy/whitenoise-system/blob/60743c0a6bd2002296764a5c388e999c2b99a7dd/sdk/opendp/whitenoise/sql/private_reader.py#L156) as a default mechanism, but the threshold `tau` is computed [based on a Laplace formula](https://github.com/opendifferentialprivacy/whitenoise-system/blob/60743c0a6bd2002296764a5c388e999c2b99a7dd/sdk/opendp/whitenoise/sql/private_reader.py#L123), so it's probably wrong.
This file also imports Laplace but doesn't use it. This should probably be caught by a linter. | 1.0 | The SQL thresholding math seems off: tau uses a Laplace formula, but the noise is Gaussian - Looking at `private_reader.py`, it seems that you're using [Gaussian](https://github.com/opendifferentialprivacy/whitenoise-system/blob/60743c0a6bd2002296764a5c388e999c2b99a7dd/sdk/opendp/whitenoise/sql/private_reader.py#L156) as a default mechanism, but the threshold `tau` is computed [based on a Laplace formula](https://github.com/opendifferentialprivacy/whitenoise-system/blob/60743c0a6bd2002296764a5c388e999c2b99a7dd/sdk/opendp/whitenoise/sql/private_reader.py#L123), so it's probably wrong.
This file also imports Laplace but doesn't use it. This should probably be caught by a linter. | non_test | the sql thresholding math seems off tau uses a laplace formula but the noise is gaussian looking at private reader py it seems that you re using as a default mechanism but the threshold tau is computed so it s probably wrong this file also imports laplace but doesn t use it this should probably be caught by a linter | 0 |
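To make the mismatch in the record above concrete: at the same noise scale, the Laplace and Gaussian tail quantiles differ, so a `tau` derived from a Laplace formula does not give the intended tail bound under Gaussian noise. A numeric sketch using textbook tail quantiles — not SmartNoise's actual code:

```python
from math import log
from statistics import NormalDist

def laplace_threshold(scale, delta):
    # P(X > t) = 0.5 * exp(-t/scale) = delta  =>  t = scale * ln(1/(2*delta))
    return scale * log(1.0 / (2.0 * delta))

def gaussian_threshold(sigma, delta):
    # exact upper-tail quantile: P(X > t) = delta
    return sigma * NormalDist().inv_cdf(1.0 - delta)

scale, delta = 1.0, 1e-9
print(laplace_threshold(scale, delta))   # ≈ 20.03
print(gaussian_threshold(scale, delta))  # ≈ 6.0 — same scale, very different tau
```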
104,850 | 9,011,424,890 | IssuesEvent | 2019-02-05 14:40:16 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: backup2TB failed | C-test-failure O-robot | SHA: https://github.com/cockroachdb/cockroach/commits/af891db5e120ccc272bb9a10482ac42d263a185e
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=backup2TB PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1124507&tab=buildLog
```
The test failed on release-2.1:
test.go:743,test.go:755: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod create teamcity-1124507-backup2tb -n 10 --gce-machine-type=n1-standard-4 --gce-zones=us-central1-b,us-west1-b,europe-west2-b --local-ssd-no-ext4-barrier returned:
stderr:
stdout:
Creating cluster teamcity-1124507-backup2tb with 10 nodes
Unable to create cluster:
in provider: gce: Command: gcloud [compute instances create --subnet default --maintenance-policy MIGRATE --scopes default,storage-rw --image ubuntu-1604-xenial-v20190122a --image-project ubuntu-os-cloud --boot-disk-size 10 --boot-disk-type pd-ssd --service-account 21965078311-compute@developer.gserviceaccount.com --local-ssd interface=SCSI --machine-type n1-standard-4 --labels lifetime=12h0m0s --metadata-from-file startup-script=/home/agent/temp/buildTmp/gce-startup-script954506188 --project cockroach-ephemeral]
Output: ERROR: (gcloud.compute.instances.create) Could not fetch resource:
- The zone 'projects/cockroach-ephemeral/zones/us-central1-b' does not have enough resources available to fulfill the request. '(resource type:pd-ssd)'.
: exit status 1
Cleaning up...
: exit status 1
``` | 1.0 | roachtest: backup2TB failed - SHA: https://github.com/cockroachdb/cockroach/commits/af891db5e120ccc272bb9a10482ac42d263a185e
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=backup2TB PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1124507&tab=buildLog
```
The test failed on release-2.1:
test.go:743,test.go:755: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod create teamcity-1124507-backup2tb -n 10 --gce-machine-type=n1-standard-4 --gce-zones=us-central1-b,us-west1-b,europe-west2-b --local-ssd-no-ext4-barrier returned:
stderr:
stdout:
Creating cluster teamcity-1124507-backup2tb with 10 nodes
Unable to create cluster:
in provider: gce: Command: gcloud [compute instances create --subnet default --maintenance-policy MIGRATE --scopes default,storage-rw --image ubuntu-1604-xenial-v20190122a --image-project ubuntu-os-cloud --boot-disk-size 10 --boot-disk-type pd-ssd --service-account 21965078311-compute@developer.gserviceaccount.com --local-ssd interface=SCSI --machine-type n1-standard-4 --labels lifetime=12h0m0s --metadata-from-file startup-script=/home/agent/temp/buildTmp/gce-startup-script954506188 --project cockroach-ephemeral]
Output: ERROR: (gcloud.compute.instances.create) Could not fetch resource:
- The zone 'projects/cockroach-ephemeral/zones/us-central1-b' does not have enough resources available to fulfill the request. '(resource type:pd-ssd)'.
: exit status 1
Cleaning up...
: exit status 1
``` | test | roachtest failed sha parameters to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach stdbuf ol el make stressrace tests pkg roachtest testtimeout stressflags maxtime timeout tee tmp stress log failed test the test failed on release test go test go home agent work go src github com cockroachdb cockroach bin roachprod create teamcity n gce machine type standard gce zones us b us b europe b local ssd no barrier returned stderr stdout creating cluster teamcity with nodes unable to create cluster in provider gce command gcloud output error gcloud compute instances create could not fetch resource the zone projects cockroach ephemeral zones us b does not have enough resources available to fulfill the request resource type pd ssd exit status cleaning up exit status | 1 |
14,875 | 3,428,332,967 | IssuesEvent | 2015-12-10 08:59:06 | bedita/bedita | https://api.github.com/repos/bedita/bedita | closed | Managing dateItems before 1000 AD | Module - Events Priority - High Status - Test Topic - Database Type - Enhancement | In some projects we need to handle dateitems for events and people (card) in remote past.
Current date_items Bedita table is using mysql *datetime* data type, so we can't manage dates before 1000 AD unless *sqlmode_allow_invalid_dates* is enabled.
http://dev.mysql.com/doc/refman/5.6/en/sql-mode.html#sqlmode_allow_invalid_dates
Even so, dates BC aren't possible.
So how can we:
- handle dates BC in events and cards (and maybe in other models)
- store a generic 'YEAR' value (2015-00-00??) without months and days
?
| 1.0 | Managing dateItems before 1000 AD - In some projects we need to handle dateitems for events and people (card) in remote past.
Current date_items Bedita table is using mysql *datetime* data type, so we can't manage dates before 1000 AD unless *sqlmode_allow_invalid_dates* is enabled.
http://dev.mysql.com/doc/refman/5.6/en/sql-mode.html#sqlmode_allow_invalid_dates
Even so, dates BC aren't possible.
So how can we:
- handle dates BC in events and cards (and maybe in other models)
- store a generic 'YEAR' value (2015-00-00??) without months and days
?
| test | managing dateitems before ad in some projects we need to handle dateitems for events and people card in remote past current date items bedita table is using mysql datetime data type so we can t manage dates before ac unless sqlmode allow invalid dates is enabled even so dates bc aren t possible so how can we handle dates bc in events and cards and maybe in other models store a generic year value without months and days | 1 |
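One common way around the `DATETIME` limits discussed in the record above is astronomical year numbering (1 BC = year 0, 2 BC = −1) stored as a signed integer, with nullable month/day for bare-year values. A sketch — the encoding and function name are illustrative, not bedita's actual schema:

```python
def encode_historic_date(year, era="AD", month=None, day=None):
    """Return (astronomical_year, month, day); month/day stay None for
    year-only values such as a bare '2015'."""
    if era == "BC":
        astronomical = 1 - year  # 44 BC -> -43, 1 BC -> 0
    else:
        astronomical = year
    return (astronomical, month, day)

print(encode_historic_date(44, era="BC"))      # (-43, None, None)
print(encode_historic_date(2015))              # (2015, None, None) — bare year
print(encode_historic_date(987, "AD", 6, 15))  # (987, 6, 15) — pre-1000 AD
```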
118,881 | 4,757,273,096 | IssuesEvent | 2016-10-24 16:07:46 | Pliohub/Plio | https://api.github.com/repos/Pliohub/Plio | opened | Work items not deleting from Plio home screen | bug top priority | This is a duplicate of an existing issue number (not sure which one). It is now extremely high priority to fix, since I can't create marketing screenshots with this data in the screen. | 1.0 | Work items not deleting from Plio home screen - This is a duplicate of an existing issue number (not sure which one). It is now extremely high priority to fix, since I can't create marketing screenshots with this data in the screen. | non_test | work items not deleting from plio home screen this is a duplicate of an existing issue number not sure which one it is now extremely high priority to fix since i can t create marketing screenshots with this data in the screen | 0 |
23,378 | 4,015,820,233 | IssuesEvent | 2016-05-15 06:13:02 | Gamealition/Survival-Skripts | https://api.github.com/repos/Gamealition/Survival-Skripts | closed | Reuben not detecting when Firework used on him | bug needs testing | - **Example code:**
```
# Sneak whilst holding any firework
on sneak toggle:
    if player's held item is firework:
        send "*** Holding firework"
```
- **Result:** Skript does not appear to detect held firework
| 1.0 | Reuben not detecting when Firework used on him - - **Example code:**
```
# Sneak whilst holding any firework
on sneak toggle:
    if player's held item is firework:
        send "*** Holding firework"
```
- **Result:** Skript does not appear to detect held firework
| test | reuben not detecting when firework used on him example code sneak whilst holding any firework on sneak toggle if player s held item is firework send holding firework result skript does not appear to detect held firework | 1 |
9,321 | 6,844,700,864 | IssuesEvent | 2017-11-13 03:24:01 | polymec/polymec-dev | https://api.github.com/repos/polymec/polymec-dev | opened | Lua and C memory management needs improvement | bug performance | Currently, the garbage collected objects in C are managed with the Boehm collector, and Lua's collector is vanilla malloc/realloc/free. This results in some truly astonishing scenarios when lots of small C-garbage-collected objects are created in Lua.
Aside from this, our use of the Boehm collector kind of precludes us from using other allocators, since Boehm's GC scheme needs to keep track of the entire heap. Strategies for not using the Boehm collector include
1. Use lua's garbage collector for C garbage collection (not all objects, just designated small ones). This would be nice because it would unify the C and Lua garbage collection scheme. It would be suboptimal because it would not be clear how to collect objects that exist purely in C without deleting their reference from the Lua registry.
2. Use reference counting for C. This makes "collection" deterministic but the inconsistency still exists between Lua and C's idea of who is alive amongst collectible objects. I think if we were using GC for large C objects this idea might make more sense. As it stands now I'm not in favor of this option.
3. Find another less all-encompassing GC than the Boehm collector. Collectors for C/C++ aren't thick on the ground, though, and this would still involve two garbage collectors in one codebase.
At the moment, option 1 offers the most simplicity with a penalty of longer lifetimes for small C-only objects. Does it matter? It may be worth a try, especially if we can figure out how to tell Lua that we're done with an object. | True | Lua and C memory management needs improvement - Currently, the garbage collected objects in C are managed with the Boehm collector, and Lua's collector is vanilla malloc/realloc/free. This results in some truly astonishing scenarios when lots of small C-garbage-collected objects are created in Lua.
Aside from this, our use of the Boehm collector kind of precludes us from using other allocators, since Boehm's GC scheme needs to keep track of the entire heap. Strategies for not using the Boehm collector include
1. Use lua's garbage collector for C garbage collection (not all objects, just designated small ones). This would be nice because it would unify the C and Lua garbage collection scheme. It would be suboptimal because it would not be clear how to collect objects that exist purely in C without deleting their reference from the Lua registry.
2. Use reference counting for C. This makes "collection" deterministic but the inconsistency still exists between Lua and C's idea of who is alive amongst collectible objects. I think if we were using GC for large C objects this idea might make more sense. As it stands now I'm not in favor of this option.
3. Find another less all-encompassing GC than the Boehm collector. Collectors for C/C++ aren't thick on the ground, though, and this would still involve two garbage collectors in one codebase.
At the moment, option 1 offers the most simplicity with a penalty of longer lifetimes for small C-only objects. Does it matter? It may be worth a try, especially if we can figure out how to tell Lua that we're done with an object. | non_test | lua and c memory management needs improvement currently the garbage collected objects in c are managed with the boehm collector and lua s collector is vanilla malloc realloc free this results in some truly astonishing scenarios when lots of small c garbage collected objects are created in lua aside from this our use of the boehm collector kind of precludes us from using other allocators since boehm s gc scheme needs to keep track of the entire heap strategies for not using the boehm collector include use lua s garbage collector for c garbage collection not all objects just designated small ones this would be nice because it would unify the c and lua garbage collection scheme it would be suboptimal because it would not be clear how to collect objects that exist purely in c without deleting their reference from the lua registry use reference counting for c this makes collection deterministic but the inconsistency still exists between lua and c s idea of who is alive amongst collectible objects i think if we were using gc for large c objects this idea might make more sense as it stands now i m not in favor of this option find another less all encompassing gc than the boehm collector collectors for c c aren t thick on the ground though and this would still involve two garbage collectors in one codebase at the moment option offers the most simplicity with a penalty of longer lifetimes for small c only objects does it matter it may be worth a try especially if we can figure out how to tell lua that we re done with an object | 0 |
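Option 2 in the record above (reference counting for C objects) can be sketched in a few lines; this is a generic retain/release model showing the deterministic-release property, not polymec or Lua API:

```python
class Refcounted:
    """Explicit retain/release, as a C-side refcounting scheme would provide."""
    def __init__(self, name, on_free):
        self.name, self.count, self.on_free = name, 1, on_free

    def retain(self):
        self.count += 1
        return self

    def release(self):
        self.count -= 1
        if self.count == 0:
            self.on_free(self.name)  # destruction happens at a known point

freed = []
obj = Refcounted("mesh", freed.append)
obj.retain()   # a second owner (e.g. the Lua registry) takes a reference
obj.release()  # first owner done; object still alive
obj.release()  # last owner done; freed deterministically, no collector needed
print(freed)   # ['mesh']
```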
68,702 | 7,107,911,235 | IssuesEvent | 2018-01-16 21:44:10 | asottile/git-code-debt | https://api.github.com/repos/asottile/git-code-debt | closed | Add selenium tests | test-coverage | This currently needs some real browser integration tests as js is now a major part of the flow.
Currently things that need testing:
- Expand / collapse works on index
- Expand / collapse is sticky on index (refresh page and re-verify)
- Date picker works on graph
| 1.0 | Add selenium tests - This currently needs some real browser integration tests as js is now a major part of the flow.
Currently things that need testing:
- Expand / collapse works on index
- Expand / collapse is sticky on index (refresh page and re-verify)
- Date picker works on graph
| test | add selenium tests this currently needs some real browser integration tests as js is now a major part of the flow currently things that need testing expand collapse works on index expand collapse is sticky on index refresh page and re verify date picker works on graph | 1 |
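The "sticky on index (refresh page and re-verify)" item above implies the expand/collapse state must survive a page reload, typically via browser storage. A Python sketch of the behavior such a test would verify, using an in-memory dict as a stand-in for `localStorage` (names and keys are illustrative, not the project's code):

```python
storage = {}  # stand-in for window.localStorage


class IndexPage:
    def __init__(self):
        # on load, restore the persisted state (default: expanded)
        self.expanded = storage.get("index.expanded", "true") == "true"

    def toggle(self):
        self.expanded = not self.expanded
        storage["index.expanded"] = "true" if self.expanded else "false"


page = IndexPage()
page.toggle()            # collapse the section
refreshed = IndexPage()  # "refresh": a new page instance reads storage back
```

A real Selenium test would drive a browser through the same toggle/refresh/re-check sequence instead of instantiating objects directly.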
292,604 | 25,225,318,679 | IssuesEvent | 2022-11-14 15:34:48 | lowRISC/opentitan | https://api.github.com/repos/lowRISC/opentitan | closed | [rom-e2e] rom_e2e_debug | Type:Task SW:ROM Milestone:V2 Component:Rom/E2e/Test | **Testpoint name:** [rom_e2e_debug](https://cs.opensource.google/opentitan/opentitan/+/master:sw/device/silicon_creator/rom/data/rom_e2e_testplan.hjson?q=rom_e2e_debug)
**Contact person:** @alphan
**Description:** Verify that ROM can be debugged in appropriate life cycle states.
`CREATOR_SW_CFG_ROM_EXEC_EN` should be set to `0`.
- Verify that ROM can be debugged in TEST, DEV, and RMA life cycle states.
- Test debugging with commands to GDB connected via OpenOCD and JTAG
- Read back GDB responses to check they match expected behaviour
- Connect a debugger and verify that ROM halts very early in `rom_start.S`.
- Trial the following activities within GDB, ensuring the correct
behaviour is seen:
- Halting execution and resetting
- Setting, hitting and deleting breakpoints using all available hardware breakpoints
- Single stepping
- In particular single step over `wfi`
- Reading and writing all registers
- Reading all CSRs and writing some (write set TBD)
- Reading and writing memory (both SRAM and device)
- Setting the PC to jump to some location
- Executing code from GDB (using the call command)
- Verify that ROM fails to boot with `BFV:0142500d`.
| 1.0 | [rom-e2e] rom_e2e_debug - **Testpoint name:** [rom_e2e_debug](https://cs.opensource.google/opentitan/opentitan/+/master:sw/device/silicon_creator/rom/data/rom_e2e_testplan.hjson?q=rom_e2e_debug)
**Contact person:** @alphan
**Description:** Verify that ROM can be debugged in appropriate life cycle states.
`CREATOR_SW_CFG_ROM_EXEC_EN` should be set to `0`.
- Verify that ROM can be debugged in TEST, DEV, and RMA life cycle states.
- Test debugging with commands to GDB connected via OpenOCD and JTAG
- Read back GDB responses to check they match expected behaviour
- Connect a debugger and verify that ROM halts very early in `rom_start.S`.
- Trial the following activities within GDB, ensuring the correct
behaviour is seen:
- Halting execution and resetting
- Setting, hitting and deleting breakpoints using all available hardware breakpoints
- Single stepping
- In particular single step over `wfi`
- Reading and writing all registers
- Reading all CSRs and writing some (write set TBD)
- Reading and writing memory (both SRAM and device)
- Setting the PC to jump to some location
- Executing code from GDB (using the call command)
- Verify that ROM fails to boot with `BFV:0142500d`.
| test | rom debug testpoint name contact person alphan description verify that rom can be debugged in appropriate life cycle states creator sw cfg rom exec en should be set to verify that rom can be debugged in test dev and rma life cycle states test debugging with commands to gdb connected via openocd and jtag read back gdb responses to check they match expected behaviour connect a debugger and verify that rom halts very early in rom start s trial the following activities within gdb ensuring the correct behaviour is seen halting execution and resetting setting hitting and deleting breakpoints using all available hardware breakpoints single stepping in particular single step over wfi reading and writing all registers reading all csrs and writing some write set tbd reading and writing memory both sram and device setting the pc to jump to some location executing code from gdb using the call command verify that rom fails to boot with bfv | 1 |
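One step in the testpoint above is "Read back GDB responses to check they match expected behaviour". A minimal sketch of such a matcher using regular expressions; the sample response text and the helper are hypothetical, not OpenTitan's actual harness:

```python
import re


def check_response(response: str, pattern: str) -> bool:
    """Return True when a GDB response matches the expected regex."""
    return re.search(pattern, response) is not None


# e.g. after "info registers pc" we might expect a hex program counter
sample = "pc             0x8084  0x8084 <rom_start>"
ok = check_response(sample, r"pc\s+0x[0-9a-f]+")
```

A harness built this way can assert on each GDB reply (breakpoint hits, register reads, memory dumps) without hard-coding exact addresses.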
338,427 | 30,297,405,347 | IssuesEvent | 2023-07-10 00:58:11 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | DISABLED test_binary_op__foreach_clamp_max_is_fastpath_True_cuda_bfloat16 (__main__.TestForeachCUDA) | triaged module: flaky-tests skipped module: mta | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op__foreach_clamp_max_is_fastpath_True_cuda_bfloat16&suite=TestForeachCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/undefined).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op__foreach_clamp_max_is_fastpath_True_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @crcrpar @mcarilli @ngimel | 1.0 | DISABLED test_binary_op__foreach_clamp_max_is_fastpath_True_cuda_bfloat16 (__main__.TestForeachCUDA) - Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op__foreach_clamp_max_is_fastpath_True_cuda_bfloat16&suite=TestForeachCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/undefined).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op__foreach_clamp_max_is_fastpath_True_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @crcrpar @mcarilli @ngimel | test | disabled test binary op foreach clamp max is fastpath true cuda main testforeachcuda platforms rocm this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with failures and successes debugging instructions after clicking on the recent samples link do not assume things are okay if the ci is green we now shield flaky tests from developers so ci will thus be green but it will be harder to parse the logs to find relevant log snippets click on the workflow logs linked above click on the test step of the job so that it is expanded otherwise the grepping will not work grep for test binary op foreach clamp max is fastpath true cuda there should be several instances run as flaky tests are rerun in ci from which you can study the logs test file path test foreach py cc crcrpar mcarilli ngimel | 1 |
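The debugging instructions above boil down to grepping the expanded CI log for the test name and inspecting every rerun. A small stdlib sketch of that filtering step; the log lines here are made up for illustration:

```python
def grep(lines, needle):
    """Yield (line_number, line) pairs containing the needle, like `grep -n`."""
    for i, line in enumerate(lines, start=1):
        if needle in line:
            yield i, line


log = [
    "collecting tests ...",
    "FAILED test_foreach.py::TestForeachCUDA::"
    "test_binary_op__foreach_clamp_max_is_fastpath_True_cuda_bfloat16",
    "rerunning 1 flaky test ...",
    "PASSED test_foreach.py::TestForeachCUDA::"
    "test_binary_op__foreach_clamp_max_is_fastpath_True_cuda_bfloat16",
]
hits = list(grep(log, "test_binary_op__foreach_clamp_max_is_fastpath_True_cuda_bfloat16"))
```

Flaky tests are rerun in CI, so several hits per log are expected; comparing the FAILED and PASSED instances is what the instructions ask for.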
62,823 | 6,817,835,881 | IssuesEvent | 2017-11-07 01:33:50 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | flutter_driver's waitUntilNoTransientCallbacks doesn't await Image asset loading | dev: tests dev: tool dev: tool - drive team: flakes | I have a screenshot test using flutter_driver where I'm taking a screenshot of a Widget which contains an Image widget which is created via `new Image.asset('myasset.png', key: const Key('a'))`. The test code looks like this:
```
test('Widget with Image', () async {
await driver.waitFor(find.byValueKey('a'));
await driver.waitUntilNoTransientCallbacks();
await scuba.diffScreenshot('opportunities');
});
```
This takes a screenshot where the Image is in the tree (I can tell because it changes the alignment of the other elements in the Widget), but has not yet been loaded (it's blank). Very occasionally (<5% of the time), the test will take a screenshot where the image is loaded and visible.
If I add `await new Future.delayed(const Duration(seconds: 1));`, then the Image is always loaded, but this is likely wasting some amount of time and/or prone to flaking. I think that `waitUntilNoTransientCallbacks` is probably supposed to be await'ing whatever asset loading is happening in the background; if not, could we expose some other hook that would enable this?
Thanks! | 1.0 | flutter_driver's waitUntilNoTransientCallbacks doesn't await Image asset loading - I have a screenshot test using flutter_driver where I'm taking a screenshot of a Widget which contains an Image widget which is created via `new Image.asset('myasset.png', key: const Key('a'))`. The test code looks like this:
```
test('Widget with Image', () async {
await driver.waitFor(find.byValueKey('a'));
await driver.waitUntilNoTransientCallbacks();
await scuba.diffScreenshot('opportunities');
});
```
This takes a screenshot where the Image is in the tree (I can tell because it changes the alignment of the other elements in the Widget), but has not yet been loaded (it's blank). Very occasionally (<5% of the time), the test will take a screenshot where the image is loaded and visible.
If I add `await new Future.delayed(const Duration(seconds: 1));`, then the Image is always loaded, but this is likely wasting some amount of time and/or prone to flaking. I think that `waitUntilNoTransientCallbacks` is probably supposed to be await'ing whatever asset loading is happening in the background; if not, could we expose some other hook that would enable this?
Thanks! | test | flutter driver s waituntilnotransientcallbacks doesn t await image asset loading i have a screenshot test using flutter driver where i m taking a screenshot of a widget which contains an image widget which is created via new image asset myasset png key const key a the test code looks like this test widget with image async await driver waitfor find byvaluekey a await driver waituntilnotransientcallbacks await scuba diffscreenshot opportunities this takes a screenshot where the image is in the tree i can tell because it changes the alignment of the other elements in the widget but has not yet been loaded it s blank very occasionally of the time the test will take a screenshot where the image is loaded and visible if i add await new future delayed const duration seconds then the image is always loaded but this is likely wasting some amount of time and or prone to flaking i think that waituntilnotransientcallbacks is probably supposed to be await ing whatever asset loading is happening in the background if not could we expose some other hook that would enable this thanks | 1 |
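The workaround in the report above, a fixed one-second delay, trades test time for flakiness; the usual alternative is to poll the condition with a timeout. The flutter_driver API itself is Dart, so this Python sketch only illustrates the pattern, with an invented stand-in for asset loading:

```python
import time


def wait_until(condition, timeout=1.0, interval=0.01):
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()  # one last check at the deadline


loaded = {"image": False, "count": 0}


def fake_loader():
    # pretend the asset finishes loading on a later poll
    loaded["count"] += 1
    if loaded["count"] >= 3:
        loaded["image"] = True
    return loaded["image"]


result = wait_until(fake_loader, timeout=0.5)
```

Polling returns as soon as the image is ready, so it wastes far less time than a fixed sleep while being robust to slow loads up to the timeout.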
183,144 | 14,212,420,833 | IssuesEvent | 2020-11-17 00:02:19 | GeoscienceAustralia/dea-airflow | https://api.github.com/repos/GeoscienceAustralia/dea-airflow | opened | kubenetes resources bug | bug tests | When we upgrade airflow version, this will be updated.
```
# this is a hack
# as of airflow 1.10.11, setting resources key seems to have a bug
# https://github.com/apache/airflow/issues/9827
# it adds superfluous keys to the specs with nulls as values and k8s does not like that
# it probably was fixed here: https://github.com/apache/airflow/pull/10084
# once we upgrade airflow, this hack will not be need anymore
class PatchedResources(Resources):
def to_k8s_client_obj(self):
result = dict(requests={}, limits={})
if self.request_memory is not None:
result["requests"]["memory"] = self.request_memory
if self.request_cpu is not None:
result["requests"]["cpu"] = self.request_cpu
if self.limit_memory is not None:
result["limits"]["memory"] = self.limit_memory
if self.limit_cpu is not None:
result["limits"]["cpu"] = self.limit_cpu
return k8s.V1ResourceRequirements(**result)
``` | 1.0 | kubenetes resources bug - When we upgrade airflow version, this will be updated.
```
# this is a hack
# as of airflow 1.10.11, setting resources key seems to have a bug
# https://github.com/apache/airflow/issues/9827
# it adds superfluous keys to the specs with nulls as values and k8s does not like that
# it probably was fixed here: https://github.com/apache/airflow/pull/10084
# once we upgrade airflow, this hack will not be need anymore
class PatchedResources(Resources):
def to_k8s_client_obj(self):
result = dict(requests={}, limits={})
if self.request_memory is not None:
result["requests"]["memory"] = self.request_memory
if self.request_cpu is not None:
result["requests"]["cpu"] = self.request_cpu
if self.limit_memory is not None:
result["limits"]["memory"] = self.limit_memory
if self.limit_cpu is not None:
result["limits"]["cpu"] = self.limit_cpu
return k8s.V1ResourceRequirements(**result)
``` | test | kubenetes resources bug when we upgrade airflow version this will be updated this is a hack as of airflow setting resources key seems to have a bug it adds superfluous keys to the specs with nulls as values and does not like that it probably was fixed here once we upgrade airflow this hack will not be need anymore class patchedresources resources def to client obj self result dict requests limits if self request memory is not none result self request memory if self request cpu is not none result self request cpu if self limit memory is not none result self limit memory if self limit cpu is not none result self limit cpu return result | 1 |
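The hack quoted above works by assembling the requests/limits dicts only from non-None fields, so nulls never reach the k8s spec. The core pattern, stripped of the kubernetes client imports (a generic sketch, not the Airflow class itself):

```python
def _drop_none(d):
    """Return a copy of d without the keys whose value is None."""
    return {k: v for k, v in d.items() if v is not None}


def build_resources(request_memory=None, request_cpu=None,
                    limit_memory=None, limit_cpu=None):
    """Assemble resource dicts, omitting any field left unset."""
    return {
        "requests": _drop_none({"memory": request_memory, "cpu": request_cpu}),
        "limits": _drop_none({"memory": limit_memory, "cpu": limit_cpu}),
    }


spec = build_resources(request_memory="512Mi", limit_cpu="1")
```

Because unset fields are simply absent rather than serialized as nulls, the resulting spec avoids the "superfluous keys ... with nulls as values" problem the issue describes.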
8,784 | 12,290,278,078 | IssuesEvent | 2020-05-10 02:50:14 | topcoder-platform/challenge-engine-ui | https://api.github.com/repos/topcoder-platform/challenge-engine-ui | closed | Last saved is not showing in 24hour time zone | May 7 Bug Hunt Not a requirement Rejected User Interface | **Describe the bug**
Last saved is not showing in 24 hour time zone
**To Reproduce / Actual Behavior**
Steps to reproduce the behavior:
1. Go to 'https://challenges.topcoder-dev.com '
2. Login as jcori / appirio123
3. Edit any challenge.
4. View the saved by timezone
**Expected behavior**
Last saved should show in 24hour time zone
**Screenshots or Video**


**Desktop (please complete the following information):**
- OS: [Windows 10]
- Browser [e.g. chrome]
- Version [81]
**Additional context**
Add any other context about the problem here.
| 1.0 | Last saved is not showing in 24hour time zone - **Describe the bug**
Last saved is not showing in 24 hour time zone
**To Reproduce / Actual Behavior**
Steps to reproduce the behavior:
1. Go to 'https://challenges.topcoder-dev.com '
2. Login as jcori / appirio123
3. Edit any challenge.
4. View the saved by timezone
**Expected behavior**
Last saved should show in 24hour time zone
**Screenshots or Video**


**Desktop (please complete the following information):**
- OS: [Windows 10]
- Browser [e.g. chrome]
- Version [81]
**Additional context**
Add any other context about the problem here.
| non_test | last saved is not showing in time zone describe the bug last saved is not showing in hour time zone to reproduce actual behavior steps to reproduce the behavior go to login as jcori edit any challenge view the saved by timezone expected behavior last saved should show in time zone screenshots or video desktop please complete the following information os browser version additional context add any other context about the problem here | 0 |
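The report above asks for the "last saved" stamp in 24-hour form. In strftime terms that is `%H:%M` rather than `%I:%M %p`; a minimal sketch with an illustrative timestamp (the function name is hypothetical, not the app's code):

```python
from datetime import datetime


def format_saved_at(ts: datetime) -> str:
    """Render a save timestamp in 24-hour time."""
    return ts.strftime("%H:%M")          # e.g. 16:05, not 4:05 PM


stamp = datetime(2020, 5, 7, 16, 5)
h24 = format_saved_at(stamp)
h12 = stamp.strftime("%I:%M %p")         # the 12-hour form being reported
```

The single format-code change is usually all such a fix needs, wherever the UI builds the "Saved at" string.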
197,936 | 14,950,348,716 | IssuesEvent | 2021-01-26 12:57:38 | eclipse/openj9 | https://api.github.com/repos/eclipse/openj9 | closed | cmdLineTester_jvmtitests cma001 failing / crashing | blocker comp:vm os:aix segfault test failure | https://ci.eclipse.org/openj9/job/Test_openjdk8_j9_sanity.functional_ppc64_aix_OMR/671/
cmdLineTester_jvmtitests_5
cmdLineTester_jvmtitests_7
cmdLineTester_jvmtitests_8
```
Testing: cma001
Test start time: 2020/10/28 18:16:40 Eastern Standard Time
Running command: "/home/jenkins/workspace/Test_openjdk8_j9_sanity.functional_ppc64_aix_OMR_testList_0/openjdkbinary/j2sdk-image/bin/java" -Xgcpolicy:metronome -Xcompressedrefs -Xdump -XX:ForceClassfileAsIntermediateData -agentlib:jvmtitest=test:ria001,args:V3 -agentlib:jvmtitest=test:rca001,args:V4 -agentlib:jvmtitest=test:cma001 -cp "/home/jenkins/workspace/Test_openjdk8_j9_sanity.functional_ppc64_aix_OMR_testList_0/openjdk-tests/TKG/../../jvmtest/functional/cmdLineTests/jvmtitests/jvmtitest.jar" com.ibm.jvmti.tests.util.TestRunner
Time spent starting: 20 milliseconds
Time spent executing: 9829 milliseconds
Test result: FAILED
Output from test:
>> Success condition was not found: [Return code: 0]
```
from last nightly build
OpenJ9: fbbb2d6
OMR: e2fac34
OpenJDK8: bea7d86
from failing build
OpenJ9: dd76180
OMR: 4d32dfd
OpenJDK: 8425b0a
https://github.com/eclipse/openj9/compare/fbbb2d6...dd76180
Assuming it's not an OMR issue due to the next comment, where the same tests failed in a PR build without the latest OMR.
The OpenJDK changes don't seem related.
https://github.com/ibmruntimes/openj9-openjdk-jdk8/compare/bea7d86...8425b0a
The same tests are crashing on jdk11, similarly to the PR testing in the next comment.
https://ci.eclipse.org/openj9/job/Test_openjdk11_j9_sanity.functional_ppc64_aix_OMR/676
```
[2020-10-28T23:04:21.081Z] [ERR] mainSynchSignalHandler+0x704
[2020-10-28T23:04:21.081Z] [ERR] +0x0
[2020-10-28T23:04:21.081Z] [ERR] Unhandled exception
[2020-10-28T23:04:21.081Z] [ERR] Type=Segmentation error vmState=0x0005ffff
[2020-10-28T23:04:21.081Z] [ERR] J9Generic_Signal_Number=00000018 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000032
[2020-10-28T23:04:21.081Z] [ERR] Handler1=09001000A0F87278 Handler2=09001000A10FC580
[2020-10-28T23:04:21.081Z] [ERR] R0=0000000000000000 R1=0000010021CAA060 R2=08001000A01A6A18 R3=0000010021CAA170
[2020-10-28T23:04:21.081Z] [ERR] R4=0000000000000000 R5=0000002100000040 R6=000000000026259F R7=0000000000000000
[2020-10-28T23:04:21.081Z] [ERR] R8=00000000154F01D7 R9=0000002100000040 R10=8000000000001032 R11=0000000000000000
[2020-10-28T23:04:21.081Z] [ERR] R12=090000000EDA3B68 R13=0000010021CBA800 R14=0000010021CADC10 R15=0000010021CADAF8
[2020-10-28T23:04:21.081Z] [ERR] R16=0000010021CADAFC R17=0000000000000080 R18=0000010021CADB00 R19=0000010021CABF77
[2020-10-28T23:04:21.081Z] [ERR] R20=0000010021CABEF0 R21=0000010021CABEF4 R22=0000010021CAE7F8 R23=09001000A10FC580
[2020-10-28T23:04:21.081Z] [ERR] R24=09001000A10FE350 R25=0000000000000018 R26=0000000000000018 R27=0000010021CAA170
[2020-10-28T23:04:21.081Z] [ERR] R28=0000000000000000 R29=0000000000000000 R30=090000000F779E40 R31=09001000A0005420
[2020-10-28T23:04:21.081Z] [ERR] IAR=090000000EDA3CE4 LR=090000000EDA3B8C MSR=A00000000000D032 CTR=090000000EDA3C80
[2020-10-28T23:04:21.081Z] [ERR] CR=2220428820000001 FPSCR=0000000000000000 XER=2000000100000000
[2020-10-28T23:04:21.081Z] [ERR] FPR0 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR1 c3e0000000000000 (f: 0.000000, d: -9.223372e+18)
[2020-10-28T23:04:21.081Z] [ERR] FPR2 41cdcd6500000000 (f: 0.000000, d: 1.000000e+09)
[2020-10-28T23:04:21.081Z] [ERR] FPR3 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR4 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR5 c3e0000000000000 (f: 0.000000, d: -9.223372e+18)
[2020-10-28T23:04:21.081Z] [ERR] FPR6 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR7 412e848000000000 (f: 0.000000, d: 1.000000e+06)
[2020-10-28T23:04:21.081Z] [ERR] FPR8 3ff0000000000000 (f: 0.000000, d: 1.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR9 4530000000000000 (f: 0.000000, d: 1.934281e+25)
[2020-10-28T23:04:21.081Z] [ERR] FPR10 412e848000000000 (f: 0.000000, d: 1.000000e+06)
[2020-10-28T23:04:21.081Z] [ERR] FPR11 43300000000f4240 (f: 1000000.000000, d: 4.503600e+15)
[2020-10-28T23:04:21.081Z] [ERR] FPR12 4530000000000000 (f: 0.000000, d: 1.934281e+25)
[2020-10-28T23:04:21.081Z] [ERR] FPR13 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR14 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR15 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR16 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR17 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR18 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR19 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR20 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR21 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR22 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR23 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR24 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR25 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR26 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR27 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR28 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR29 0000000000000000 (f: 0.000000, d: 0.000000e+00)
``` | 1.0 | cmdLineTester_jvmtitests cma001 failing / crashing - https://ci.eclipse.org/openj9/job/Test_openjdk8_j9_sanity.functional_ppc64_aix_OMR/671/
cmdLineTester_jvmtitests_5
cmdLineTester_jvmtitests_7
cmdLineTester_jvmtitests_8
```
Testing: cma001
Test start time: 2020/10/28 18:16:40 Eastern Standard Time
Running command: "/home/jenkins/workspace/Test_openjdk8_j9_sanity.functional_ppc64_aix_OMR_testList_0/openjdkbinary/j2sdk-image/bin/java" -Xgcpolicy:metronome -Xcompressedrefs -Xdump -XX:ForceClassfileAsIntermediateData -agentlib:jvmtitest=test:ria001,args:V3 -agentlib:jvmtitest=test:rca001,args:V4 -agentlib:jvmtitest=test:cma001 -cp "/home/jenkins/workspace/Test_openjdk8_j9_sanity.functional_ppc64_aix_OMR_testList_0/openjdk-tests/TKG/../../jvmtest/functional/cmdLineTests/jvmtitests/jvmtitest.jar" com.ibm.jvmti.tests.util.TestRunner
Time spent starting: 20 milliseconds
Time spent executing: 9829 milliseconds
Test result: FAILED
Output from test:
>> Success condition was not found: [Return code: 0]
```
from last nightly build
OpenJ9: fbbb2d6
OMR: e2fac34
OpenJDK8: bea7d86
from failing build
OpenJ9: dd76180
OMR: 4d32dfd
OpenJDK: 8425b0a
https://github.com/eclipse/openj9/compare/fbbb2d6...dd76180
Assuming it's not an OMR issue due to the next comment, where the same tests failed in a PR build without the latest OMR.
The OpenJDK changes don't seem related.
https://github.com/ibmruntimes/openj9-openjdk-jdk8/compare/bea7d86...8425b0a
The same tests are crashing on jdk11, similarly to the PR testing in the next comment.
https://ci.eclipse.org/openj9/job/Test_openjdk11_j9_sanity.functional_ppc64_aix_OMR/676
```
[2020-10-28T23:04:21.081Z] [ERR] mainSynchSignalHandler+0x704
[2020-10-28T23:04:21.081Z] [ERR] +0x0
[2020-10-28T23:04:21.081Z] [ERR] Unhandled exception
[2020-10-28T23:04:21.081Z] [ERR] Type=Segmentation error vmState=0x0005ffff
[2020-10-28T23:04:21.081Z] [ERR] J9Generic_Signal_Number=00000018 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000032
[2020-10-28T23:04:21.081Z] [ERR] Handler1=09001000A0F87278 Handler2=09001000A10FC580
[2020-10-28T23:04:21.081Z] [ERR] R0=0000000000000000 R1=0000010021CAA060 R2=08001000A01A6A18 R3=0000010021CAA170
[2020-10-28T23:04:21.081Z] [ERR] R4=0000000000000000 R5=0000002100000040 R6=000000000026259F R7=0000000000000000
[2020-10-28T23:04:21.081Z] [ERR] R8=00000000154F01D7 R9=0000002100000040 R10=8000000000001032 R11=0000000000000000
[2020-10-28T23:04:21.081Z] [ERR] R12=090000000EDA3B68 R13=0000010021CBA800 R14=0000010021CADC10 R15=0000010021CADAF8
[2020-10-28T23:04:21.081Z] [ERR] R16=0000010021CADAFC R17=0000000000000080 R18=0000010021CADB00 R19=0000010021CABF77
[2020-10-28T23:04:21.081Z] [ERR] R20=0000010021CABEF0 R21=0000010021CABEF4 R22=0000010021CAE7F8 R23=09001000A10FC580
[2020-10-28T23:04:21.081Z] [ERR] R24=09001000A10FE350 R25=0000000000000018 R26=0000000000000018 R27=0000010021CAA170
[2020-10-28T23:04:21.081Z] [ERR] R28=0000000000000000 R29=0000000000000000 R30=090000000F779E40 R31=09001000A0005420
[2020-10-28T23:04:21.081Z] [ERR] IAR=090000000EDA3CE4 LR=090000000EDA3B8C MSR=A00000000000D032 CTR=090000000EDA3C80
[2020-10-28T23:04:21.081Z] [ERR] CR=2220428820000001 FPSCR=0000000000000000 XER=2000000100000000
[2020-10-28T23:04:21.081Z] [ERR] FPR0 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR1 c3e0000000000000 (f: 0.000000, d: -9.223372e+18)
[2020-10-28T23:04:21.081Z] [ERR] FPR2 41cdcd6500000000 (f: 0.000000, d: 1.000000e+09)
[2020-10-28T23:04:21.081Z] [ERR] FPR3 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR4 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR5 c3e0000000000000 (f: 0.000000, d: -9.223372e+18)
[2020-10-28T23:04:21.081Z] [ERR] FPR6 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR7 412e848000000000 (f: 0.000000, d: 1.000000e+06)
[2020-10-28T23:04:21.081Z] [ERR] FPR8 3ff0000000000000 (f: 0.000000, d: 1.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR9 4530000000000000 (f: 0.000000, d: 1.934281e+25)
[2020-10-28T23:04:21.081Z] [ERR] FPR10 412e848000000000 (f: 0.000000, d: 1.000000e+06)
[2020-10-28T23:04:21.081Z] [ERR] FPR11 43300000000f4240 (f: 1000000.000000, d: 4.503600e+15)
[2020-10-28T23:04:21.081Z] [ERR] FPR12 4530000000000000 (f: 0.000000, d: 1.934281e+25)
[2020-10-28T23:04:21.081Z] [ERR] FPR13 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR14 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR15 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR16 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR17 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR18 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR19 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR20 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR21 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR22 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR23 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR24 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR25 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR26 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR27 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR28 0000000000000000 (f: 0.000000, d: 0.000000e+00)
[2020-10-28T23:04:21.081Z] [ERR] FPR29 0000000000000000 (f: 0.000000, d: 0.000000e+00)
``` | test | cmdlinetester jvmtitests failing crashing cmdlinetester jvmtitests cmdlinetester jvmtitests cmdlinetester jvmtitests testing test start time eastern standard time running command home jenkins workspace test sanity functional aix omr testlist openjdkbinary image bin java xgcpolicy metronome xcompressedrefs xdump xx forceclassfileasintermediatedata agentlib jvmtitest test args agentlib jvmtitest test args agentlib jvmtitest test cp home jenkins workspace test sanity functional aix omr testlist openjdk tests tkg jvmtest functional cmdlinetests jvmtitests jvmtitest jar com ibm jvmti tests util testrunner time spent starting milliseconds time spent executing milliseconds test result failed output from test success condition was not found from last nightly build omr from failing build omr openjdk assuming it s not an omr issue due to the next comment where the same tests failed in a pr build without the latest omr the openjdk changes don t seem related the same tests are crashing on similarly to the pr testing in the next comment mainsynchsignalhandler unhandled exception type segmentation error vmstate signal number signal number error value signal code iar lr msr ctr cr fpscr xer f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d | 1 |
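The crash output above dumps registers as `NAME=HEX` pairs (`R0=0000000000000000 R1=...`). When triaging many such logs, a small parser helps compare dumps across failures; this helper is hypothetical, not OpenJ9 tooling, and the sample line is taken from the dump above:

```python
import re


def parse_registers(line: str) -> dict:
    """Extract NAME=HEX pairs from one line of a register dump."""
    return {name: int(value, 16)
            for name, value in re.findall(r"([A-Z]+\d*)=([0-9A-Fa-f]+)", line)}


line = "R0=0000000000000000 R1=0000010021CAA060 R2=08001000A01A6A18"
regs = parse_registers(line)
```

With the values as integers, scripts can diff two crashes' registers or flag null pointers (a register equal to 0) automatically.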
333,327 | 10,120,605,115 | IssuesEvent | 2019-07-31 14:03:27 | Lembas-Modding-Team/pvp-mode | https://api.github.com/repos/Lembas-Modding-Team/pvp-mode | closed | OnPvP compatibility event. | cleanup compatibility medium priority | This would be a new compatibility event which would trigger when a Player attacks another Player.
This event could block/disable that attack from happening. | 1.0 | OnPvP compatibility event. - This would be a new compatibility event which would trigger when a Player attacks another Player.
This event could block/disable that attack from happening. | non_test | onpvp compatibility event this would be a new compatibility event which would trigger when a player attacks another player this event could block disable that attack from happening | 0 |
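An event that "could block/disable that attack" is the classic cancellable-event pattern. The actual mod is Java/Forge-style, so this Python sketch only illustrates the mechanism, with hypothetical names:

```python
class PvpEvent:
    def __init__(self, attacker, target):
        self.attacker, self.target = attacker, target
        self.cancelled = False


def fire(event, handlers):
    """Run each handler; any one of them may cancel the event."""
    for handler in handlers:
        handler(event)
    return not event.cancelled   # True means the attack proceeds


def block_peaceful_targets(event):
    if event.target == "peaceful_player":
        event.cancelled = True


allowed = fire(PvpEvent("attacker", "peaceful_player"), [block_peaceful_targets])
```

The game code fires the event before applying damage and only applies it when `fire` reports the event uncancelled.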
413,080 | 12,060,196,767 | IssuesEvent | 2020-04-15 20:43:41 | googleapis/google-cloud-go | https://api.github.com/repos/googleapis/google-cloud-go | closed | storage: stream error: stream ID 3319; INTERNAL_ERROR | api: storage priority: p2 type: bug | **Client**
golang google cloud storage
**Environment**
Alpine Docker on GKE
`golang:1.12.5-alpine3.9`
**Code**
```
func (w *Worker) handleCheckExists(_ *CheckExist, req *Request) (exists bool, err error) {
newCtx, cancel := context.WithDeadline(req.ctx, time.Now().Add(60 * time.Second))
defer cancel()
obj := w.client.Bucket(req.gsPath.Bucket).Object(req.gsPath.Key)
var reader *storage.Reader
reader, err = obj.NewReader(newCtx)
if err != nil {
if err == storage.ErrObjectNotExist {
exists = false
err = nil
return
}
// else, an actual error
return
}
defer reader.Close()
exists = true
return
}
```
**Expected behavior**
No errors or hanging
**Actual behavior**
After a little while of relatively high load, existence checks hang seemingly indefinitely (without the WithDeadline context that is). Restarting the existence checks can occasionally yields errors like:
```
`Get https://storage.googleapis.com/...: stream error: stream ID 3319; INTERNAL_ERROR`
```
Once requests for a worker starts yield `INTERNAL_ERROR`, it looks like all subsequent requests yield the same error
**Additional context**
This appears to start occurring after fairly high load when checking existence of objects at a high rate.
`go.mod` file:
```
require (
cloud.google.com/go v0.52.0
cloud.google.com/go/bigquery v1.4.0 // indirect
cloud.google.com/go/storage v1.5.0
```
| 1.0 | storage: stream error: stream ID 3319; INTERNAL_ERROR - **Client**
golang google cloud storage
**Environment**
Alpine Docker on GKE
`golang:1.12.5-alpine3.9`
**Code**
```
func (w *Worker) handleCheckExists(_ *CheckExist, req *Request) (exists bool, err error) {
newCtx, cancel := context.WithDeadline(req.ctx, time.Now().Add(60 * time.Second))
defer cancel()
obj := w.client.Bucket(req.gsPath.Bucket).Object(req.gsPath.Key)
var reader *storage.Reader
reader, err = obj.NewReader(newCtx)
if err != nil {
if err == storage.ErrObjectNotExist {
exists = false
err = nil
return
}
// else, an actual error
return
}
defer reader.Close()
exists = true
return
}
```
**Expected behavior**
No errors or hanging
**Actual behavior**
After a little while of relatively high load, existence checks hang seemingly indefinitely (without the `WithDeadline` context, that is). Restarting the existence checks can occasionally yield errors like:
```
`Get https://storage.googleapis.com/...: stream error: stream ID 3319; INTERNAL_ERROR`
```
Once requests for a worker start yielding `INTERNAL_ERROR`, it looks like all subsequent requests yield the same error.
**Additional context**
This appears to start occurring after fairly high load when checking existence of objects at a high rate.
`go.mod` file:
```
require (
	cloud.google.com/go v0.52.0
	cloud.google.com/go/bigquery v1.4.0 // indirect
	cloud.google.com/go/storage v1.5.0
)
```
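The handler above treats the client's not-found error as `exists == false` and propagates everything else. For illustration, the same control flow in a language-agnostic Python sketch; the client, reader, and exception names here are stand-ins, not the real google-cloud-storage API:

```python
class ObjectNotFound(Exception):
    """Stand-in for a storage client's 'object does not exist' error."""

def check_exists(client, bucket, key):
    """Return True if the object is readable, False if it does not exist.

    Any error other than 'not found' propagates to the caller, mirroring
    the Go handler in the report above.
    """
    try:
        reader = client.new_reader(bucket, key)  # hypothetical call
    except ObjectNotFound:
        return False
    reader.close()  # the stream was only opened to probe existence
    return True
```

Note that probing existence by opening a reader briefly holds a data stream; a metadata-style request (object attributes) would avoid that, which may matter at the high request rates described above.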
| non_test | 0
210,411 | 16,099,560,207 | IssuesEvent | 2021-04-27 07:33:40 | SAPDocuments/Issues | https://api.github.com/repos/SAPDocuments/Issues | closed | Expose Integration Flow Endpoint as API and Test the Flow | 2021 High-Prio SCPTest-2104A SCPTest-cloudin SCPTest-trial1 SCPTest-trial2 SCPTest-trial3 | Tutorials: https://developers.sap.com/tutorials/cp-starter-isuite-api-management.html
Step 4: Assign policy template
--------------------------
Here the sentence should be rephrased to "you need the content of the service key **copied in Step 1: Copy credentials from service key.**" Since we no longer create the service key manually, the Step 1: Create service instance and key step is no longer there.

Best Regards,
Priyanka
| test | 1
334,151 | 29,820,636,225 | IssuesEvent | 2023-06-17 02:17:53 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | closed | Fix ndarray.test_numpy_instance_pos__ | NumPy Frontend Sub Task Failing Test | | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5282363223/jobs/9557172333"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5282363223/jobs/9557172333"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5282363223/jobs/9557172333"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5282363223/jobs/9557172333"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5282363223/jobs/9557172333"><img src=https://img.shields.io/badge/-success-success></a>
| test | 1
317,157 | 9,661,499,465 | IssuesEvent | 2019-05-20 18:12:17 | MyMICDS/MyMICDS-v2 | https://api.github.com/repos/MyMICDS/MyMICDS-v2 | opened | Snowday Calculator Actually Broke | effort: medium priority: urgent work length: medium | This one isn't a joke. The snowday calculator website has always had a few problems. Perhaps we need to update an endpoint or move to another site completely?
>**Snowday Calculator Error!** There was a problem querying the Snowday Calculator! Try refreshing the page to fix any problems.
 | 1.0 | Snowday Calculator Actually Broke - This one isn't a joke. The snowday calculator website has always have a few problems. Perhaps we need need update an endpoint or move to another site completely?
>**Snowday Calculator Error!** There was a problem querying the Snowday Calculator! Try refreshing the page to fix any problems.
 | non_test | snowday calculator actually broke this one isn t a joke the snowday calculator website has always have a few problems perhaps we need need update an endpoint or move to another site completely snowday calculator error there was a problem querying the snowday calculator try refreshing the page to fix any problems | 0 |
774,167 | 27,185,345,208 | IssuesEvent | 2023-02-19 05:45:57 | Reyder95/Project-Vultura-3D-Unity | https://api.github.com/repos/Reyder95/Project-Vultura-3D-Unity | closed | List click events should be PointerDown events | bug medium priority ready for development user interface | Click events work when the mouse clicks then releases. But pointer down events work on pointer down. This should be what we use for the entity lists, as well as every other listview (station storage). | 1.0 | List click events should be PointerDown events - Click events work when the mouse clicks then releases. But pointer down events work on pointer down. This should be what we use for the entity lists, as well as every other listview (station storage). | non_test | list click events should be pointerdown events click events work when the mouse clicks then releases but pointer down events work on pointer down this should be what we use for the entity lists as well as every other listview station storage | 0 |
4,942 | 2,764,238,676 | IssuesEvent | 2015-04-29 14:32:51 | mozilla/webmaker-app | https://api.github.com/repos/mozilla/webmaker-app | opened | UX for z-index | design | Tapping an already selected item is how I've expected us to handle shifting elements to the top of z-index stack.
> I'm not convinced about tapping to bring things forward. It doesn't solve things being stuck behind other things. Something we should consider. - @flukeout https://github.com/mozilla/webmaker-app/issues/1524#issuecomment-96036691
One opinionated default we could include: text and buttons should always be in front of images. This might introduce some edge case problems for authors, but it would likely solve more problems than it would cause.
| non_test | 0
180,708 | 13,943,730,210 | IssuesEvent | 2020-10-22 23:56:03 | ddudt/DESC | https://api.github.com/repos/ddudt/DESC | closed | Set up automated testing | Urgent good first issue help wanted testing | We really need at least some basic tests for core functionality that we can automate with pytest and travis. This will also help with detecting issues as jax changes, by running tests against multiple versions of all the dependencies: https://docs.travis-ci.com/user/languages/python/#dependency-management
Here's a basic template: https://docs.python.org/3/library/unittest.html
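For illustration, a minimal test in the unittest style of that template; the function under test here is made up, not part of DESC:

```python
import unittest

def add_scaled(a, b, scale=1.0):
    # Hypothetical helper standing in for real DESC functionality.
    return a + scale * b

class TestAddScaled(unittest.TestCase):
    def test_default_scale(self):
        self.assertEqual(add_scaled(2.0, 3.0), 5.0)

    def test_explicit_scale(self):
        self.assertAlmostEqual(add_scaled(2.0, 3.0, scale=0.5), 3.5)
```

pytest collects unittest-style cases as well, so running `pytest` on such a file works unchanged, and Travis can simply invoke it once per dependency-matrix entry.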
and an example: https://github.com/f0uriest/GASTOp/blob/master/tests/test_fitness.py
| test | 1
62,365 | 7,574,388,609 | IssuesEvent | 2018-04-23 20:49:07 | JonathanMai/Keepers-web-app | https://api.github.com/repos/JonathanMai/Keepers-web-app | closed | BUG: Line chart renders multiple times. | Bug Design | Chart will update multiple times when changing tabs and when first getting inside. | non_test | 0
99,907 | 12,487,446,717 | IssuesEvent | 2020-05-31 09:07:52 | zooniverse/front-end-monorepo | https://api.github.com/repos/zooniverse/front-end-monorepo | opened | Project workflow buttons are too small on phone screens | bug design | **Package**
app-project
**Describe the bug**
Workflow links look too small on the project home page. Compare the yellow link here with the Join In link (to Talk) further down the page.
 | 1.0 | Project workflow buttons are too small on phone screens - **Package**
app-project
**Describe the bug**
Workflow links look too small on the project home page. Compare the yellow link here with the Join In link (to Talk) further down the page.
 | non_test | project workflow buttons are too small on phone screens package app project describe the bug workflow links look too small on the project home page compare the yellow link here with the join in link to talk further down the page | 0 |
341,193 | 30,571,879,241 | IssuesEvent | 2023-07-20 23:22:47 | warriordog/ActivityPubSharp | https://api.github.com/repos/warriordog/ActivityPubSharp | closed | Implement a ResettableLazy type for unit test purposes | type:feature good first issue area:tests | Several test classes construct a Lazy<T> whenever a value under test is set, which requires the callback to be specified at that location. However, the Lazy<T> *also* has to have a default value, which must be set in the constructor. Implement a wrapper around `Lazy<T>` that exposes a `Reset()` method to avoid code duplication. | test | 1
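The wrapper described in the row above, a Lazy<T> analog whose cached value can be cleared between tests, might look like this sketched in Python (illustrative only, not ActivityPubSharp's actual C# API):

```python
class ResettableLazy:
    """Lazily computes a value once, but can be reset to recompute."""

    _UNSET = object()  # sentinel so None is a valid cached value

    def __init__(self, factory):
        self._factory = factory
        self._value = self._UNSET

    @property
    def value(self):
        # Compute on first access, then serve the cached result.
        if self._value is self._UNSET:
            self._value = self._factory()
        return self._value

    def reset(self):
        """Clear the cached value so the factory runs again on next access."""
        self._value = self._UNSET
```

In a test fixture this lets the setup assign the object under test once, call `reset()` between cases, and have the next `.value` access rebuild it, instead of constructing a fresh lazy wrapper at every assignment site.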
652,744 | 21,560,420,635 | IssuesEvent | 2022-05-01 04:16:21 | msoe-vex/WebDashboard | https://api.github.com/repos/msoe-vex/WebDashboard | closed | Random Sudden Deceleration on Path in Web Dashboard | bug high priority Web Team Needs Grooming | When making a Path in the Web Dashboard, occasionally the second to last point in a spline will have a much lower speed than the surrounding points.
REPRODUCE: Make at least 1 new waypoint (at least two splines) and move the second waypoint around until you see occasional red on the spline near the second waypoint. Finesse it until the resting state of the path has the bugged red acceleration point.
| non_test | 0
351,449 | 32,001,011,929 | IssuesEvent | 2023-09-21 12:17:38 | ITU-BDSA23-GROUP21/Chirp | https://api.github.com/repos/ITU-BDSA23-GROUP21/Chirp | closed | Add integration tests | test | As a developer I want to make sure different components of the program can work together by making integration tests
Acceptance criteria:
- Add integration tests that test your CSV database library works as intended.
| test | 1
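For illustration, an integration-style test over a CSV-backed store could look like the following Python sketch; the `CsvStore` here is hypothetical and merely stands in for the project's CSV database library, which is not shown in the issue:

```python
import csv
import os
import tempfile

class CsvStore:
    """Hypothetical minimal CSV-backed store used only for this sketch."""

    def __init__(self, path):
        self.path = path

    def append(self, row):
        with open(self.path, "a", newline="") as f:
            csv.writer(f).writerow(row)

    def read_all(self):
        with open(self.path, newline="") as f:
            return [tuple(r) for r in csv.reader(f)]

def test_roundtrip():
    # Integration test: exercise write + read together through the real file system.
    with tempfile.TemporaryDirectory() as d:
        store = CsvStore(os.path.join(d, "chirps.csv"))
        store.append(("alice", "Hello!", "1695300000"))
        store.append(("bob", "Hi back", "1695300060"))
        assert store.read_all() == [
            ("alice", "Hello!", "1695300000"),
            ("bob", "Hi back", "1695300060"),
        ]
```

The point of the integration level is that the write and read paths are exercised together against a real file, rather than each in isolation with mocks.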
111,925 | 4,494,946,315 | IssuesEvent | 2016-08-31 08:26:13 | Tribler/tribler | https://api.github.com/repos/Tribler/tribler | closed | Error on Linux when running policy tests | blocker bug _Top Priority_ | Stacktrace:
```
File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
File "/home/jenkins/workspace/GH_Tribler_PR_tests_linux/tribler/Tribler/Test/test_as_server.py", line 81, in check
result = fun(*argv, **kwargs)
File "/home/jenkins/workspace/GH_Tribler_PR_tests_linux/tribler/Tribler/Test/Core/CreditMining/test_creditmining.py", line 52, in test_random_policy
self.assertEqual(3, len(ids_start), "Start failed %s vs %s" % (ids_start, torrents_start))
File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
assertion_func(first, second, msg=msg)
File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
raise self.failureException(msg)
"Start failed [4, 2] vs [{'num_seeders': 4, 'creation_date': 4, 'num_leechers': 3, 'metainfo': <Tribler.Test.Core.CreditMining.mock_creditmining.MockMeta object at 0x7fb437f3bf50>}, {'num_seeders': 2, 'creation_date': 2, 'num_leechers': 1, 'metainfo': <Tribler.Test.Core.CreditMining.mock_creditmining.MockMeta object at 0x7fb437f3bf10>}]
```
Reference: https://jenkins.tribler.org/job/GH_Tribler_PR_tests_linux/2281/testReport/junit/Tribler.Test.Core.CreditMining.test_creditmining/TestBoostingManagerPolicies/test_random_policy/
| non_test | 0
22,903 | 3,727,389,438 | IssuesEvent | 2016-03-06 08:05:05 | godfather1103/mentohust | https://api.github.com/repos/godfather1103/mentohust | closed | mentohust does not support the v4 algorithm (不支持v4算法) | auto-migrated Priority-Medium Type-Defect | ```
Many schools have now upgraded Ruijie (锐捷) to version 4.69 or above, which uses the v4 verification algorithm.
mentohust only supports up to v3, so an update is hoped for. Note: HUST (华科) has already fallen.
```
Original issue reported on code.google.com by `ptbs...@gmail.com` on 5 May 2013 at 5:09
| non_test | 0
1,150 | 2,577,704,557 | IssuesEvent | 2015-02-12 18:39:08 | bedops/bedops | https://api.github.com/repos/bedops/bedops | closed | Fix bedops --stagger and --exclude docs and push v2.4.8 tag forwards along master | bug documentation enhancement v2p4p8 | Stagger section header needs short-form option added:
http://bedops.readthedocs.org/en/latest/content/reference/set-operations/bedops.html#stagger-stagger
Exclude section header needs long-form option added:
http://bedops.readthedocs.org/en/latest/content/reference/set-operations/bedops.html#exclude-x
Also, as current doc updates are being pushed to master branch, we need to push v2.4.8 tag forwards:
```
$ git tag -f -a vX.Y.Z -m 'pushed current version tag forwards to latest commit'
...
$ git push -f --tags
...
```
| non_test | 0
113,587 | 9,657,627,615 | IssuesEvent | 2019-05-20 09:02:16 | mozilla-mobile/firefox-ios | https://api.github.com/repos/mozilla-mobile/firefox-ios | opened | [XCUITests] Modify tests to work with the new trailhead onboarding screen | Test Automation :robot: | There may be changes coming with the onboarding screen to align all Firefox products. This issue is for tracking the work related to fixing the tests when needed. | test | 1
289,522 | 24,995,356,796 | IssuesEvent | 2022-11-02 23:16:59 | microsoft/ebpf-for-windows | https://api.github.com/repos/microsoft/ebpf-for-windows | closed | TEST_CASE("ring_buffer_async_query", "[execution_context]") asserts under low memory | bug triaged tests low-memory | ```
29 000000e5`64fece00 00007ff6`ee6f3cf9 unit_tests!Catch::AssertionHandler::complete+0x5a [E:\ebpf-for-windows\external\Catch2\src\catch2\internal\catch_assertion_handler.cpp @ 57]
2a 000000e5`64fece40 00007ff6`ee83fae2 unit_tests!CATCH2_INTERNAL_TEST_26+0x8b9 [E:\ebpf-for-windows\libs\execution_context\unit\execution_context_unit_test.cpp @ 741]
2b 000000e5`64feeb80 00007ff6`ee837d01 unit_tests!Catch::TestInvokerAsFunction::invoke+0x12 [E:\ebpf-for-windows\external\Catch2\src\catch2\internal\catch_test_case_registry_impl.cpp @ 150]
2c 000000e5`64feebb0 00007ff6`ee82fd77 unit_tests!Catch::TestCaseHandle::invoke+0x21 [E:\ebpf-for-windows\external\Catch2\src\catch2\catch_test_case_info.hpp @ 115]
2d 000000e5`64feebe0 00007ff6`ee82fba0 unit_tests!Catch::RunContext::invokeActiveTestCase+0x47 [E:\ebpf-for-windows\external\Catch2\src\catch2\internal\catch_run_context.cpp @ 508]
2e 000000e5`64feec30 00007ff6`ee82dfec unit_tests!Catch::RunContext::runCurrentTest+0x250 [E:\ebpf-for-windows\external\Catch2\src\catch2\internal\catch_run_context.cpp @ 475]
2f 000000e5`64feef80 00007ff6`eeae2457 unit_tests!Catch::RunContext::runTest+0x2bc [E:\ebpf-for-windows\external\Catch2\src\catch2\internal\catch_run_context.cpp @ 239]
30 000000e5`64fef3f0 00007ff6`eeae16c9 unit_tests!Catch::`anonymous namespace'::TestGroup::execute+0xe7 [E:\ebpf-for-windows\external\Catch2\src\catch2\catch_session.cpp @ 110]
31 000000e5`64fef550 00007ff6`eeae1160 unit_tests!Catch::Session::runInternal+0x409 [E:\ebpf-for-windows\external\Catch2\src\catch2\catch_session.cpp @ 335]
32 000000e5`64fef9b0 00007ff6`ee9a2032 unit_tests!Catch::Session::run+0x50 [E:\ebpf-for-windows\external\Catch2\src\catch2\catch_session.cpp @ 263]
33 000000e5`64fef9f0 00007ff6`ee9a1fa1 unit_tests!Catch::Session::run<char>+0x52 [E:\ebpf-for-windows\external\Catch2\src\catch2\catch_session.hpp @ 41]
34 000000e5`64fefa30 00007ff6`ee99efc9 unit_tests!main+0x61 [E:\ebpf-for-windows\external\Catch2\src\catch2\internal\catch_main.cpp @ 36]
35 000000e5`64fefc50 00007ff6`ee99ef1e unit_tests!invoke_main+0x39 [D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl @ 79]
36 000000e5`64fefca0 00007ff6`ee99edde unit_tests!__scrt_common_main_seh+0x12e [D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl @ 288]
37 000000e5`64fefd10 00007ff6`ee99f03e unit_tests!__scrt_common_main+0xe [D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl @ 331]
38 000000e5`64fefd40 00007ff9`a4ed335d unit_tests!mainCRTStartup+0xe [D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_main.cpp @ 17]
39 000000e5`64fefd70 00007ff9`a6777558 kernel32!BaseThreadInitThunk+0x1d
3a 000000e5`64fefda0 00000000`00000000 ntdll!RtlUserThreadStart+0x28
```
| test | 1
159,480 | 6,047,016,354 | IssuesEvent | 2017-06-12 13:34:45 | GoogleCloudPlatform/google-cloud-node | https://api.github.com/repos/GoogleCloudPlatform/google-cloud-node | closed | Google-cloud/speech emitting close event after short period of inactivity | api: speech priority: p2+ status: acknowledged type: question | #### Environment details
- OS: Raspbian Jessie(with Pixel) & Mac OSX 10.11.4 (same issue on both)
- Node.js version: 6.10.0
- npm version: 3.10.10
- google-cloud-node version: 0.7.0 (using only google-speech)
- Electron Version: 1.6.2
#### Steps to reproduce
1. I've written an electron app that streams from the mic input to google speech on a hotword being detected which can be found in this file https://github.com/shekit/electron-voice.
2. Clone repo - https://github.com/shekit/electron-voice
3. `npm install -g electron@1.6.2`
4. `npm install --save nan`
5. `HOME=~/.electron-gyp npm install` in project folder
6. Place google speech `keyfile.json` in project folder
7. `electron main.js` (from project folder)
8. Say 'snowboy' followed by a command
#### The Issue
1. After a small period of inactivity if I speak something, I immediately receive the `close` event from the google detector and it doesn't transcribe anything. If I trigger it again, it transcribes successfully.
2. I only receive this `close` event after a short period of inactivity which I have found to be consistently ~4 minutes, but no error is triggered.
3. I'm explicitly calling `mic.unpipe(googlestream)` and `googlestream.end()` after I receive the final transcribed results. single_utterance is set to true.
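For context on point 2: streaming-recognize sessions are commonly ended server-side after idle or duration limits, so long-running clients usually reopen the stream whenever it closes. A generic reconnect loop, sketched in Python with stand-in names (this is not the Node.js client's API):

```python
import time

def run_with_reconnect(open_stream, max_restarts=5, backoff_s=0.0):
    """Re-open a stream whenever it closes.

    open_stream() blocks until the stream ends and returns 'closed' when
    the server ended the session, or 'done' when work is finished.
    Returns the number of restarts performed.
    """
    restarts = 0
    while True:
        outcome = open_stream()
        if outcome == "done":
            return restarts
        restarts += 1  # server closed the session: reconnect
        if restarts > max_restarts:
            raise RuntimeError("stream kept closing; giving up")
        time.sleep(backoff_s)  # small delay before reopening
```

The assumption here is that a `close` with no `error` is the server retiring the session, in which case the fix on the client side is to treat it as a signal to reconnect rather than as a failure.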
Any idea why this would be happening? This does not seem to be a timeout error as I am listening for error events emitted by the google stream and no errors are being emitted.
Thanks!
| 1.0 | Google-cloud/speech emitting close event after short period of inactivity - #### Environment details
- OS: Raspbian Jessie(with Pixel) & Mac OSX 10.11.4 (same issue on both)
- Node.js version: 6.10.0
- npm version: 3.10.10
- google-cloud-node version: 0.7.0 (using only google-speech)
- Electron Version: 1.6.2
#### Steps to reproduce
1. I've written an electron app that streams from the mic input to google speech on a hotword being detected which can be found in this file https://github.com/shekit/electron-voice.
2. Clone repo - https://github.com/shekit/electron-voice
3. `npm install -g electron@1.6.2`
4. `npm install --save nan`
5. `HOME=~/.electron-gyp npm install` in project folder
6. Place google speech `keyfile.json` in project folder
7. `electron main.js` (from project folder)
8. Say 'snowboy' followed by a command
#### The Issue
1. After a small period of inactivity, if I speak something, I immediately receive the `close` event from the google detector and it doesn't transcribe anything. If I trigger it again, it transcribes successfully.
2. I only receive this `close` event after a short period of inactivity which I have found to be consistently ~4 minutes, but no error is triggered.
3. I'm explicitly calling `mic.unpipe(googlestream)` and `googlestream.end()` after I receive the final transcribed results. single_utterance is set to true.
Any idea why this would be happening? This does not seem to be a timeout error as I am listening for error events emitted by the google stream and no errors are being emitted.
Thanks!
| non_test | google cloud speech emitting close event after short period of inactivity environment details os raspbian jessie with pixel mac osx same issue on both node js version npm version google cloud node version using only google speech electron version steps to reproduce i ve written an electron app that streams from the mic input to google speech on a hotword being detected which can be found in this file clone repo npm install g electron npm install save nan home electron gyp npm install in project folder place google speech keyfile json in project folder electron main js from project folder say snowboy followed by a command the issue after a small period of inactivity if i speak something i immediately receive the close event from the google detector and it doesn t transcribe anything if i trigger it again it transcribes successfully i only receive this close event after a short period of inactivity which i have found to be consistently minutes but no error is triggered i m explicitly calling mic unpipe googlestream and googlestream end after i receive the final transcribed results single utterance is set to true any idea why this would be happening this does not seem to be a timeout error as i am listening for error events emitted by the google stream and no errors are being emitted thanks | 0 |
148,184 | 11,841,365,354 | IssuesEvent | 2020-03-23 20:39:07 | wtbarnes/fiasco | https://api.github.com/repos/wtbarnes/fiasco | closed | Caching the test database | testing | As far as I can tell, running the tests attempts to download and build a smaller version of the database in a temporary folder. If it could detect an already-present installation and use that instead it would save having to do this every time local tests are run. | 1.0 | Caching the test database - As far as I can tell, running the tests attempts to download and build a smaller version of the database in a temporary folder. If it could detect an already-present installation and use that instead it would save having to do this every time local tests are run. | test | caching the test database as far as i can tell running the tests attempts to download and build a smaller version of the database in a temporary folder if it could detect an already present installation and use that instead it would save having to do this every time local tests are run | 1
77,138 | 7,566,893,242 | IssuesEvent | 2018-04-22 02:26:29 | fga-gpp-mds/2018.1-Dulce_App | https://api.github.com/repos/fga-gpp-mds/2018.1-Dulce_App | closed | Visualizar setores | 0-MDS 1-Produto 2-Teste 3-História de Usuário 5-Iniciante | Story:
As a **people manager**, I want to **view the list of sectors in my hospital** in order to **have access to all of the hospital's sectors**
Acceptance criteria:
- A list with all the sectors of the people manager's hospital;
- Clicking the issue navigates to the sector's list of doctors.
| 1.0 | Visualizar setores - Story:
As a **people manager**, I want to **view the list of sectors in my hospital** in order to **have access to all of the hospital's sectors**
Acceptance criteria:
- A list with all the sectors of the people manager's hospital;
- Clicking the issue navigates to the sector's list of doctors.
| test | visualizar setores story as a people manager i want to view the list of sectors in my hospital in order to have access to all of the hospital s sectors acceptance criteria a list with all the sectors of the people manager s hospital clicking the issue navigates to the sector s list of doctors | 1
157,629 | 13,710,127,446 | IssuesEvent | 2020-10-02 00:05:58 | Xithrius/SimplePasswordGenerator | https://api.github.com/repos/Xithrius/SimplePasswordGenerator | closed | The README needs to not be blank. | documentation enhancement good first issue | There's literally nothing there besides the title.
Inclusion of screenshots and how this thing actually works would be much appreciated. This project is pretty simple. | 1.0 | The README needs to not be blank. - There's literally nothing there besides the title.
Inclusion of screenshots and how this thing actually works would be much appreciated. This project is pretty simple. | non_test | the readme needs to not be blank there s literally nothing there besides the title inclusion of screenshots and how this thing actually works would be much appreciated this project is pretty simple | 0
203,624 | 15,886,059,674 | IssuesEvent | 2021-04-09 21:46:10 | carlosfruitcup/rhythm | https://api.github.com/repos/carlosfruitcup/rhythm | opened | Read before making a issue. | documentation question wontfix | Hey,
Since the project is open source and the code isn't very hard to understand,
this will be the first and last update this "game" will get.
Thank you and bye. | 1.0 | Read before making a issue. - Hey,
Since the project is open source and the code isn't very hard to understand,
this will be the first and last update this "game" will get.
Thank you and bye. | non_test | read before making a issue hey since the project is open source and the code isn t very hard to understand this will be the first and last update this game will get thank you and bye | 0 |
62,181 | 6,778,705,146 | IssuesEvent | 2017-10-28 14:21:14 | openbmc/openbmc-test-automation | https://api.github.com/repos/openbmc/openbmc-test-automation | opened | BMC reset extra validation post reset | bug Test | Check BMC SSH connection post reset.
This is to catch intermittent SSH hang issues. | 1.0 | BMC reset extra validation post reset - Check BMC SSH connection post reset.
This is to catch intermittent SSH hang issues. | test | bmc reset extra validation post reset check bmc ssh connection post reset this is to catch intermittent ssh hang issues | 1
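A minimal sketch of the kind of post-reset SSH reachability check this record asks for (pure stdlib; the host, port, and timeouts are placeholders, and the actual suite would express this with its existing Robot Framework keywords):

```python
import socket
import time


def wait_for_ssh(host, port=22, timeout=120, interval=5):
    """Poll until `host` accepts a TCP connection on the SSH port.

    Returns True as soon as a connection succeeds, False if the
    deadline passes without one (i.e. SSH hung or never came back).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection both resolves and connects; the `with`
            # block closes the probe socket immediately on success.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

A post-reset test would call `wait_for_ssh(bmc_host)` right after triggering the reset and fail the case if it returns False.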
304,107 | 9,321,269,311 | IssuesEvent | 2019-03-27 03:05:52 | evscott/Rambl | https://api.github.com/repos/evscott/Rambl | closed | Current trip information component | Medium priority | - What you should be doing right now / next
- Highlights (high priority)
- Notes
- Stats regarding the trip | 1.0 | Current trip information component - - What you should be doing right now / next
- Highlights (high priority)
- Notes
- Stats regarding the trip | non_test | current trip information component what you should be doing right now next highlights high priority notes stats regarding the trip | 0 |
151,916 | 19,668,169,097 | IssuesEvent | 2022-01-11 02:15:31 | ChoeMinji/opencv-4.5.2 | https://api.github.com/repos/ChoeMinji/opencv-4.5.2 | opened | CVE-2021-3479 (Medium) detected in opencv4.5.5 | security vulnerability | ## CVE-2021-3479 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opencv4.5.5</b></p></summary>
<p>
<p>Open Source Computer Vision Library</p>
<p>Library home page: <a href=https://github.com/opencv/opencv.git>https://github.com/opencv/opencv.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/opencv-4.5.2/commit/4d0323c7903ad5ab0565cfb99c5ff2b4ea4f1c53">4d0323c7903ad5ab0565cfb99c5ff2b4ea4f1c53</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/3rdparty/openexr/IlmImf/ImfInputFile.cpp</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/3rdparty/openexr/IlmImf/ImfInputFile.cpp</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
There's a flaw in OpenEXR's Scanline API functionality in versions before 3.0.0-beta. An attacker who is able to submit a crafted file to be processed by OpenEXR could trigger excessive consumption of memory, resulting in an impact to system availability.
<p>Publish Date: 2021-03-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3479>CVE-2021-3479</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/AcademySoftwareFoundation/openexr/releases/tag/v2.5.4">https://github.com/AcademySoftwareFoundation/openexr/releases/tag/v2.5.4</a></p>
<p>Release Date: 2021-03-31</p>
<p>Fix Resolution: v2.5.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-3479 (Medium) detected in opencv4.5.5 - ## CVE-2021-3479 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opencv4.5.5</b></p></summary>
<p>
<p>Open Source Computer Vision Library</p>
<p>Library home page: <a href=https://github.com/opencv/opencv.git>https://github.com/opencv/opencv.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/opencv-4.5.2/commit/4d0323c7903ad5ab0565cfb99c5ff2b4ea4f1c53">4d0323c7903ad5ab0565cfb99c5ff2b4ea4f1c53</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/3rdparty/openexr/IlmImf/ImfInputFile.cpp</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/3rdparty/openexr/IlmImf/ImfInputFile.cpp</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
There's a flaw in OpenEXR's Scanline API functionality in versions before 3.0.0-beta. An attacker who is able to submit a crafted file to be processed by OpenEXR could trigger excessive consumption of memory, resulting in an impact to system availability.
<p>Publish Date: 2021-03-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3479>CVE-2021-3479</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/AcademySoftwareFoundation/openexr/releases/tag/v2.5.4">https://github.com/AcademySoftwareFoundation/openexr/releases/tag/v2.5.4</a></p>
<p>Release Date: 2021-03-31</p>
<p>Fix Resolution: v2.5.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in cve medium severity vulnerability vulnerable library open source computer vision library library home page a href found in head commit a href found in base branch master vulnerable source files openexr ilmimf imfinputfile cpp openexr ilmimf imfinputfile cpp vulnerability details there s a flaw in openexr s scanline api functionality in versions before beta an attacker who is able to submit a crafted file to be processed by openexr could trigger excessive consumption of memory resulting in an impact to system availability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
76,558 | 7,539,782,401 | IssuesEvent | 2018-04-17 02:31:44 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | Cleanup testStartPaused in the observatory test launcher. | dev: tests prod: engine | Currently, there is inefficient polling and insufficient/absent error checking when [attempting to launch](https://github.com/flutter/engine/blob/37e5df053024c1158b436ff0cd843116c25bdf14/shell/testing/observatory/test.dart) the `flutter_tester`. | 1.0 | Cleanup testStartPaused in the observatory test launcher. - Currently, there is inefficient polling and insufficient/absent error checking when [attempting to launch](https://github.com/flutter/engine/blob/37e5df053024c1158b436ff0cd843116c25bdf14/shell/testing/observatory/test.dart) the `flutter_tester`. | test | cleanup teststartpaused in the observatory test launcher currently there is inefficient polling and insufficient absent error checking when the flutter tester | 1 |
306,175 | 26,442,395,963 | IssuesEvent | 2023-01-16 02:27:05 | vmware-tanzu/velero | https://api.github.com/repos/vmware-tanzu/velero | closed | Better to have customized IAM policy setting environment for Velero nightly | Need E2E Test Case | **Describe the problem/challenge you have**
[A description of the current limitation/problem/challenge that you are experiencing.]
Met an AWS environment issue with Velero v1.9.4.
AWS credential is created with limited IAM permission, but enough for Velero to work.
Restic backup failed during `restic init` with the error `client.BucketExists: Access Denied`.
It was introduced because the Velero v1.9.4 upgrade moved the integrated Restic from v0.13.1 to v0.14.0.
This [issue](https://github.com/restic/restic/issues/4085) was created to track it on the Restic side.
And the [original issue](https://github.com/restic/restic/issues/1477) describes the scenario better.
**Describe the solution you'd like**
[A clear and concise description of what you want to happen.]
It would be better if Velero had some tests with a restricted IAM permission scenario, so we can find defects earlier.
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
**Environment:**
- Velero version (use `velero version`): v1.9.4
- Kubernetes version (use `kubectl version`): v1.23
- Kubernetes installer & version: vSphere
- Cloud provider or hardware configuration: vSphere
- OS (e.g. from `/etc/os-release`): Ubuntu
**Vote on this issue!**
This is an invitation to the Velero community to vote on issues, you can see the project's [top voted issues listed here](https://github.com/vmware-tanzu/velero/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).
Use the "reaction smiley face" up to the right of this comment to vote.
- :+1: for "The project would be better with this feature added"
- :-1: for "This feature will not enhance the project in a meaningful way"
| 1.0 | Better to have customized IAM policy setting environment for Velero nightly - **Describe the problem/challenge you have**
[A description of the current limitation/problem/challenge that you are experiencing.]
Met an AWS environment issue with Velero v1.9.4.
AWS credential is created with limited IAM permission, but enough for Velero to work.
Restic backup failed during `restic init` with the error `client.BucketExists: Access Denied`.
It was introduced because the Velero v1.9.4 upgrade moved the integrated Restic from v0.13.1 to v0.14.0.
This [issue](https://github.com/restic/restic/issues/4085) was created to track it on the Restic side.
And the [original issue](https://github.com/restic/restic/issues/1477) describes the scenario better.
**Describe the solution you'd like**
[A clear and concise description of what you want to happen.]
It would be better if Velero had some tests with a restricted IAM permission scenario, so we can find defects earlier.
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
**Environment:**
- Velero version (use `velero version`): v1.9.4
- Kubernetes version (use `kubectl version`): v1.23
- Kubernetes installer & version: vSphere
- Cloud provider or hardware configuration: vSphere
- OS (e.g. from `/etc/os-release`): Ubuntu
**Vote on this issue!**
This is an invitation to the Velero community to vote on issues, you can see the project's [top voted issues listed here](https://github.com/vmware-tanzu/velero/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).
Use the "reaction smiley face" up to the right of this comment to vote.
- :+1: for "The project would be better with this feature added"
- :-1: for "This feature will not enhance the project in a meaningful way"
| test | better to have customized iam policy setting environment for velero nightly describe the problem challenge you have met a aws environment issue with velero aws credential is created with limited iam permission but enough for velero to work restic backup failed by restic init with error client bucketexists access denied it is introduced due to velero upgrade integrated restic from to this is created to trace on restic and the describe the scenario better describe the solution you d like it s better that velero can have some tests with resticted iam permission scenario so we can find defect earlier anything else you would like to add environment velero version use velero version kubernetes version use kubectl version kubernetes installer version vsphere cloud provider or hardware configuration vsphere os e g from etc os release ubuntu vote on this issue this is an invitation to the velero community to vote on issues you can see the project s use the reaction smiley face up to the right of this comment to vote for the project would be better with this feature added for this feature will not enhance the project in a meaningful way | 1 |
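One way to exercise the restricted-permission scenario described in this record is to run the suite with a credential scoped to a minimal S3 policy. The sketch below is illustrative only: `my-velero-bucket` is a placeholder, and the exact action list should be taken from the Velero AWS plugin documentation (note that restic v0.14's `BucketExists` check issues an S3 `HeadBucket` call, which AWS authorizes via `s3:ListBucket`, so omitting that action reproduces the `Access Denied` failure above):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "arn:aws:s3:::my-velero-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-velero-bucket"
    }
  ]
}
```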
316,907 | 23,654,260,067 | IssuesEvent | 2022-08-26 09:39:43 | paulithu/Website_2.0. | https://api.github.com/repos/paulithu/Website_2.0. | closed | Abschlusspräsentation | documentation | Create the final presentation for the presentation appointment.
Divide up the slides/texts
Rehearse the slides and texts | 1.0 | Abschlusspräsentation - Create the final presentation for the presentation appointment.
Divide up the slides/texts
Rehearse the slides and texts | non_test | abschlusspräsentation create the final presentation for the presentation appointment divide up the slides texts rehearse the slides and texts | 0
122,838 | 10,238,675,608 | IssuesEvent | 2019-08-19 16:23:42 | pandas-dev/pandas | https://api.github.com/repos/pandas-dev/pandas | closed | read_csv c engine accepts binary mode data and python engine rejects it | Compat IO CSV Testing good first issue | #### Code Sample
```python
import pandas as pd
if __name__ == "__main__":
with open('test.csv', 'w') as f:
f.write('1,2,3\n4,5,6')
with open('test.csv', 'rt') as f:
pd.read_csv(f, header=None)
with open('test.csv', 'rb') as f:
pd.read_csv(f, header=None)
with open('test.csv', 'rt') as f:
pd.read_csv(f, header=None, engine='python')
with open('test.csv', 'rb') as f:
pd.read_csv(f, header=None, engine='python')
```
#### Problem description
The second read_csv call (using the C engine and a file opened in binary mode) will correctly read the csv. The fourth read_csv call (using the Python engine and a file opened in binary mode) will throw an exception stating it needs to be in text mode:
```
pandas.errors.ParserError: iterator should return strings, not bytes (did you open the file in text mode?)
```
Perhaps this is intended behavior, but I found this difference in behavior between the engines surprising, as well as that binary mode was accepted at all.
#### Expected Output
Either the C engine rejecting binary mode files or the Python engine accepting them.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.6.final.0
python-bits: 64
OS: Linux
OS-release: 4.15.0-39-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.23.4
pytest: None
pip: 10.0.1
setuptools: 39.1.0
Cython: None
numpy: 1.15.4
scipy: None
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.7.5
pytz: 2018.7
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| 1.0 | read_csv c engine accepts binary mode data and python engine rejects it - #### Code Sample
```python
import pandas as pd
if __name__ == "__main__":
with open('test.csv', 'w') as f:
f.write('1,2,3\n4,5,6')
with open('test.csv', 'rt') as f:
pd.read_csv(f, header=None)
with open('test.csv', 'rb') as f:
pd.read_csv(f, header=None)
with open('test.csv', 'rt') as f:
pd.read_csv(f, header=None, engine='python')
with open('test.csv', 'rb') as f:
pd.read_csv(f, header=None, engine='python')
```
#### Problem description
The second read_csv call (using the C engine and a file opened in binary mode) will correctly read the csv. The fourth read_csv call (using the Python engine and a file opened in binary mode) will throw an exception stating it needs to be in text mode:
```
pandas.errors.ParserError: iterator should return strings, not bytes (did you open the file in text mode?)
```
Perhaps this is intended behavior, but I found this difference in behavior between the engines surprising, as well as that binary mode was accepted at all.
#### Expected Output
Either the C engine rejecting binary mode files or the Python engine accepting them.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.6.final.0
python-bits: 64
OS: Linux
OS-release: 4.15.0-39-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.23.4
pytest: None
pip: 10.0.1
setuptools: 39.1.0
Cython: None
numpy: 1.15.4
scipy: None
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.7.5
pytz: 2018.7
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| test | read csv c engine accepts binary mode data and python engine rejects it code sample python import pandas as pd if name main with open test csv w as f f write with open test csv rt as f pd read csv f header none with open test csv rb as f pd read csv f header none with open test csv rt as f pd read csv f header none engine python with open test csv rb as f pd read csv f header none engine python problem description the second read csv call using the c engine and a file opened in binary mode will correctly read the csv the fourth read csv call using the python engine and a file opened in binary mode will throw an exception stating it needs to be in text mode pandas errors parsererror iterator should return strings not bytes did you open the file in text mode perhaps this is intended behavior but i found this difference in behavior between the engines surprising as well as that binary mode was accepted at all expected output either the c engine rejecting binary mode files or the python engine accepting them output of pd show versions installed versions commit none python final python bits os linux os release generic machine processor byteorder little lc all none lang en us utf locale en us utf pandas pytest none pip setuptools cython none numpy scipy none pyarrow none xarray none ipython none sphinx none patsy none dateutil pytz blosc none bottleneck none tables none numexpr none feather none matplotlib none openpyxl none xlrd none xlwt none xlsxwriter none lxml none none none sqlalchemy none pymysql none none none none fastparquet none pandas gbq none pandas datareader none | 1 |
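The engine difference in this record traces back to the standard library: pandas' python engine parses via `csv.reader`-style iteration, and a file opened in binary mode yields `bytes` lines rather than `str`. A stdlib-only sketch of that underlying distinction (not the pandas code itself, and independent of the pandas version):

```python
import csv
import io

data = "1,2,3\n4,5,6"

# Text-mode iterator yields str lines, so csv.reader parses normally.
rows = list(csv.reader(io.StringIO(data)))
assert rows == [["1", "2", "3"], ["4", "5", "6"]]

# A binary-mode iterator yields bytes lines, which csv.reader rejects
# with a "text mode" complaint -- the same root cause pandas' python
# engine surfaces in the ParserError quoted above.
try:
    list(csv.reader(io.BytesIO(data.encode())))
    message = ""
except csv.Error as exc:
    message = str(exc)

assert "text mode" in message
```

The C engine reads raw bytes itself, which is why it accepts both modes while the python engine does not.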
67,548 | 14,879,946,690 | IssuesEvent | 2021-01-20 08:29:58 | loggly/node-loggly-bulk | https://api.github.com/repos/loggly/node-loggly-bulk | opened | CVE-2016-10540 (High) detected in minimatch-2.0.10.tgz | security vulnerability | ## CVE-2016-10540 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimatch-2.0.10.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz">https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz</a></p>
<p>Path to dependency file: node-loggly-bulk/package.json</p>
<p>Path to vulnerable library: node-loggly-bulk/node_modules/babel-core/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- common-style-3.1.0.tgz (Root Library)
- jscs-2.11.0.tgz
- babel-jscs-2.0.5.tgz
- babel-core-5.8.38.tgz
- :x: **minimatch-2.0.10.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/loggly/node-loggly-bulk/commits/cfd27fcc7d0cb76d62455da360cf0f9247ff6758">cfd27fcc7d0cb76d62455da360cf0f9247ff6758</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.
<p>Publish Date: 2018-05-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10540>CVE-2016-10540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/118">https://nodesecurity.io/advisories/118</a></p>
<p>Release Date: 2016-06-20</p>
<p>Fix Resolution: Update to version 3.0.2 or later.</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"minimatch","packageVersion":"2.0.10","isTransitiveDependency":true,"dependencyTree":"common-style:3.1.0;jscs:2.11.0;babel-jscs:2.0.5;babel-core:5.8.38;minimatch:2.0.10","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2016-10540","vulnerabilityDetails":"Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10540","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2016-10540 (High) detected in minimatch-2.0.10.tgz - ## CVE-2016-10540 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimatch-2.0.10.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz">https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz</a></p>
<p>Path to dependency file: node-loggly-bulk/package.json</p>
<p>Path to vulnerable library: node-loggly-bulk/node_modules/babel-core/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- common-style-3.1.0.tgz (Root Library)
- jscs-2.11.0.tgz
- babel-jscs-2.0.5.tgz
- babel-core-5.8.38.tgz
- :x: **minimatch-2.0.10.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/loggly/node-loggly-bulk/commits/cfd27fcc7d0cb76d62455da360cf0f9247ff6758">cfd27fcc7d0cb76d62455da360cf0f9247ff6758</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.
<p>Publish Date: 2018-05-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10540>CVE-2016-10540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/118">https://nodesecurity.io/advisories/118</a></p>
<p>Release Date: 2016-06-20</p>
<p>Fix Resolution: Update to version 3.0.2 or later.</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"minimatch","packageVersion":"2.0.10","isTransitiveDependency":true,"dependencyTree":"common-style:3.1.0;jscs:2.11.0;babel-jscs:2.0.5;babel-core:5.8.38;minimatch:2.0.10","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2016-10540","vulnerabilityDetails":"Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10540","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_test | cve high detected in minimatch tgz cve high severity vulnerability vulnerable library minimatch tgz a glob matcher in javascript library home page a href path to dependency file node loggly bulk package json path to vulnerable library node loggly bulk node modules babel core node modules minimatch package json dependency hierarchy common style tgz root library jscs tgz babel jscs tgz babel core tgz x minimatch tgz vulnerable library found in head commit a href found in base branch master vulnerability details minimatch is a minimal matching utility that works by converting glob expressions into javascript regexp objects the primary function minimatch path pattern in minimatch and earlier is vulnerable to redos in the pattern parameter publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high 
for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution update to version or later isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails minimatch is a minimal matching utility that works by converting glob expressions into javascript regexp objects the primary function minimatch path pattern in minimatch and earlier is vulnerable to redos in the pattern parameter vulnerabilityurl | 0 |
40,817 | 10,168,215,486 | IssuesEvent | 2019-08-07 20:14:14 | USDepartmentofLabor/OCIO-DOLSafety-iOS | https://api.github.com/repos/USDepartmentofLabor/OCIO-DOLSafety-iOS | closed | Functional - Resources Screen - Address Punctuation Needs Fixes | Fixed defect | For the second line of the address, a comma is needed after “Washington” along with a period after in the “D.C” or remove the first one.
Please see the attached screenshot.

| 1.0 | Functional - Resources Screen - Address Punctuation Needs Fixes - For the second line of the address, a comma is needed after “Washington” along with a period after in the “D.C” or remove the first one.
Please see the attached screenshot.

| non_test | functional resources screen address punctuation needs fixes for the second line of the address a comma is needed after “washington” along with a period after in the “d c” or remove the first one please see the attached screenshot | 0 |
71,401 | 15,195,018,674 | IssuesEvent | 2021-02-16 05:20:53 | githuballpractice/gameoflife | https://api.github.com/repos/githuballpractice/gameoflife | opened | CVE-2013-7285 (High) detected in xstream-1.3.1.jar | security vulnerability | ## CVE-2013-7285 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.3.1.jar</b></p></summary>
<p>null</p>
<p>Path to vulnerable library: gameoflife/gameoflife-web/tools/jmeter/lib/xstream-1.3.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **xstream-1.3.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/githuballpractice/gameoflife/commit/7e890217b1aed93b9f13d3b1fec3443e91d74be9">7e890217b1aed93b9f13d3b1fec3443e91d74be9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Xstream API versions up to 1.4.6 and version 1.4.10, if the security framework has not been initialized, may allow a remote attacker to run arbitrary shell commands by manipulating the processed input stream when unmarshaling XML or any supported format. e.g. JSON.
<p>Publish Date: 2019-05-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-7285>CVE-2013-7285</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-7285">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-7285</a></p>
<p>Release Date: 2019-05-15</p>
<p>Fix Resolution: 1.4.7,1.4.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2013-7285 (High) detected in xstream-1.3.1.jar - ## CVE-2013-7285 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.3.1.jar</b></p></summary>
<p>null</p>
<p>Path to vulnerable library: gameoflife/gameoflife-web/tools/jmeter/lib/xstream-1.3.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **xstream-1.3.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/githuballpractice/gameoflife/commit/7e890217b1aed93b9f13d3b1fec3443e91d74be9">7e890217b1aed93b9f13d3b1fec3443e91d74be9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Xstream API versions up to 1.4.6 and version 1.4.10, if the security framework has not been initialized, may allow a remote attacker to run arbitrary shell commands by manipulating the processed input stream when unmarshaling XML or any supported format. e.g. JSON.
<p>Publish Date: 2019-05-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-7285>CVE-2013-7285</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-7285">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-7285</a></p>
<p>Release Date: 2019-05-15</p>
<p>Fix Resolution: 1.4.7,1.4.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in xstream jar cve high severity vulnerability vulnerable library xstream jar null path to vulnerable library gameoflife gameoflife web tools jmeter lib xstream jar dependency hierarchy x xstream jar vulnerable library found in head commit a href found in base branch master vulnerability details xstream api versions up to and version if the security framework has not been initialized may allow a remote attacker to run arbitrary shell commands by manipulating the processed input stream when unmarshaling xml or any supported format e g json publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
42,097 | 12,876,631,464 | IssuesEvent | 2020-07-11 06:11:15 | whatwg/html | https://api.github.com/repos/whatwg/html | closed | Consolidate performing a security check in the spec with implementations | security/privacy | https://html.spec.whatwg.org/multipage/browsers.html#integration-with-idl
> If IsPlatformObjectSameOrigin\(platformObject\) is false, then throw a "SecurityError" DOMException\.
Blink currently doesn't implement this step in all cases, while Gecko is more strict by also applying the check to non-platform objects (correct me if I'm wrong @bzbarsky)
We should somehow figure out what to do here. | True | Consolidate performing a security check in the spec with implementations - https://html.spec.whatwg.org/multipage/browsers.html#integration-with-idl
> If IsPlatformObjectSameOrigin\(platformObject\) is false, then throw a "SecurityError" DOMException\.
Blink currently doesn't implement this step in all cases, while Gecko is more strict by also applying the check to non-platform objects (correct me if I'm wrong @bzbarsky)
We should somehow figure out what to do here. | non_test | consolidate performing a security check in the spec with implementations if isplatformobjectsameorigin platformobject is false then throw a securityerror domexception blink currently doesn t implement this step in all cases while gecko is more strict by also applying the check to non platform objects correct me if i m wrong bzbarsky we should somehow figure out what to do here | 0 |
93,081 | 8,393,579,078 | IssuesEvent | 2018-10-09 20:58:04 | Orderella/PopupDialog | https://api.github.com/repos/Orderella/PopupDialog | closed | [Carthage] Dependency "PopupDialog" has no shared framework schemes | Included in next release ready for testing | - Xcode version 10.0
- PopupDialog version 0.9.0
- Minimum deployment target (9.0):
- Language Swift:
- In case of Swift - Version 4.2
- Dependency manager carthage
- Version 0.31.0
Skipped building PopupDialog due to the error:
Dependency "PopupDialog" has no shared framework schemes | 1.0 | [Carthage] Dependency "PopupDialog" has no shared framework schemes - - Xcode version 10.0
- PopupDialog version 0.9.0
- Minimum deployment target (9.0):
- Language Swift:
- In case of Swift - Version 4.2
- Dependency manager carthage
- Version 0.31.0
Skipped building PopupDialog due to the error:
Dependency "PopupDialog" has no shared framework schemes | test | dependency popupdialog has no shared framework schemes xcode version popupdialog version minimum deployment target language swift in case of swift version dependency manager carthage version skipped building popupdialog due to the error dependency popupdialog has no shared framework schemes | 1 |
75,857 | 9,888,967,676 | IssuesEvent | 2019-06-25 12:50:08 | containous/traefik | https://api.github.com/repos/containous/traefik | closed | Documentation not clear on using Let's Encrypt with Kubernetes | area/documentation area/provider/k8s/ingress kind/enhancement priority/P3 | The documentation does not say if Let's Encrypt is supported with Kubernetes.
There is only a statement "Let's Encrypt certificates cannot be managed in Kubernetes Secrets yet."
If Let's Encrypt with k8s is supported there should exist documentation and usage examples. | 1.0 | Documentation not clear on using Let's Encrypt with Kubernetes - The documentation does not say if Let's Encrypt is supported with Kubernetes.
There is only a statement "Let's Encrypt certificates cannot be managed in Kubernetes Secrets yet."
If Let's Encrypt with k8s is supported there should exist doumentation and usage examples. | non_test | documentation not clear on using let s encrypt with kubernetes the documentation does not say if let s encrypt is supported with kubernetes there is only a statement let s encrypt certificates cannot be managed in kubernets secrets yet if let s encrypt with is supported there should exist doumentation and usage examples | 0 |
224,218 | 7,467,812,365 | IssuesEvent | 2018-04-02 16:42:28 | enforcer574/smashclub | https://api.github.com/repos/enforcer574/smashclub | opened | "Delete Event" function not working | Complexity: Medium Priority: 2 - High Type: Issue | The button to delete events from the "manage events" screen doesn't appear to work. | 1.0 | "Delete Event" function not working - The button to delete events from the "manage events" screen doesn't appear to work. | non_test | delete event function not working the button to delete events from the manage events screen doesn t appear to work | 0 |
202,917 | 15,307,486,637 | IssuesEvent | 2021-02-24 20:58:37 | LD4P/qa_server | https://api.github.com/repos/LD4P/qa_server | closed | New Indexing: LOCGENRES - changes in accuracy tests | authority tests cache Indexing | The new indexing scheme has the following impact on LOCGENRES accuracy tests:
All tests are passing except one, which was failing before the indexing change. This issue is to explore and document why that one test continues to fail.

| 1.0 | New Indexing: LOCGENRES - changes in accuracy tests - The new indexing scheme has the following impact on LOCGENRES accuracy tests:
All tests are passing except one, which was failing before the indexing change. This issue is to explore and document why that one test continues to fail.

| test | new indexing locgenres changes in accuracy tests the new indexing scheme has the following impact on locgenres accuracy tests all tests are passing except one which was failing before the indexing change this issue is to explore and document why that one test continues to fail | 1 |
409,218 | 27,727,043,334 | IssuesEvent | 2023-03-15 03:41:34 | minispooner/red-team-playground | https://api.github.com/repos/minispooner/red-team-playground | opened | We need defensive monitoring | documentation enhancement help wanted | To simulate a real-world corporate network, we need defensive capabilities (monitoring, AV, EDR, alerts, etc...). Like an automated SOC that fires off alerts so that we Red Teamers can see those alerts and learn to evade, etc. We need a good open source solution that preferably can be output to a dashboard (or we'd build one) that the red team operator can watch in real time as they hack to see if they get caught. | 1.0 | We need defensive monitoring - To simulate a real-world corporate network, we need defensive capabilities (monitoring, AV, EDR, alerts, etc...). Like an automated SOC that fires off alerts so that we Red Teamers can see those alerts and learn to evade, etc. We need a good open source solution that preferably can be output to a dashboard (or we'd build one) that the red team operator can watch in real time as they hack to see if they get caught. | non_test | we need defensive monitoring to simulate a real world corporate network we need defensive capabilities monitoring av edr alerts etc like an automated soc that fires off alerts so that we red teamers can see those alerts and learn to evade etc we need a good open source solution that preferably can be output to a dashboard or we d build one that the red team operator can watch in real time as they hack to see if they get caught | 0 |
213,626 | 16,528,231,599 | IssuesEvent | 2021-05-26 23:58:16 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | tests: kernel: timer: Test timeout_abs from tests/kernel/timer/timer_api hangs causing test scenarios to fail | area: Tests bug platform: nRF priority: low | **Describe the bug**
The test timeout_abs from kernel.timer.tickless and kernel.timer scenarios at tests/kernel/timer/timer_api hangs on nrf5340dk_nrf5340_cpuappns causing these scenarios to fail (test sleep_abs is not executed)
**To Reproduce**
Steps to reproduce the behavior:
1. have nrf5340dk connected
2. go to your zephyr dir
3. run `./scripts/twister -T tests/kernel/timer/timer_api/ -p nrf5340dk_nrf5340_cpuappns --device-testing --device-serial /dev/ttyACM2 --jobs 1 -v -v --inline-logs`
4. See error
**Expected behavior**
Test passes
**Impact**
Not clear
**Logs and console output**
```
...
START - test_timer_remaining
PASS - test_timer_remaining in 0.51 seconds
===================================================================
START - test_timeout_abs
```
and hangs. Then is terminated with a timeout
**Environment (please complete the following information):**
- OS: Ubuntu 18.04
- Toolchain zephyr sdk 0.12.2
- Commit SHA or Version used zephyr-v2.5.0-1463-gc59cf
**Additional context**
This is a different issue than #32839. There the test fails due to an assertion error. Here it hangs. | 1.0 | tests: kernel: timer: Test timeout_abs from tests/kernel/timer/timer_api hangs causing test scenarios to fail - **Describe the bug**
The test timeout_abs from kernel.timer.tickless and kernel.timer scenarios at tests/kernel/timer/timer_api hangs on nrf5340dk_nrf5340_cpuappns causing these scenarios to fail (test sleep_abs is not executed)
**To Reproduce**
Steps to reproduce the behavior:
1. have nrf5340dk connected
2. go to your zephyr dir
3. run `./scripts/twister -T tests/kernel/timer/timer_api/ -p nrf5340dk_nrf5340_cpuappns --device-testing --device-serial /dev/ttyACM2 --jobs 1 -v -v --inline-logs`
4. See error
**Expected behavior**
Test passes
**Impact**
Not clear
**Logs and console output**
```
...
START - test_timer_remaining
PASS - test_timer_remaining in 0.51 seconds
===================================================================
START - test_timeout_abs
```
and hangs. Then is terminated with a timeout
**Environment (please complete the following information):**
- OS: Ubuntu 18.04
- Toolchain zephyr sdk 0.12.2
- Commit SHA or Version used zephyr-v2.5.0-1463-gc59cf
**Additional context**
This is a different issue than #32839. There the test fails due to an assertion error. Here it hangs. | test | tests kernel timer test timeout abs from tests kernel timer timer api hangs causing test scenarios to fail describe the bug the test timeout abs from kernel timer tickless and kernel timer scenarios at tests kernel timer timer api hangs on cpuappns causing this scenarios to fail test sleep abs is not executed to reproduce steps to reproduce the behavior have connected go to your zephyr dir run scripts twister t tests kernel timer timer api p cpuappns device testing device serial dev jobs v v inline logs see error expected behavior test passes impact not clear logs and console output start test timer remaining pass test timer remaining in seconds start test timeout abs and hangs then is terminated with a timeout environment please complete the following information os ubuntu toolchain zephyr sdk commit sha or version used zephyr additional context this is a different issue than there the test fails due to an assertion error here it hangs | 1 |
135,055 | 10,961,123,420 | IssuesEvent | 2019-11-27 14:50:39 | aces/Loris | https://api.github.com/repos/aces/Loris | closed | [My Preferences] Email validation rules differ between user account and my preferences | 22.0.0 TESTING Bug PR sent | I was able to create a user with an email address set to 'nicolasbrossard.mni(test)@gmail.com'. This address is valid in the user account module but not on the my preferences page. | 1.0 | [My Preferences] Email validation rules differ between user account and my preferences - I was able to create a user with an email address set to 'nicolasbrossard.mni(test)@gmail.com'. This address is valid in the user account module but not on the my preferences page. | test | email validation rules differ between user account and my preferences i was able to create a user with an email address set to nicolasbrossard mni test gmail com this address is valid in the user account module but not on the my preferences page | 1 |
19,425 | 5,872,815,824 | IssuesEvent | 2017-05-15 12:36:20 | rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | don't make intra-crate calls to exported functions go through the PLT or similar | A-codegen | Apologies for the mouthful of an issue title. Here's some example Rust for the issue at hand:
``` rust
pub struct Ex {
d: u32
}
pub fn call(x: &Ex)
{
call_with_flags(x, 0)
}
pub fn call_with_flags(x: &Ex, flags: u32)
{
println!("d: {}, flags: {}", x.d, flags)
}
```
Compiling this with `rustc -O -C inline-threshold=0` results in the following on x86-64 Linux (the use of `-C inline-threshold=0` is a bit artificial, but I've seen the same sort of issue when disassembling `std`, which presumably doesn't modify `inline-threshold`):
```
0000000000000000 <plt::call::h4fbe4633d0f48375>:
0: 31 f6 xor %esi,%esi
2: e9 00 00 00 00 jmpq 7 <plt::call::h4fbe4633d0f48375+0x7>
3: R_X86_64_PLT32 plt::call_with_flags::he191eb6ceb0f1f21-0x4
```
That `R_X86_64_PLT32` relocation is going to get resolved to a call into the PLT, which introduces a small amount of overhead on every call, in addition to taking up unneeded space with the PLT entry and the function pointer in the GOT. It would be better to use a `R_X86_64_PC32` relocation there, which will get turned into a direct jump at link time. `glibc` uses this technique to great effect so that all intra-libc calls (except for a few things like `malloc`, etc.) don't go through the PLT.
Folks might want the ability to override public functions of a crate via `LD_PRELOAD` or similar, but doing so seems a little tricky with the current name mangling scheme. Perhaps a `-C` option could be added?
| 1.0 | don't make intra-crate calls to exported functions go through the PLT or similar - Apologies for the mouthful of an issue title. Here's some example Rust for the issue at hand:
``` rust
pub struct Ex {
d: u32
}
pub fn call(x: &Ex)
{
call_with_flags(x, 0)
}
pub fn call_with_flags(x: &Ex, flags: u32)
{
println!("d: {}, flags: {}", x.d, flags)
}
```
Compiling this with `rustc -O -C inline-threshold=0` results in the following on x86-64 Linux (the use of `-C inline-threshold=0` is a bit artificial, but I've seen the same sort of issue when disassembling `std`, which presumably doesn't modify `inline-threshold`):
```
0000000000000000 <plt::call::h4fbe4633d0f48375>:
0: 31 f6 xor %esi,%esi
2: e9 00 00 00 00 jmpq 7 <plt::call::h4fbe4633d0f48375+0x7>
3: R_X86_64_PLT32 plt::call_with_flags::he191eb6ceb0f1f21-0x4
```
That `R_X86_64_PLT32` relocation is going to get resolved to a call into the PLT, which introduces a small amount of overhead on every call, in addition to taking up unneeded space with the PLT entry and the function pointer in the GOT. It would be better to use a `R_X86_64_PC32` relocation there, which will get turned into a direct jump at link time. `glibc` uses this technique to great effect so that all intra-libc calls (except for a few things like `malloc`, etc.) don't go through the PLT.
Folks might want the ability to override public functions of a crate via `LD_PRELOAD` or similar, but doing so seems a little tricky with the current name mangling scheme. Perhaps a `-C` option could be added?
| non_test | don t make intra crate calls to exported functions go through the plt or similar apologies for the mouthful of an issue title here s some example rust for the issue at hand rust pub struct ex d pub fn call x ex call with flags x pub fn call with flags x ex flags println d flags x d flags compiling this with rustc o c inline threshold results in the following on linux the use of c inline threshold is a bit artificial but i ve seen the same sort of issue when disassembling std which presumably doesn t modify inline threshold xor esi esi jmpq r plt call with flags that r relocation is going to get resolved to a call into the plt which introduces a small amount of overhead on every call in addition to taking up unneeded space with the plt entry and the function pointer in the got it would be better to use a r relocation there which will get turned into a direct jump at link time glibc uses this technique to great effect so that all intra libc calls except for a few things like malloc etc don t go through the plt folks might want the ability to override public functions of a crate via ld preload or similar but doing so seems a little tricky with the current name mangling scheme perhaps a c option could be added | 0 |
278,216 | 24,134,326,393 | IssuesEvent | 2022-09-21 09:59:04 | pingcap/tiflow | https://api.github.com/repos/pingcap/tiflow | closed | data generator in feed demo returns error causes e2e test failure | component/test type/enhancement area/engine | ### Which jobs are flaking?
engine_ghpr_integration_test
### Which test(s) are flaking?
e2e_test.go: TestSubmitTest
### Jenkins logs or GitHub Actions link
https://ci2.pingcap.net/blue/organizations/jenkins/engine_ghpr_integration_test/detail/engine_ghpr_integration_test/591/pipeline
detail logs:
[log-e2e_basic-dm_full_mode.tar.gz](https://github.com/pingcap/tiflow/files/9583326/log-e2e_basic-dm_full_mode.tar.gz)
```go
[2022-09-16T09:40:27.115Z] === RUN TestSubmitTest
[2022-09-16T09:40:27.115Z] connect demo 127.0.0.1:1234
[2022-09-16T09:40:29.008Z] e2e_test.go:91:
[2022-09-16T09:40:29.008Z] Error Trace: /home/jenkins/agent/workspace/engine_ghpr_integration_test/go/src/github.com/pingcap/tiflow/engine/test/e2e/e2e_test.go:91
[2022-09-16T09:40:29.008Z] Error: Expected nil, but got: &status.Error{s:(*status.Status)(0xc000b321e8)}
[2022-09-16T09:40:29.008Z] Test: TestSubmitTest
[2022-09-16T09:40:29.008Z] --- FAIL: TestSubmitTest (5.00s)
[2022-09-16T09:40:29.008Z] FAIL
[2022-09-16T09:40:29.008Z] FAIL github.com/pingcap/tiflow/engine/test/e2e 5.110s
[2022-09-16T09:40:29.008Z] FAIL
```
### Anything else we need to know
- Does this test exist for other branches as well?
- Has there been a high frequency of failure lately? | 1.0 | data generator in feed demo returns error causes e2e test failure - ### Which jobs are flaking?
engine_ghpr_integration_test
### Which test(s) are flaking?
e2e_test.go: TestSubmitTest
### Jenkins logs or GitHub Actions link
https://ci2.pingcap.net/blue/organizations/jenkins/engine_ghpr_integration_test/detail/engine_ghpr_integration_test/591/pipeline
detail logs:
[log-e2e_basic-dm_full_mode.tar.gz](https://github.com/pingcap/tiflow/files/9583326/log-e2e_basic-dm_full_mode.tar.gz)
```go
[2022-09-16T09:40:27.115Z] === RUN TestSubmitTest
[2022-09-16T09:40:27.115Z] connect demo 127.0.0.1:1234
[2022-09-16T09:40:29.008Z] e2e_test.go:91:
[2022-09-16T09:40:29.008Z] Error Trace: /home/jenkins/agent/workspace/engine_ghpr_integration_test/go/src/github.com/pingcap/tiflow/engine/test/e2e/e2e_test.go:91
[2022-09-16T09:40:29.008Z] Error: Expected nil, but got: &status.Error{s:(*status.Status)(0xc000b321e8)}
[2022-09-16T09:40:29.008Z] Test: TestSubmitTest
[2022-09-16T09:40:29.008Z] --- FAIL: TestSubmitTest (5.00s)
[2022-09-16T09:40:29.008Z] FAIL
[2022-09-16T09:40:29.008Z] FAIL github.com/pingcap/tiflow/engine/test/e2e 5.110s
[2022-09-16T09:40:29.008Z] FAIL
```
### Anything else we need to know
- Does this test exist for other branches as well?
- Has there been a high frequency of failure lately? | test | data generator in feed demo returns error causes test failure which jobs are flaking engine ghpr integration test which test s are flaking test go testsubmittest jenkins logs or github actions link detail logs go run testsubmittest connect demo test go error trace home jenkins agent workspace engine ghpr integration test go src github com pingcap tiflow engine test test go error expected nil but got status error s status status test testsubmittest fail testsubmittest fail fail github com pingcap tiflow engine test fail anything else we need to know does this test exist for other branches as well has there been a high frequency of failure lately | 1 |
340,557 | 30,526,196,409 | IssuesEvent | 2023-07-19 11:26:57 | ESMValGroup/ESMValCore | https://api.github.com/repos/ESMValGroup/ESMValCore | closed | New `conda-lock=2.1.0` fails to build lockfile due to old conda/mamba | testing | Lock file generation [fails](https://github.com/ESMValGroup/ESMValCore/actions/runs/5515733196/jobs/10067110192) with the latest version of conda-lock. ~I will pin it to 2.0 since that worked well, while I figure out what's happening.~
This is due to an old conda 23.1.0 and mamba 1.4.2 | 1.0 | New `conda-lock=2.1.0` fails to build lockfile due to old conda/mamba - Lock file generation [fails](https://github.com/ESMValGroup/ESMValCore/actions/runs/5515733196/jobs/10067110192) with the latest version of conda-lock. ~I will pin it to 2.0 since that worked well, while I figure out what's happening.~
This is due to an old conda 23.1.0 and mamba 1.4.2 | test | new conda lock fails to build lockfile due to old conda mamba lock file generation with the latest version of conda lock i will pin it to since that worked well while i figure out what s happening this is due to an old conda and mamba | 1 |
273,599 | 23,769,238,069 | IssuesEvent | 2022-09-01 15:01:06 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: Jest Tests.x-pack/plugins/threat_intelligence/public/modules/indicators/components/indicators_table/hooks - useColumnSettings() initial state when initial state is not persisted into plugin storage service should return correct value | failed-test Team:Threat Hunting | A test failed on a tracked branch
```
Error: expect(received).toMatchInlineSnapshot(snapshot)
Snapshot name: `useColumnSettings() initial state when initial state is not persisted into plugin storage service should return correct value 1`
- Snapshot - 1
+ Received + 1
@@ -3,11 +3,11 @@
"displayAsText": "@timestamp",
"id": "@timestamp",
},
Object {
"displayAsText": "Indicator",
- "id": "display_name",
+ "id": "threat.indicator.name",
},
Object {
"displayAsText": "Indicator type",
"id": "threat.indicator.type",
},
at Object.<anonymous> (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/x-pack/plugins/threat_intelligence/public/modules/indicators/components/indicators_table/hooks/use_column_settings.test.ts:23:40)
at Promise.then.completed (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/utils.js:276:28)
at new Promise (<anonymous>)
at callAsyncCircusFn (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/utils.js:216:10)
at _callCircusTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:212:40)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at _runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:149:3)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:63:9)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:57:9)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:57:9)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:57:9)
at run (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:25:3)
at runAndTransformResultsToJestFormat (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:176:21)
at jestAdapter (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:109:19)
at runTestInternal (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:380:16)
at runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:472:34)
at Object.worker (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/testWorker.js:133:12)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/20447#0182f93e-c4ae-461d-b3d5-149ab05d0565)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Tests.x-pack/plugins/threat_intelligence/public/modules/indicators/components/indicators_table/hooks","test.name":"useColumnSettings() initial state when initial state is not persisted into plugin storage service should return correct value","test.failCount":6}} --> | 1.0 | Failing test: Jest Tests.x-pack/plugins/threat_intelligence/public/modules/indicators/components/indicators_table/hooks - useColumnSettings() initial state when initial state is not persisted into plugin storage service should return correct value - A test failed on a tracked branch
```
Error: expect(received).toMatchInlineSnapshot(snapshot)
Snapshot name: `useColumnSettings() initial state when initial state is not persisted into plugin storage service should return correct value 1`
- Snapshot - 1
+ Received + 1
@@ -3,11 +3,11 @@
"displayAsText": "@timestamp",
"id": "@timestamp",
},
Object {
"displayAsText": "Indicator",
- "id": "display_name",
+ "id": "threat.indicator.name",
},
Object {
"displayAsText": "Indicator type",
"id": "threat.indicator.type",
},
at Object.<anonymous> (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/x-pack/plugins/threat_intelligence/public/modules/indicators/components/indicators_table/hooks/use_column_settings.test.ts:23:40)
at Promise.then.completed (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/utils.js:276:28)
at new Promise (<anonymous>)
at callAsyncCircusFn (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/utils.js:216:10)
at _callCircusTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:212:40)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at _runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:149:3)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:63:9)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:57:9)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:57:9)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:57:9)
at run (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:25:3)
at runAndTransformResultsToJestFormat (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:176:21)
at jestAdapter (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:109:19)
at runTestInternal (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:380:16)
at runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:472:34)
at Object.worker (/var/lib/buildkite-agent/builds/kb-n2-4-spot-73da4e1e3afd50ff/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/testWorker.js:133:12)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/20447#0182f93e-c4ae-461d-b3d5-149ab05d0565)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Tests.x-pack/plugins/threat_intelligence/public/modules/indicators/components/indicators_table/hooks","test.name":"useColumnSettings() initial state when initial state is not persisted into plugin storage service should return correct value","test.failCount":6}} --> | test | failing test jest tests x pack plugins threat intelligence public modules indicators components indicators table hooks usecolumnsettings initial state when initial state is not persisted into plugin storage service should return correct value a test failed on a tracked branch error expect received tomatchinlinesnapshot snapshot snapshot name usecolumnsettings initial state when initial state is not persisted into plugin storage service should return correct value snapshot received displayastext timestamp id timestamp object displayastext indicator id display name id threat indicator name object displayastext indicator type id threat indicator type at object var lib buildkite agent builds kb spot elastic kibana on merge kibana x pack plugins threat intelligence public modules indicators components indicators table hooks use column settings test ts at promise then completed var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build utils js at new promise at callasynccircusfn var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build utils js at callcircustest var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at processticksandrejections node internal process task queues at runtest var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runtestsfordescribeblock var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runtestsfordescribeblock var lib buildkite agent builds kb spot elastic kibana 
on merge kibana node modules jest circus build run js at runtestsfordescribeblock var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runtestsfordescribeblock var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at run var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runandtransformresultstojestformat var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build legacy code todo rewrite jestadapterinit js at jestadapter var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build legacy code todo rewrite jestadapter js at runtestinternal var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runner build runtest js at runtest var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runner build runtest js at object worker var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runner build testworker js first failure | 1 |
2,371 | 2,596,527,666 | IssuesEvent | 2015-02-20 21:20:16 | kripken/emscripten | https://api.github.com/repos/kripken/emscripten | closed | Running tests leaks temp files and directories. | tests | I added the Windows bot to report the number of items in the temp folder before and after the test run.
Looking at the recent runs of win-emcc-incoming-tests at http://clb.demon.fi:8112/waterfall shows the temp folder is still growing. From the three most recent Windows runs that all succeeded all the tests (so I think there shouldn't be a leak caused by an exception):
Note: Before starting test run, temp folder G:\tmp contains this many files: 582
-- run tests --
Note: After test run finished, temp folder G:\tmp contains this many files: 859
Note: Before starting test run, temp folder G:\tmp contains this many files: 1136
-- run tests --
Note: After test run finished, temp folder G:\tmp contains this many files: 1413
Note: Before starting test run, temp folder G:\tmp contains this many files: 1413
-- run tests --
Note: After test run finished, temp folder G:\tmp contains this many files: 1690
At each run, the delta is exactly 277 items. The difference between 859 and 1136 is when the bot did a run on the master branch, and that delta is too exactly 277 items.
The current contents of the temp folder looks like this:
https://dl.dropbox.com/u/40949268/emcc/bugs/temp_directory_files.txt
| 1.0 | Running tests leaks temp files and directories. - I added the Windows bot to report the number of items in the temp folder before and after the test run.
Looking at the recent runs of win-emcc-incoming-tests at http://clb.demon.fi:8112/waterfall shows the temp folder is still growing. From the three most recent Windows runs that all succeeded all the tests (so I think there shouldn't be a leak caused by an exception):
Note: Before starting test run, temp folder G:\tmp contains this many files: 582
-- run tests --
Note: After test run finished, temp folder G:\tmp contains this many files: 859
Note: Before starting test run, temp folder G:\tmp contains this many files: 1136
-- run tests --
Note: After test run finished, temp folder G:\tmp contains this many files: 1413
Note: Before starting test run, temp folder G:\tmp contains this many files: 1413
-- run tests --
Note: After test run finished, temp folder G:\tmp contains this many files: 1690
At each run, the delta is exactly 277 items. The difference between 859 and 1136 is when the bot did a run on the master branch, and that delta is too exactly 277 items.
The current contents of the temp folder looks like this:
https://dl.dropbox.com/u/40949268/emcc/bugs/temp_directory_files.txt
| test | running tests leaks temp files and directories i added the windows bot to report the number of items in the temp folder before and after the test run looking at the recent runs of win emcc incoming tests at shows the temp folder is still growing from the three most recent windows runs that all succeeded all the tests so i think there shouldn t be a leak caused by an exception note before starting test run temp folder g tmp contains this many files run tests note after test run finished temp folder g tmp contains this many files note before starting test run temp folder g tmp contains this many files run tests note after test run finished temp folder g tmp contains this many files note before starting test run temp folder g tmp contains this many files run tests note after test run finished temp folder g tmp contains this many files at each run the delta is exactly items the difference between and is when the bot did a run on the master branch and that delta is too exactly items the current contents of the temp folder looks like this | 1 |
278,636 | 24,165,049,591 | IssuesEvent | 2022-09-22 14:27:41 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: Jest Integration Tests.src/core/server/integration_tests/saved_objects/service/lib - 404s from proxies requests when a proxy returns Not Found with an incorrect product header returns an EsUnavailable error on `resolve` requests with a 404 proxy response and wrong product header for an exact match | Team:Core failed-test | A test failed on a tracked branch
```
Error: Unable to read snapshot manifest: Internal Server Error
<?xml version='1.0' encoding='UTF-8'?><Error><Code>InternalError</Code><Message>We encountered an internal error. Please try again.</Message><Details>AFfi+BH/B7G+eZssnBoxPcpES47wUGb4uG1PgxYx6G7J4N3LUV7NMYEMgk7tvX0VKGN/1H8BiXdaH5ztuyc/LnnVr2Be9nKBC1Dzjn7BLvUIgnFag6GIRVRHaKcfzp5oIPXrDKBn/H+W</Details></Error>
at getArtifactSpecForSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/es/target_node/src/artifact.js:124:11)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Function.getSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/es/target_node/src/artifact.js:160:26)
at downloadSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/es/target_node/src/install/install_snapshot.js:45:20)
at installSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/es/target_node/src/install/install_snapshot.js:72:7)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/es/target_node/src/cluster.js:159:11
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/tooling-log/target_node/src/tooling_log.js:75:18
at Cluster.installSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/es/target_node/src/cluster.js:156:12)
at TestCluster.start (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/test/target_node/src/es/test_es_cluster.js:109:24)
at startES (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/src/core/test_helpers/kbn_server.ts:248:7)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/src/core/server/integration_tests/saved_objects/service/lib/repository_with_proxy.test.ts:80:16
at _callCircusHook (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:175:5)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:45:5)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:57:9)
at run (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:25:3)
at runAndTransformResultsToJestFormat (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:176:21)
at jestAdapter (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:109:19)
at runTestInternal (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:380:16)
at runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:472:34)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/21257#018364fd-1dbe-424d-a6eb-0782e30d150c)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Integration Tests.src/core/server/integration_tests/saved_objects/service/lib","test.name":"404s from proxies requests when a proxy returns Not Found with an incorrect product header returns an EsUnavailable error on `resolve` requests with a 404 proxy response and wrong product header for an exact match","test.failCount":1}} --> | 1.0 | Failing test: Jest Integration Tests.src/core/server/integration_tests/saved_objects/service/lib - 404s from proxies requests when a proxy returns Not Found with an incorrect product header returns an EsUnavailable error on `resolve` requests with a 404 proxy response and wrong product header for an exact match - A test failed on a tracked branch
```
Error: Unable to read snapshot manifest: Internal Server Error
<?xml version='1.0' encoding='UTF-8'?><Error><Code>InternalError</Code><Message>We encountered an internal error. Please try again.</Message><Details>AFfi+BH/B7G+eZssnBoxPcpES47wUGb4uG1PgxYx6G7J4N3LUV7NMYEMgk7tvX0VKGN/1H8BiXdaH5ztuyc/LnnVr2Be9nKBC1Dzjn7BLvUIgnFag6GIRVRHaKcfzp5oIPXrDKBn/H+W</Details></Error>
at getArtifactSpecForSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/es/target_node/src/artifact.js:124:11)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Function.getSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/es/target_node/src/artifact.js:160:26)
at downloadSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/es/target_node/src/install/install_snapshot.js:45:20)
at installSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/es/target_node/src/install/install_snapshot.js:72:7)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/es/target_node/src/cluster.js:159:11
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/tooling-log/target_node/src/tooling_log.js:75:18
at Cluster.installSnapshot (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/es/target_node/src/cluster.js:156:12)
at TestCluster.start (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/@kbn/test/target_node/src/es/test_es_cluster.js:109:24)
at startES (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/src/core/test_helpers/kbn_server.ts:248:7)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/src/core/server/integration_tests/saved_objects/service/lib/repository_with_proxy.test.ts:80:16
at _callCircusHook (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:175:5)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:45:5)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:57:9)
at run (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:25:3)
at runAndTransformResultsToJestFormat (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:176:21)
at jestAdapter (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:109:19)
at runTestInternal (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:380:16)
at runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-4db4b9efbb58ae6a/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:472:34)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/21257#018364fd-1dbe-424d-a6eb-0782e30d150c)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Integration Tests.src/core/server/integration_tests/saved_objects/service/lib","test.name":"404s from proxies requests when a proxy returns Not Found with an incorrect product header returns an EsUnavailable error on `resolve` requests with a 404 proxy response and wrong product header for an exact match","test.failCount":1}} --> | test | failing test jest integration tests src core server integration tests saved objects service lib from proxies requests when a proxy returns not found with an incorrect product header returns an esunavailable error on resolve requests with a proxy response and wrong product header for an exact match a test failed on a tracked branch error unable to read snapshot manifest internal server error internalerror we encountered an internal error please try again affi bh h w at getartifactspecforsnapshot var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules kbn es target node src artifact js at processticksandrejections node internal process task queues at function getsnapshot var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules kbn es target node src artifact js at downloadsnapshot var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules kbn es target node src install install snapshot js at installsnapshot var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules kbn es target node src install install snapshot js at var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules kbn es target node src cluster js at var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules kbn tooling log target node src tooling log js at cluster installsnapshot var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules kbn es target node src cluster js at testcluster start var lib buildkite agent builds kb spot elastic kibana 
on merge kibana node modules kbn test target node src es test es cluster js at startes var lib buildkite agent builds kb spot elastic kibana on merge kibana src core test helpers kbn server ts at var lib buildkite agent builds kb spot elastic kibana on merge kibana src core server integration tests saved objects service lib repository with proxy test ts at callcircushook var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runtestsfordescribeblock var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runtestsfordescribeblock var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at run var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runandtransformresultstojestformat var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build legacy code todo rewrite jestadapterinit js at jestadapter var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build legacy code todo rewrite jestadapter js at runtestinternal var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runner build runtest js at runtest var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runner build runtest js first failure | 1 |
184,448 | 6,713,339,941 | IssuesEvent | 2017-10-13 13:07:45 | envistaInteractive/itagroup-ecommerce-template | https://api.github.com/repos/envistaInteractive/itagroup-ecommerce-template | opened | Checkout: tax | High Priority JavaScript Page Layout | If {{websiteSettings.tax.taxExempt}} is false then the website will need to collect tax information during the checkout process.
Currently the tax information form is on a separate template: checkout/tax.liquid, but we will move it to be in shipping.liquid and use DOM manipulation to change the breadcrumb to tax information breadcrumb once it is in that state (which happens after a user has entered all shipping and payment info and pressed next).
In the case where there is tax, after the shipping and payments information is entered, instead of pressing the submit button the user will click next. Once tax information has been entered, the submit order button will appear and then the order will be placed and user will be sent to the checkout/confirmation page.
endpoint: `/api/profile/addresses`
payload:
```
{
"contactInfo":{
"name":"Jason Futch"
},
"line1":"660 Grand Reserve Dr",
"city":"Suwanee",
"state":"GA",
"zip":"30024",
"country":"USA",
"defaultShipping":true,
"defaultBilling":true
}
``` | 1.0 | Checkout: tax - If {{websiteSettings.tax.taxExempt}} is false then the website will need to collect tax information during the checkout process.
Currently the tax information form is on a separate template: checkout/tax.liquid, but we will move it to be in shipping.liquid and use DOM manipulation to change the breadcrumb to tax information breadcrumb once it is in that state (which happens after a user has entered all shipping and payment info and pressed next).
In the case where there is tax, after the shipping and payments information is entered, instead of pressing the submit button the user will click next. Once tax information has been entered, the submit order button will appear and then the order will be placed and user will be sent to the checkout/confirmation page.
endpoint: `/api/profile/addresses`
payload:
```
{
"contactInfo":{
"name":"Jason Futch"
},
"line1":"660 Grand Reserve Dr",
"city":"Suwanee",
"state":"GA",
"zip":"30024",
"country":"USA",
"defaultShipping":true,
"defaultBilling":true
}
``` | non_test | checkout tax if websitesettings tax taxexempt is false then the website will need to collect tax information during the checkout process currently the tax information form is on a separate template checkout tax liquid but we will move it to be in shipping liquid and use dom manipulation to change the breadcrumb to tax information breadcrumb once it is in that state which happens after a user has entered all shipping and payment info and pressed next in the case where there is tax after the shipping and payments information is entered instead of pressing the submit button the user will click next once tax information has been entered the submit order button will appear and then the order will be placed and user will be sent to the checkout confirmation page endpoint api profile addresses payload contactinfo name jason futch grand reserve dr city suwanee state ga zip country usa defaultshipping true defaultbilling true | 0 |
293,994 | 25,338,666,569 | IssuesEvent | 2022-11-18 19:14:40 | mvanzulli/Apolo.jl | https://api.github.com/repos/mvanzulli/Apolo.jl | closed | Split `runtests.jl` into unitary and end to end | enhancement tests | Split tests into features for each interface:
- Unitary: Single methods and features
- End to end: Model to known results | 1.0 | Split `runtests.jl` into unitary and end to end - Split tests into features for each interface:
- Unitary: Single methods and features
- End to end: Model to known results | test | split runtests jl into unitary and end to end split tests into features for each interface unitary single methods and features end to end model to known results | 1 |
186,144 | 21,920,056,294 | IssuesEvent | 2022-05-22 12:38:00 | turkdevops/sourcegraph | https://api.github.com/repos/turkdevops/sourcegraph | closed | WS-2021-0153 (High) detected in ejs-2.7.4.tgz - autoclosed | security vulnerability | ## WS-2021-0153 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ejs-2.7.4.tgz</b></p></summary>
<p>Embedded JavaScript templates</p>
<p>Library home page: <a href="https://registry.npmjs.org/ejs/-/ejs-2.7.4.tgz">https://registry.npmjs.org/ejs/-/ejs-2.7.4.tgz</a></p>
<p>
Dependency Hierarchy:
- core-5.3.18.tgz (Root Library)
- :x: **ejs-2.7.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/sourcegraph/commit/5a4a7def9ddff6354e22069c494feb0f30196e36">5a4a7def9ddff6354e22069c494feb0f30196e36</a></p>
<p>Found in base branch: <b>dev/seed-tool</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Arbitrary Code Injection vulnerability was found in ejs before 3.1.6. Caused by filename which isn't sanitized for display.
<p>Publish Date: 2021-01-22
<p>URL: <a href=https://github.com/mde/ejs/commit/abaee2be937236b1b8da9a1f55096c17dda905fd>WS-2021-0153</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/mde/ejs/issues/571">https://github.com/mde/ejs/issues/571</a></p>
<p>Release Date: 2021-01-22</p>
<p>Fix Resolution (ejs): 3.1.6</p>
<p>Direct dependency fix Resolution (@storybook/core): 6.4.22</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_test | 0
256,760 | 22,096,724,126 | IssuesEvent | 2022-06-01 10:43:59 | ubtue/tuefind | https://api.github.com/repos/ubtue/tuefind | closed | DOI wird nicht (nicht richtig) in Zotero und Citavi exportiert | System: RelBib ready for testing | When a record is exported from RelBib in RIS format, the DOI is not carried over into Zotero at all, and in Citavi it is not stored in the intended field.
Here is an example:
Zotero

Citavi

However, the DOI is contained in the export file:

A long-known problem? Presumably not something that can be solved on our side? | 1.0 | test | 1
216,851 | 16,820,583,977 | IssuesEvent | 2021-06-17 12:42:04 | Tencent/bk-ci | https://api.github.com/repos/Tencent/bk-ci | closed | 人工审核插件审核结果文案优化 | area/ci/frontend kind/plugins stage/test stage/uat test/passed uat/passed | ### Symptom:
Manual review plugin - users may not be clear about how the pipeline will act after they choose “同意” (Approve) or “驳回” (Reject).

### Expected:
Change “同意” to: “同意,继续执行流水线” / “Approve, and continue pipeline execution.”
Change “驳回” to: “驳回,该步骤判定为失败” / “Reject, and set current task status failed.” | 2.0 | test | 1
131,501 | 18,292,947,819 | IssuesEvent | 2021-10-05 17:11:47 | rightoneducation/righton-app | https://api.github.com/repos/rightoneducation/righton-app | closed | [Mobile] Add animation for voting page | mobile-app design | Add an animation to the voting page to show the updates from the teams' chosen answers | 1.0 | non_test | 0
50,803 | 12,560,632,437 | IssuesEvent | 2020-06-07 22:49:19 | CleverRaven/Cataclysm-DDA | https://api.github.com/repos/CleverRaven/Cataclysm-DDA | closed | some compilations warnings and black screen on armhf rpi4 | Code: Build [C++] stale | Hi, so I've compiled master (it was a request from a friend of mine to test it) and it only compiles with warnings disabled (something about "-Werror=limits"); so, after disabling warnings, it compiles.
But in game, I've started a game and I can see nothing more than the menu interface, minimaps, console, etc.; no warning whatsoever.
We have a 2.1 GL desktop profile.
```
pi@raspberrypi:~/Desktop/Cataclysm-DDA/build $ cmake ../
* Cataclysm: Dark Days Ahead is a roguelike set in a post-apocalyptic world.
_________ __ .__
\_ ___ \ _____ _/ |_ _____ ____ | | ___.__ ______ _____
/ \ \/ \__ \ \ __\\__ \ _/ ___\ | | < | | / ___/ / \
\ \____ / __ \_ | | / __ \_\ \___ | |__ \___ | \___ \ | Y Y \
\______ /(____ / |__| (____ / \___ >|____/ / ____|/____ >|__|_| /
\/ \/ \/ \/ \/ \/ \/
--= Dark Days Ahead =--
* https://cataclysmdda.org/
-- build environment --
-- Build realm is : Linux armv7l
-- CataclysmDDA build version is : 0.D-12997-gdfd736e7ea
-- CataclysmDDA build options --
-- CMAKE_INSTALL_PREFIX : /usr/local
-- BIN_PREFIX : /usr/local/bin
-- DATA_PREFIX : /usr/local/share/cataclysm-dda
-- LOCALE_PATH : /usr/local/share/locale
-- DESKTOP_ENTRY_PATH : /usr/local/share/applications
-- PIXMAPS_ENTRY_PATH : /usr/local/share/icons/hicolor
-- PIXMAPS_UNITY_ENTRY_PATH : /usr/local/share/icons/ubuntu-mono-dark
-- MANPAGE_ENTRY_PATH : /usr/local/share/man
-- GIT_BINARY : /usr/bin/git
-- DYNAMIC_LINKING : ON
-- TILES : ON
-- CURSES : OFF
-- SOUND : ON
-- BACKTRACE : ON
-- LOCALIZE : ON
-- USE_HOME_DIR : ON
-- LANGUAGES : de;es_AR;es_ES;fr;it_IT;ja;ko;pt_BR;ru;zh_CN;zh_TW
-- See INSTALL file for details and more info --
-- Searching for SDL2 library --
-- Searching for SDL2_TTF library --
-- Searching for SDL2_image library --
-- Searching for SDL2_mixer library --
-- Process LANGUAGES variable --
-- Add translation for de: de.po
-- Add translation for es_AR: es_AR.po
-- Add translation for es_ES: es_ES.po
-- Add translation for fr: fr.po
-- Add translation for it_IT: it_IT.po
-- Add translation for ja: ja.po
-- Add translation for ko: ko.po
-- Add translation for pt_BR: pt_BR.po
-- Add translation for ru: ru.po
-- Add translation for zh_CN: zh_CN.po
-- Add translation for zh_TW: zh_TW.po
-- Configuring done
-- Generating done
-- Build files have been written to: /home/pi/Desktop/Cataclysm-DDA/build
```

| 1.0 | non_test | 0
143,152 | 11,518,314,387 | IssuesEvent | 2020-02-14 10:15:35 | timgatzky/pct_tabletree_widget | https://api.github.com/repos/timgatzky/pct_tabletree_widget | closed | Keine Auswahl bei Tags in tl_member | testing | I have created a Custom Catalog for tl_member in order to extend it. That works without problems, except when I use a Tags attribute.
When I select the corresponding attributes in the frontend and click "Anwenden" (Apply), the complete page is loaded inside the actual DIV where the selected results should be displayed, and within that page only "tl_member" is output in the area of the Custom Catalog reader.

| 1.0 | test | 1
234,443 | 19,182,543,353 | IssuesEvent | 2021-12-04 16:59:33 | RPTools/maptool | https://api.github.com/repos/RPTools/maptool | closed | [Bug]: Token image problems | bug tested | ### Describe the Bug
If you attempt to drag and drop multiple tokens to a map, all but one of them fails out into Red X's. In addition, after that, all further tokens will fail out into Red X's until you reload the program (_not_ the campaign--the full program).
### To Reproduce
1. Open Maptool and go to a map.
2. Find two or more tokens in a folder elsewhere, and drop them on the map.
3. All but one of the tokens should display as a red X. All additional tokens placed after this will also display as red Xs.
4. If the program (not just the campaign) is reloaded after saving the campaign, the tokens will display correctly, as will all additional ones placed one at a time.
### Expected Behaviour
Multiple tokens dragged and dropped should display in a stack, and look normally once separated.
### Screenshots
_No response_
### MapTool Info
Maptool v. 1.11.0
### Desktop
Windows 10 Home, Build 19043.1348
### Additional Context
_No response_ | 1.0 | test | 1
291,384 | 8,924,188,930 | IssuesEvent | 2019-01-21 17:46:46 | Nevatrip/cart | https://api.github.com/repos/Nevatrip/cart | opened | Response for `getCart( sessionID )` | API Priority: High Status: Proposal Type: Maintenance | ### Example for mock-server
**request**: [/mock/:userID-or-sessionID/cart](//example.com/mock/:userID-or-sessionID/cart)
**response**:
```js
{
user: {
id: "8a0sdf3g8wr70g2qw348",
fullname: "Ivan Ivanov",
email: "example@email.com",
phone: "+79876543210",
},
promo: "code",
items: [
{
id,
title,
}
]
}
``` | 1.0 | non_test | 0