Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
82,890 | 7,855,493,691 | IssuesEvent | 2018-06-21 02:09:31 | Wohlstand/libADLMIDI | https://api.github.com/repos/Wohlstand/libADLMIDI | closed | Selection of "bend coefficient" value | Need a test Need an analyze | How to fix this piece of code below?
Is this variation of frequency needed, and then should this coefficient receive a dynamic value according to the emulator choice?
https://github.com/Wohlstand/libADLMIDI/blob/6586740caacfb17210a26e9618c31c54ec0da703/src/adlmidi_midiplay.cpp#L1689-L1695 | 1.0 | Selection of "bend coefficient" value - How to fix this piece of code below?
Is this variation of frequency needed, and then should this coefficient receive a dynamic value according to the emulator choice?
https://github.com/Wohlstand/libADLMIDI/blob/6586740caacfb17210a26e9618c31c54ec0da703/src/adlmidi_midiplay.cpp#L1689-L1695 | test | selection of bend coefficient value how to fix this piece of code below is this variation of frequency needed and then should this coefficient receive a dynamic value according to the emulator choice | 1 |
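The libADLMIDI row above asks how a "bend coefficient" should map MIDI pitch-bend values to a frequency change, and whether it should depend on the chosen emulator. As a point of reference, the standard 14-bit MIDI pitch-bend math can be sketched as follows; the function and parameter names are illustrative, not taken from libADLMIDI, and the emulator-dependent coefficient the issue asks about would replace the fixed `bend_range_semitones` here.

```python
def bend_multiplier(bend_value: int, bend_range_semitones: float = 2.0) -> float:
    """Frequency multiplier for a 14-bit MIDI pitch-bend value (0..16383).

    8192 is the centre (no bend); `bend_range_semitones` plays the role of
    the "bend coefficient" discussed in the issue (2 semitones is the common
    General MIDI default).
    """
    semitones = (bend_value - 8192) / 8192.0 * bend_range_semitones
    return 2.0 ** (semitones / 12.0)
```

A centre value of 8192 yields a multiplier of exactly 1.0; whether the range should track the chosen emulator is precisely what the issue leaves open.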
518,985 | 15,038,341,261 | IssuesEvent | 2021-02-02 17:20:04 | otasoft/microservice-template | https://api.github.com/repos/otasoft/microservice-template | closed | refactor utils module | enhancement high priority | ## Feature Request
## Is your feature request related to a problem? Please describe.
<!-- A clear and concise description of what the problem is. Ex. I have an issue when [...] -->
## Describe the solution you'd like
<!-- A clear and concise description of what you want to happen. Add any considered drawbacks. -->
- remove RpcExceptionService -> I have not seen anyone using solution similar to that. And it is also not efficient if your microservice does not use all provided exceptions (in that case we are implementing methods that won't be used in this project)
- move validation service to database module and rename it to something similiar to database naming convention (as this service validates only database errors)
- move mocks higher away from utils module
- remove utils module
## Teachability, Documentation, Adoption, Migration Strategy
<!-- If you can, explain how users will be able to use this and possibly write out a version the docs. Maybe a screenshot or design? -->
## What is the motivation / use case for changing the behavior?
<!-- Describe the motivation or the concrete use case. --> | 1.0 | refactor utils module - ## Feature Request
## Is your feature request related to a problem? Please describe.
<!-- A clear and concise description of what the problem is. Ex. I have an issue when [...] -->
## Describe the solution you'd like
<!-- A clear and concise description of what you want to happen. Add any considered drawbacks. -->
- remove RpcExceptionService -> I have not seen anyone using solution similar to that. And it is also not efficient if your microservice does not use all provided exceptions (in that case we are implementing methods that won't be used in this project)
- move validation service to database module and rename it to something similiar to database naming convention (as this service validates only database errors)
- move mocks higher away from utils module
- remove utils module
## Teachability, Documentation, Adoption, Migration Strategy
<!-- If you can, explain how users will be able to use this and possibly write out a version the docs. Maybe a screenshot or design? -->
## What is the motivation / use case for changing the behavior?
<!-- Describe the motivation or the concrete use case. --> | non_test | refactor utils module feature request is your feature request related to a problem please describe describe the solution you d like remove rpcexceptionservice i have not seen anyone using solution similar to that and it is also not efficient if your microservice does not use all provided exceptions in that case we are implementing methods that won t be used in this project move validation service to database module and rename it to something similiar to database naming convention as this service validates only database errors move mocks higher away from utils module remove utils module teachability documentation adoption migration strategy what is the motivation use case for changing the behavior | 0 |
296,925 | 25,584,693,690 | IssuesEvent | 2022-12-01 08:23:38 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | ccl/sqlproxyccl: TestCancelQuery failed | C-test-failure O-robot branch-master | ccl/sqlproxyccl.TestCancelQuery [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/7784747?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/7784747?buildTab=artifacts#/) on master @ [db71e045cc3f2937bf20170d254dd7e54a620ffb](https://github.com/cockroachdb/cockroach/commits/db71e045cc3f2937bf20170d254dd7e54a620ffb):
```
=== RUN TestCancelQuery
test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/a455c59caed0cf9d0a0298ad88bfe7a2/logTestCancelQuery177220090
test_log_scope.go:79: use -show-logs to present logs inline
=== CONT TestCancelQuery
proxy_handler_test.go:1246: -- test log scope end --
test logs left over in: /artifacts/tmp/_tmp/a455c59caed0cf9d0a0298ad88bfe7a2/logTestCancelQuery177220090
--- FAIL: TestCancelQuery (48.18s)
=== RUN TestCancelQuery/ignore_unknown_secret_key
proxy_handler_test.go:996: condition failed to evaluate within 45s: expected metrics to update, got: QueryCancelSuccessful=1, QueryCancelIgnored=0 QueryCancelForwarded=0 QueryCancelReceivedPGWire=0 QueryCancelReceivedHTTP=0
--- FAIL: TestCancelQuery/ignore_unknown_secret_key (45.12s)
```
<p>Parameters: <code>TAGS=bazel,gss</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/sql-experience @cockroachdb/server
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestCancelQuery.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 1.0 | ccl/sqlproxyccl: TestCancelQuery failed - ccl/sqlproxyccl.TestCancelQuery [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/7784747?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/7784747?buildTab=artifacts#/) on master @ [db71e045cc3f2937bf20170d254dd7e54a620ffb](https://github.com/cockroachdb/cockroach/commits/db71e045cc3f2937bf20170d254dd7e54a620ffb):
```
=== RUN TestCancelQuery
test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/a455c59caed0cf9d0a0298ad88bfe7a2/logTestCancelQuery177220090
test_log_scope.go:79: use -show-logs to present logs inline
=== CONT TestCancelQuery
proxy_handler_test.go:1246: -- test log scope end --
test logs left over in: /artifacts/tmp/_tmp/a455c59caed0cf9d0a0298ad88bfe7a2/logTestCancelQuery177220090
--- FAIL: TestCancelQuery (48.18s)
=== RUN TestCancelQuery/ignore_unknown_secret_key
proxy_handler_test.go:996: condition failed to evaluate within 45s: expected metrics to update, got: QueryCancelSuccessful=1, QueryCancelIgnored=0 QueryCancelForwarded=0 QueryCancelReceivedPGWire=0 QueryCancelReceivedHTTP=0
--- FAIL: TestCancelQuery/ignore_unknown_secret_key (45.12s)
```
<p>Parameters: <code>TAGS=bazel,gss</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/sql-experience @cockroachdb/server
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestCancelQuery.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| test | ccl sqlproxyccl testcancelquery failed ccl sqlproxyccl testcancelquery with on master run testcancelquery test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline cont testcancelquery proxy handler test go test log scope end test logs left over in artifacts tmp tmp fail testcancelquery run testcancelquery ignore unknown secret key proxy handler test go condition failed to evaluate within expected metrics to update got querycancelsuccessful querycancelignored querycancelforwarded querycancelreceivedpgwire querycancelreceivedhttp fail testcancelquery ignore unknown secret key parameters tags bazel gss help see also cc cockroachdb sql experience cockroachdb server | 1 |
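The CockroachDB failure above is a test that polls metrics for up to 45 s before giving up ("condition failed to evaluate within 45s"). The underlying retry-until-deadline pattern — which CockroachDB implements in Go as `testutils.SucceedsSoon` — can be sketched in Python as follows; this is an illustration of the pattern, not code from the repository.

```python
import time

def succeeds_soon(condition, timeout_s: float = 45.0, interval_s: float = 0.1) -> bool:
    """Poll `condition` until it returns True or `timeout_s` elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval_s)
    # One final check at the deadline, so a late success still counts.
    return condition()
```

A failure like the one logged above means the condition never held within the window — here that would be `succeeds_soon` returning `False`, and the test reporting the last metric values it saw.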
99,677 | 8,708,076,053 | IssuesEvent | 2018-12-06 09:52:56 | club-soda/club-soda-guide | https://api.github.com/repos/club-soda/club-soda-guide | closed | Filter Beer, Wine, Cider and Spirits by ABV | Nisha - Consumer please-test priority-3 question | As a customer viewing all drinks,
I'd like to filter Beer, Wine, Cider and Spirits by ABV levels: 0.05, 0.5, 1-2.5 and 2.5 - 8%,
so I only see drinks I am comfortable drinking | 1.0 | Filter Beer, Wine, Cider and Spirits by ABV - As a customer viewing all drinks,
I'd like to filter Beer, Wine, Cider and Spirits by ABV levels: 0.05, 0.5, 1-2.5 and 2.5 - 8%,
so I only see drinks I am comfortable drinking | test | filter beer wine cider and spirits by abv as a customer viewing all drinks i d like to filter beer wine cider and spirits by abv levels and so i only see drinks i am comfortable drinking | 1 |
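The Club Soda request above filters drinks into ABV bands (0.05, 0.5, 1-2.5 and 2.5 - 8%). One plausible reading of those cut-offs as half-open ranges can be sketched as follows; the band labels and the treatment of 0.05 and 0.5 as upper bounds are assumptions for illustration, not the guide's actual implementation.

```python
# Half-open bands: a drink matches a band when low < abv <= high.
ABV_BANDS = {
    "up to 0.05%": (0.0, 0.05),
    "up to 0.5%": (0.05, 0.5),
    "1% - 2.5%": (1.0, 2.5),
    "2.5% - 8%": (2.5, 8.0),
}

def drinks_in_band(drinks, band: str):
    """Return the drinks whose `abv` value falls inside the chosen band."""
    low, high = ABV_BANDS[band]
    return [d for d in drinks if low < d["abv"] <= high]
```

Note the gap between 0.5% and 1% in the issue's own list — a real implementation would have to decide which band (if any) drinks in that range belong to.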
79,932 | 15,303,570,523 | IssuesEvent | 2021-02-24 15:56:54 | Regalis11/Barotrauma | https://api.github.com/repos/Regalis11/Barotrauma | closed | Multiplayer - Client Ghost Damage when crashing into walls/spires | Bug Code Medium Prio Networking | - [x] I have searched the issue tracker to check if the issue has already been reported.
**Description**
Sometimes when crashing into level walls and spires, your sub may not be actually damaged enough to leak, but the clients might think it is. The damage is visible to the clients, but is not there on the server. Welding the walls on the client shows no repair, but is repaired otherwise.
**Version**
~0.12.0.2 | 1.0 | Multiplayer - Client Ghost Damage when crashing into walls/spires - - [x] I have searched the issue tracker to check if the issue has already been reported.
**Description**
Sometimes when crashing into level walls and spires, your sub may not be actually damaged enough to leak, but the clients might think it is. The damage is visible to the clients, but is not there on the server. Welding the walls on the client shows no repair, but is repaired otherwise.
**Version**
~0.12.0.2 | non_test | multiplayer client ghost damage when crashing into walls spires i have searched the issue tracker to check if the issue has already been reported description sometimes when crashing into level walls and spires your sub may not be actually damaged enough to leak but the clients might think it is the damage is visible to the clients but is not there on the server welding the walls on the client shows no repair but is repaired otherwise version | 0 |
22,928 | 20,624,367,602 | IssuesEvent | 2022-03-07 20:47:00 | Leafwing-Studios/leafwing_2d | https://api.github.com/repos/Leafwing-Studios/leafwing_2d | opened | Implement Component and useful methods for `Option<Direction>` | usability | This is a common pattern, and has a clear meaning as a component. | True | Implement Component and useful methods for `Option<Direction>` - This is a common pattern, and has a clear meaning as a component. | non_test | implement component and useful methods for option this is a common pattern and has a clear meaning as a component | 0 |
30,351 | 13,233,394,075 | IssuesEvent | 2020-08-18 14:44:30 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | node_modules.tar.gz missing? | Pri2 app-service/svc cxp product-question triaged | Hello,
when following the guideline i get an error message after deploying to azure app services.
Please have a look on the attached logs.
What can i do for solving this issue?
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
2020-08-13T07:20:34.486954994Z _____
2020-08-13T07:20:34.486986796Z / _ \ __________ _________ ____
2020-08-13T07:20:34.486992296Z / /_\ \___ / | \_ __ \_/ __ \
2020-08-13T07:20:34.486996397Z / | \/ /| | /| | \/\ ___/
2020-08-13T07:20:34.487000097Z \____|__ /_____ \____/ |__| \___ >
2020-08-13T07:20:34.487004097Z \/ \/ \/
2020-08-13T07:20:34.487007797Z A P P S E R V I C E O N L I N U X
2020-08-13T07:20:34.487011398Z
2020-08-13T07:20:34.487014698Z Documentation: http://aka.ms/webapp-linux
2020-08-13T07:20:34.487018098Z NodeJS quickstart: https://aka.ms/node-qs
2020-08-13T07:20:34.487021598Z NodeJS Version : v10.19.0
2020-08-13T07:20:34.487024999Z Note: Any data outside '/home' is not persisted
2020-08-13T07:20:34.487028599Z
2020-08-13T07:20:35.228031311Z Found build manifest file at '/home/site/wwwroot/oryx-manifest.toml'. Deserializing it...
2020-08-13T07:20:35.230085855Z Build Operation ID: |p2MDwwc757s=.7fecfc40_
2020-08-13T07:20:36.521336590Z Writing output script to '/opt/startup/startup.sh'
2020-08-13T07:20:37.198599928Z Running #!/bin/sh
2020-08-13T07:20:37.198935252Z
2020-08-13T07:20:37.198944052Z # Enter the source directory to make sure the script runs where the user expects
2020-08-13T07:20:37.198948653Z cd "/home/site/wwwroot"
2020-08-13T07:20:37.198952553Z
2020-08-13T07:20:37.198956353Z export NODE_PATH=$(npm root --quiet -g):$NODE_PATH
2020-08-13T07:20:37.199196170Z if [ -z "$PORT" ]; then
2020-08-13T07:20:37.199203470Z export PORT=8080
2020-08-13T07:20:37.199207871Z fi
2020-08-13T07:20:37.199211371Z
2020-08-13T07:20:37.199427186Z echo Found tar.gz based node_modules.
2020-08-13T07:20:37.199434487Z extractionCommand="tar -xzf node_modules.tar.gz -C /node_modules"
2020-08-13T07:20:37.199438687Z echo "Removing existing modules directory from root..."
2020-08-13T07:20:37.199442587Z rm -fr /node_modules
2020-08-13T07:20:37.200493961Z mkdir -p /node_modules
2020-08-13T07:20:37.200504262Z echo Extracting modules...
2020-08-13T07:20:37.200508462Z $extractionCommand
2020-08-13T07:20:37.201458729Z export NODE_PATH="/node_modules":$NODE_PATH
2020-08-13T07:20:37.201469730Z export PATH=/node_modules/.bin:$PATH
2020-08-13T07:20:37.201473930Z if [ -d node_modules ]; then
2020-08-13T07:20:37.201477630Z mv -f node_modules _del_node_modules || true
2020-08-13T07:20:37.201481630Z fi
2020-08-13T07:20:37.201485131Z
2020-08-13T07:20:37.207878079Z if [ -d /node_modules ]; then
2020-08-13T07:20:37.207905281Z ln -sfn /node_modules ./node_modules
2020-08-13T07:20:37.207910082Z fi
2020-08-13T07:20:37.207913582Z
2020-08-13T07:20:37.208191201Z echo "Done."
2020-08-13T07:20:37.208354313Z pm2 start --no-daemon /opt/startup/default-static-site.js
2020-08-13T07:20:38.893478889Z Found tar.gz based node_modules.
2020-08-13T07:20:38.899765435Z Removing existing modules directory from root...
2020-08-13T07:20:38.939535950Z Extracting modules...
2020-08-13T07:20:38.990510160Z tar (child): node_modules.tar.gz: Cannot open: No such file or directory
2020-08-13T07:20:38.991139704Z tar (child): Error is not recoverable: exiting now
2020-08-13T07:20:39.007081833Z tar: Child returned status 2
2020-08-13T07:20:39.007099134Z tar: Error is not recoverable: exiting now
2020-08-13T07:20:39.020012148Z Done.
'''''''''''''''''''''''''''''''''''''''''''''''''
Yours Nils
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d4ce9745-5765-dae3-2d5b-2a24cfd787bb
* Version Independent ID: 8d21fd6b-15a4-dc2a-c8c2-fc0e04c15f7f
* Content: [Tutorial: Node.js app with MongoDB - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/tutorial-nodejs-mongodb-app?pivots=platform-linux)
* Content Source: [articles/app-service/tutorial-nodejs-mongodb-app.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service/tutorial-nodejs-mongodb-app.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin** | 1.0 | node_modules.tar.gz missing? - Hello,
when following the guideline i get an error message after deploying to azure app services.
Please have a look on the attached logs.
What can i do for solving this issue?
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
2020-08-13T07:20:34.486954994Z _____
2020-08-13T07:20:34.486986796Z / _ \ __________ _________ ____
2020-08-13T07:20:34.486992296Z / /_\ \___ / | \_ __ \_/ __ \
2020-08-13T07:20:34.486996397Z / | \/ /| | /| | \/\ ___/
2020-08-13T07:20:34.487000097Z \____|__ /_____ \____/ |__| \___ >
2020-08-13T07:20:34.487004097Z \/ \/ \/
2020-08-13T07:20:34.487007797Z A P P S E R V I C E O N L I N U X
2020-08-13T07:20:34.487011398Z
2020-08-13T07:20:34.487014698Z Documentation: http://aka.ms/webapp-linux
2020-08-13T07:20:34.487018098Z NodeJS quickstart: https://aka.ms/node-qs
2020-08-13T07:20:34.487021598Z NodeJS Version : v10.19.0
2020-08-13T07:20:34.487024999Z Note: Any data outside '/home' is not persisted
2020-08-13T07:20:34.487028599Z
2020-08-13T07:20:35.228031311Z Found build manifest file at '/home/site/wwwroot/oryx-manifest.toml'. Deserializing it...
2020-08-13T07:20:35.230085855Z Build Operation ID: |p2MDwwc757s=.7fecfc40_
2020-08-13T07:20:36.521336590Z Writing output script to '/opt/startup/startup.sh'
2020-08-13T07:20:37.198599928Z Running #!/bin/sh
2020-08-13T07:20:37.198935252Z
2020-08-13T07:20:37.198944052Z # Enter the source directory to make sure the script runs where the user expects
2020-08-13T07:20:37.198948653Z cd "/home/site/wwwroot"
2020-08-13T07:20:37.198952553Z
2020-08-13T07:20:37.198956353Z export NODE_PATH=$(npm root --quiet -g):$NODE_PATH
2020-08-13T07:20:37.199196170Z if [ -z "$PORT" ]; then
2020-08-13T07:20:37.199203470Z export PORT=8080
2020-08-13T07:20:37.199207871Z fi
2020-08-13T07:20:37.199211371Z
2020-08-13T07:20:37.199427186Z echo Found tar.gz based node_modules.
2020-08-13T07:20:37.199434487Z extractionCommand="tar -xzf node_modules.tar.gz -C /node_modules"
2020-08-13T07:20:37.199438687Z echo "Removing existing modules directory from root..."
2020-08-13T07:20:37.199442587Z rm -fr /node_modules
2020-08-13T07:20:37.200493961Z mkdir -p /node_modules
2020-08-13T07:20:37.200504262Z echo Extracting modules...
2020-08-13T07:20:37.200508462Z $extractionCommand
2020-08-13T07:20:37.201458729Z export NODE_PATH="/node_modules":$NODE_PATH
2020-08-13T07:20:37.201469730Z export PATH=/node_modules/.bin:$PATH
2020-08-13T07:20:37.201473930Z if [ -d node_modules ]; then
2020-08-13T07:20:37.201477630Z mv -f node_modules _del_node_modules || true
2020-08-13T07:20:37.201481630Z fi
2020-08-13T07:20:37.201485131Z
2020-08-13T07:20:37.207878079Z if [ -d /node_modules ]; then
2020-08-13T07:20:37.207905281Z ln -sfn /node_modules ./node_modules
2020-08-13T07:20:37.207910082Z fi
2020-08-13T07:20:37.207913582Z
2020-08-13T07:20:37.208191201Z echo "Done."
2020-08-13T07:20:37.208354313Z pm2 start --no-daemon /opt/startup/default-static-site.js
2020-08-13T07:20:38.893478889Z Found tar.gz based node_modules.
2020-08-13T07:20:38.899765435Z Removing existing modules directory from root...
2020-08-13T07:20:38.939535950Z Extracting modules...
2020-08-13T07:20:38.990510160Z tar (child): node_modules.tar.gz: Cannot open: No such file or directory
2020-08-13T07:20:38.991139704Z tar (child): Error is not recoverable: exiting now
2020-08-13T07:20:39.007081833Z tar: Child returned status 2
2020-08-13T07:20:39.007099134Z tar: Error is not recoverable: exiting now
2020-08-13T07:20:39.020012148Z Done.
'''''''''''''''''''''''''''''''''''''''''''''''''
Yours Nils
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d4ce9745-5765-dae3-2d5b-2a24cfd787bb
* Version Independent ID: 8d21fd6b-15a4-dc2a-c8c2-fc0e04c15f7f
* Content: [Tutorial: Node.js app with MongoDB - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/tutorial-nodejs-mongodb-app?pivots=platform-linux)
* Content Source: [articles/app-service/tutorial-nodejs-mongodb-app.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service/tutorial-nodejs-mongodb-app.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin** | non_test | node modules tar gz missing hello when following the guideline i get an error message after deploying to azure app services please have a look on the attached logs what can i do for solving this issue a p p s e r v i c e o n l i n u x documentation nodejs quickstart nodejs version note any data outside home is not persisted found build manifest file at home site wwwroot oryx manifest toml deserializing it build operation id writing output script to opt startup startup sh running bin sh enter the source directory to make sure the script runs where the user expects cd home site wwwroot export node path npm root quiet g node path if then export port fi echo found tar gz based node modules extractioncommand tar xzf node modules tar gz c node modules echo removing existing modules directory from root rm fr node modules mkdir p node modules echo extracting modules extractioncommand export node path node modules node path export path node modules bin path if then mv f node modules del node modules true fi if then ln sfn node modules node modules fi echo done start no daemon opt startup default static site js found tar gz based node modules removing existing modules directory from root extracting modules tar child node modules tar gz cannot open no such file or directory tar child error is not recoverable exiting now tar child returned status tar error is not recoverable exiting now done yours nils document details โ do not edit this section it is required for docs microsoft com โ github issue linking id version independent id content content source service app service github login cephalin microsoft alias cephalin | 0 |
123,947 | 16,551,534,796 | IssuesEvent | 2021-05-28 09:08:31 | google/web-stories-wp | https://api.github.com/repos/google/web-stories-wp | closed | Design System: Input Variant | Group: Design System Pod: Pea Type: Enhancement | ## Feature Description
Now that the editor's getting redesigned (yay!) we're noticing some additions to the new designs that weren't sketched out as components in the design system that we originally based all of our work on. There are inputs in the editor that have a label (either text based or icon w/ aria-label) that are right aligned inside the input.
## Alternatives Considered
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
## Additional Context
Screenshots of panels in[ figma](https://www.figma.com/file/bMhG3KyrJF8vIAODgmbeqT/Design-System?node-id=221%3A13978) as of Feb 11, 2021

---
_Do not alter or remove anything below. The following sections will be managed by moderators only._
## Acceptance Criteria
- Input variant should be added to design system
- Should have storybook
- Any relevant tests
- Should have RTL version
- Should be accessible
- Should meet i18n requirements
## Implementation Brief
This can probably be a variant on the existing input.
| 1.0 | Design System: Input Variant - ## Feature Description
Now that the editor's getting redesigned (yay!) we're noticing some additions to the new designs that weren't sketched out as components in the design system that we originally based all of our work on. There are inputs in the editor that have a label (either text based or icon w/ aria-label) that are right aligned inside the input.
## Alternatives Considered
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
## Additional Context
Screenshots of panels in[ figma](https://www.figma.com/file/bMhG3KyrJF8vIAODgmbeqT/Design-System?node-id=221%3A13978) as of Feb 11, 2021

---
_Do not alter or remove anything below. The following sections will be managed by moderators only._
## Acceptance Criteria
- Input variant should be added to design system
- Should have storybook
- Any relevant tests
- Should have RTL version
- Should be accessible
- Should meet i18n requirements
## Implementation Brief
This can probably be a variant on the existing input.
| non_test | design system input variant feature description now that the editor s getting redesigned yay we re noticing some additions to the new designs that weren t sketched out as components in the design system that we originally based all of our work on there are inputs in the editor that have a label either text based or icon w aria label that are right aligned inside the input alternatives considered additional context screenshots of panels in as of feb do not alter or remove anything below the following sections will be managed by moderators only acceptance criteria input variant should be added to design system should have storybook any relevant tests should have rtl version should be accessible should meet requirements implementation brief this can probably be a variant on the existing input | 0 |
283,488 | 21,316,648,058 | IssuesEvent | 2022-04-16 11:53:48 | spencernah/pe | https://api.github.com/repos/spencernah/pe | opened | In UG, explanation in create relation among persons is confusing | severity.Low type.DocumentationBug | The below explanation from the "relate" command may be too technical to be in a User Guide which might not be helpful for a non-tehnical user.
The relate command has a 1 to n relationship
TO_SEQ_NO_OF_CONTACT on the left hand side of <- is the target whom persons
going to relate to
FROM_SEQ_NO_OF_CONTACTi on the right hand side of <- is the group to relate to
the target one by one
<!--session: 1650104048631-0cd6c1d3-70f0-4888-a90c-34f0e9b1f41b-->
<!--Version: Web v3.4.2--> | 1.0 | In UG, explanation in create relation among persons is confusing - The below explanation from the "relate" command may be too technical to be in a User Guide which might not be helpful for a non-tehnical user.
The relate command has a 1 to n relationship
TO_SEQ_NO_OF_CONTACT on the left hand side of <- is the target whom persons
going to relate to
FROM_SEQ_NO_OF_CONTACTi on the right hand side of <- is the group to relate to
the target one by one
<!--session: 1650104048631-0cd6c1d3-70f0-4888-a90c-34f0e9b1f41b-->
<!--Version: Web v3.4.2--> | non_test | in ug explanation in create relation among persons is confusing the below explanation from the relate command may be too technical to be in a user guide which might not be helpful for a non tehnical user the relate command has a to n relationship to seq no of contact on the left hand side of is the target whom persons going to relate to from seq no of contacti on the right hand side of is the group to relate to the target one by one | 0 |
124,849 | 10,325,834,521 | IssuesEvent | 2019-09-01 20:45:06 | mautic/mautic | https://api.github.com/repos/mautic/mautic | closed | Undefined variable: submissions after 2.15 update | Ready To Test Regression | [//]: # ( Invisible comment:
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
Before you create the issue:
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
Search for similar report among other reported issues.
Learn how to troubleshoot at https://www.mautic.org/docs/en/tips/troubleshooting.html
Use drag&drop to attach images or other files )
## Bug Description
Since I upgraded to Mautic 2.15 I noticed the 'undefined variable' errors in the Mautic log (see below).
They appear when a focus item is edited or display. But The focus popups seem to work, and
I hope that I won't discover other side effects.
| Q | A
| --- | ---
| Mautic version | 2.15
| PHP version | 7.0.33
| Browser | Any
### Steps to reproduce
1. Upgrade Mautic from 2.14 to 2.15
2. Display a Mautic focus item with attached form from an external site
3. Or edit a focus item, the errors are also displayed when you click 'save and close'
### Log errors
[2019-01-16 09:58:41] mautic.NOTICE: PHP Notice - Undefined variable: submissions - in file /www/mautic/app/bundles/FormBundle/Views/Builder/form.html.php - at line 59 [] []
[2019-01-16 09:58:41] mautic.NOTICE: PHP Notice - Undefined variable: lead - in file /www/mautic/app/bundles/FormBundle/Views/Builder/form.html.php - at line 59 [] []
[//]: # ( Invisible comment:
Please check for related errors in the latest log file in [mautic root]/app/log/ and/or the web server's logs and post them here. Be sure to remove sensitive information if applicable. )
| 1.0 | Undefined variable: submissions after 2.15 update - [//]: # ( Invisible comment:
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
Before you create the issue:
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
Search for similar report among other reported issues.
Learn how to troubleshoot at https://www.mautic.org/docs/en/tips/troubleshooting.html
Use drag&drop to attach images or other files )
## Bug Description
Since I upgraded to Mautic 2.15 I noticed the 'undefined variable' errors in the Mautic log (see below).
They appear when a focus item is edited or display. But The focus popups seem to work, and
I hope that I won't discover other side effects.
| Q | A
| --- | ---
| Mautic version | 2.15
| PHP version | 7.0.33
| Browser | Any
### Steps to reproduce
1. Upgrade Mautic from 2.14 to 2.15
2. Display a Mautic focus item with attached form from an external site
3. Or edit a focus item, the errors are also displayed when you click 'save and close'
### Log errors
[2019-01-16 09:58:41] mautic.NOTICE: PHP Notice - Undefined variable: submissions - in file /www/mautic/app/bundles/FormBundle/Views/Builder/form.html.php - at line 59 [] []
[2019-01-16 09:58:41] mautic.NOTICE: PHP Notice - Undefined variable: lead - in file /www/mautic/app/bundles/FormBundle/Views/Builder/form.html.php - at line 59 [] []
[//]: # ( Invisible comment:
Please check for related errors in the latest log file in [mautic root]/app/log/ and/or the web server's logs and post them here. Be sure to remove sensitive information if applicable. )
| test | undefined variable submissions after update invisible comment iiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii before you create the issue iiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii search for similar report among other reported issues learn how to troubleshoot at use drag drop to attach images or other files bug description since i upgraded to mautic i noticed the undefined variable errors in the mautic log see below they appear when a focus item is edited or display but the focus popups seem to work and i hope that i won t discover other side effects q a mautic version php version browser any steps to reproduce upgrade mautic from to display a mautic focus item with attached form from an external site or edit a focus item the errors are also displayed when you click save and close log errors mautic notice php notice undefined variable submissions in file www mautic app bundles formbundle views builder form html php at line mautic notice php notice undefined variable lead in file www mautic app bundles formbundle views builder form html php at line invisible comment please check for related errors in the latest log file in app log and or the web server s logs and post them here be sure to remove sensitive information if applicable | 1 |
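The Mautic notices above come from a template reading variables (`submissions`, `lead`) that were never passed in. A common guard — sketched here in Python rather than Mautic's PHP — is to merge the caller-supplied context over safe defaults so every name the template uses is always defined; the two variable names are taken from the log, everything else is illustrative.

```python
def render_context(supplied: dict) -> dict:
    """Merge caller-supplied template variables over safe defaults.

    Guards against "undefined variable" notices like the ones logged above:
    any name missing from `supplied` falls back to None instead of being
    absent entirely when the template reads it.
    """
    defaults = {"submissions": None, "lead": None}
    return {**defaults, **supplied}
```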
281,422 | 24,392,145,348 | IssuesEvent | 2022-10-04 16:01:06 | uclab-potsdam/klimataz | https://api.github.com/repos/uclab-potsdam/klimataz | opened | wrong Landkreis opens when clicking on list | bug feedback from testing | sometimes, not the landkreis that was clicked on in the list on the backside opens, but another one.
(not sure how often though. In the incident, he clicked on Hildesheim and Oder-Spree opened). | 1.0 | | test | wrong landkreis opens when clicking on list sometimes not the landkreis that was clicked on in the list on the backside opens but another one not sure how often though in the incident he clicked on hildesheim and oder spree opened | 1 |
21,725 | 3,917,329,613 | IssuesEvent | 2016-04-21 07:49:03 | gtoubiana/acte | https://api.github.com/repos/gtoubiana/acte | opened | Le script ne fonctionne plus sous IE8/win7 via SAUCELABS | issue: bug scope: tests type: fix | ```
IE 8.0.0 (Windows 7 0.0.0) Constructeur acte.Jour() retourne this : un objet FAILED
TypeError: Cannot call a class as a function
IE 8.0.0 (Windows 7 0.0.0) Constructeur acte.Jour() retourne this.variables.gregorien.od : un objet Date grégorienne (ou Undefined) FAILED
TypeError: Cannot call a class as a function
IE 8.0.0 (Windows 7 0.0.0) Constructeur acte.Jour() retourne this.variables.gregorien.ac : l'année grégorienne en chiffres FAILED
TypeError: Cannot call a class as a function
IE 8.0.0 (Windows 7 0.0.0) Constructeur acte.Jour() retourne this.variables.gregorien.mc : le mois grégorien en chiffres FAILED
TypeError: Cannot call a class as a function
IE 8.0.0 (Windows 7 0.0.0) Constructeur acte.Jour() retourne this.variables.gregorien.jmc : le jour du mois grégorien en chiffres FAILED
TypeError: Cannot call a class as a function
IE 8.0.0 (Windows 7 0.0.0) Constructeur acte.Jour() retourne this.variables.julien.od : un objet Date julienne (ou Undefined) FAILED
TypeError: Cannot call a class as a function
IE 8.0.0 (Windows 7 0.0.0) Constructeur acte.Jour() retourne this.variables.julien.ac : l'année julienne en chiffres FAILED
TypeError: Cannot call a class as a function
IE 8.0.0 (Windows 7 0.0.0) Constructeur acte.Jour() retourne this.variables.julien.mc : le mois julien en chiffres FAILED
TypeError: Cannot call a class as a function
IE 8.0.0 (Windows 7 0.0.0) Constructeur acte.Jour() retourne this.variables.julien.jmc : le jour du mois julien en chiffres FAILED
TypeError: Cannot call a class as a function
IE 8.0.0 (Windows 7 0.0.0) Constructeur acte.Jour() retourne this.variables.julien.jj : le nombre de jours juliens FAILED
TypeError: Cannot call a class as a function
IE 8.0.0 (Windows 7 0.0.0) Constructeur acte.Jour() retourne this.variables.republicain.ac : l'année républicaine en chiffres (ou Undefined) FAILED
TypeError: Cannot call a class as a function
IE 8.0.0 (Windows 7 0.0.0) Constructeur acte.Jour() retourne this.variables.republicain.mc : le mois républicain en chiffres FAILED
TypeError: Cannot call a class as a function
IE 8.0.0 (Windows 7 0.0.0) Constructeur acte.Jour() retourne this.variables.republicain.jmc : le jour du mois républicain en chiffres FAILED
TypeError: Cannot call a class as a function
IE 8.0.0 (Windows 7 0.0.0) Constructeur acte.Jour() retourne this.variables.republicain.dc : la décade républicaine en chiffres FAILED
TypeError: Cannot call a class as a function
IE 8.0.0 (Windows 7 0.0.0) Constructeur acte.Jour() retourne this.variables.republicain.jdc : le jour de la décade républicaine en chiffres FAILED
TypeError: Cannot call a class as a function
``` | 1.0 | Le script ne fonctionne plus sous IE8/win7 via SAUCELABS - ```
``` | test | le script ne fonctionne plus sous via saucelabs ie windows constructeur acte jour retourne this un objet failed typeerror cannot call a class as a function ie windows constructeur acte jour retourne this variables gregorien od un objet date grégorienne ou undefined failed typeerror cannot call a class as a function ie windows constructeur acte jour retourne this variables gregorien ac l année grégorienne en chiffres failed typeerror cannot call a class as a function ie windows constructeur acte jour retourne this variables gregorien mc le mois grégorien en chiffres failed typeerror cannot call a class as a function ie windows constructeur acte jour retourne this variables gregorien jmc le jour du mois grégorien en chiffres failed typeerror cannot call a class as a function ie windows constructeur acte jour retourne this variables julien od un objet date julienne ou undefined failed typeerror cannot call a class as a function ie windows constructeur acte jour retourne this variables julien ac l année julienne en chiffres failed typeerror cannot call a class as a function ie windows constructeur acte jour retourne this variables julien mc le mois julien en chiffres failed typeerror cannot call a class as a function ie windows constructeur acte jour retourne this variables julien jmc le jour du mois julien en chiffres failed typeerror cannot call a class as a function ie windows constructeur acte jour retourne this variables julien jj le nombre de jours juliens failed typeerror cannot call a class as a function ie windows constructeur acte jour retourne this variables republicain ac l année républicaine en chiffres ou undefined failed typeerror cannot call a class as a function ie windows constructeur acte jour retourne this variables republicain mc le mois républicain en chiffres failed typeerror cannot call a class as a function ie windows constructeur acte jour retourne this variables republicain jmc le jour du mois républicain en chiffres failed typeerror cannot call a class as a function ie windows constructeur acte jour retourne this variables republicain dc la décade républicaine en chiffres failed typeerror cannot call a class as a function ie windows constructeur acte jour retourne this variables republicain jdc le jour de la décade républicaine en chiffres failed typeerror cannot call a class as a function | 1 |
416,510 | 12,147,701,299 | IssuesEvent | 2020-04-24 13:27:41 | carbon-design-system/ibm-dotcom-library | https://api.github.com/repos/carbon-design-system/ibm-dotcom-library | closed | Create project based security matrix for Jenkins | Airtable Done dev priority: high | <!-- Avoid any type of solutions in this user story -->
<!-- replace _{{...}}_ with your own words or remove -->
#### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
DDS Jenkins user
> I need to:
see the projects I am only permitted to see in Jenkins
> so that I can:
fulfill my devops tasks without access to tasks I am not permitted to see
#### Additional information
<!-- {{Please provide any additional information or resources for reference}} -->
- This has already been completed, just creating an issue to track this
#### Acceptance criteria
- [x] Update Jenkins to project based security matrix
- [x] Create bluegroups as needed
- [x] Allow inheritance of Jenkins-wide access, and create bluegroups for project specific access
- [x] Create bluegroup for IBM.com Reboot and only provide access for IBM.com Reboot specific tasks | 1.0 | | non_test | create project based security matrix for jenkins user story as a dds jenkins user i need to see the projects i am only permitted to see in jenkins so that i can fulfill my devops tasks without access to tasks i am not permitted to see additional information this has already been completed just creating an issue to track this acceptance criteria update jenkins to project based security matrix create bluegroups as needed allow inheritance of jenkins wide access and create bluegroups for project specific access create bluegroup for ibm com reboot and only provide access for ibm com reboot specific tasks | 0 |
679,937 | 23,251,047,920 | IssuesEvent | 2022-08-04 03:47:33 | matrixorigin/matrixone | https://api.github.com/repos/matrixorigin/matrixone | opened | [Feature Request]: DDL Transfer tool from MySQL to MO | priority/p1 kind/feature source/on-demand | ### Is there an existing issue for the same feature request?
- [X] I have checked the existing issues.
### Is your feature request related to a problem?
_No response_
### Describe the feature you'd like
Please provide a tool including these features
a. Input DDL statements from mysqldump then it will be transferred a runnable one in MO.
b. Support a parameter for case sensitive. Users can decide whether the transferred DDL to be case sensitive or not.
### Describe implementation you've considered
_No response_
### Documentation, Adoption, Use Case, Migration Strategy
_No response_
### Additional information
_No response_ | 1.0 | | non_test | ddl transfer tool from mysql to mo is there an existing issue for the same feature request i have checked the existing issues is your feature request related to a problem no response describe the feature you d like please provide a tool including these features a input ddl statements from mysqldump then it will be transferred a runnable one in mo b support a parameter for case sensitive users can decide whether the transferred ddl to be case sensitive or not describe implementation you ve considered no response documentation adoption use case migration strategy no response additional information no response | 0 |
301,535 | 26,072,613,079 | IssuesEvent | 2022-12-24 02:21:49 | KeYProject/key | https://api.github.com/repos/KeYProject/key | closed | Test case generation hangs | 🐛 Bug P:NORMAL Test Case Generator | 
This issue was created at [git.key-project.org](https://git.key-project.org/key/key/-/issues/1183) where the discussions are preserved.
----
* Mantis: [MT-1016](http://i12www.ira.uka.de/~klebanov/mantis/view.php?id=1016)
* Submitted on: 2010-08-26 by @gpaganelli
* Updated: 2013-01-18
* Assigned to: @gpaganelli
### Description
> When proved method xor in the attached file and trying to generate test cases, the test case generation dialog hangs in computing the traces.
>
>
>
### Steps to reproduce
```
Load file LogicalComponents.java
Prove method xor
Generate test cases
A null-pointer exception will be thrown.
```
### Additional Information
```
1) The 1.2 version of VBT does not exhibit this bug.
2) The problem seems to happen when defining xor by means of the method or and and defined in the same class. If one writes the xor directly with the logical operators everything works fine.
3) The exception trace:
Exception in thread "Collecting methods from the proof tree" java.lang.NullPointerException
at de.uka.ilkd.key.java.reference.MethodReference.getKeYJavaType(MethodReference.java:352)
at de.uka.ilkd.key.java.TypeConverter.getKeYJavaType(TypeConverter.java:562)
at de.uka.ilkd.key.java.reference.MethodReference.getMethodSignature(MethodReference.java:262)
at de.uka.ilkd.key.java.reference.MethodReference.method(MethodReference.java:292)
at de.uka.ilkd.key.visualization.ExecutionTraceModelForTesting.getProgramMethods(ExecutionTraceModelForTesting.java:61)
at de.uka.ilkd.key.unittest.UnitTestBuilder.getProgramMethods(UnitTestBuilder.java:437)
at de.uka.ilkd.key.unittest.UnitTestBuilder.getProgramMethods(UnitTestBuilder.java:116)
at de.uka.ilkd.key.unittest.UnitTestBuilderGUIInterface$MethodListComputation.run(UnitTestBuilderGUIInterface.java:178)
at java.lang.Thread.run(Thread.java:619)
```
## Files
`LogicalComponents.java`:
```java
public class LogicalComponents {
/*@
requires true;
ensures \result == (a&b);
@*/
public static boolean and(boolean a, boolean b){
return a & b;
}
/*@
requires true;
ensures \result == (a|b);
@*/
public static boolean or(boolean a, boolean b){
return a | b;
}
/*@
requires true;
ensures \result == ((a&!b)|(!a&b));
@*/
public static boolean xor(boolean a, boolean b){
return or(and(!a,b),and(a,!b));
//return (!a&b)|(a&!b);
}
/*@
requires true;
ensures \result == (a&&b);
@*/
public static boolean shortCircuitAnd(boolean a, boolean b){
return a && b;
}
/*@
requires true;
ensures \result == (a||b);
@*/
public static boolean shortCircuitOr(boolean a, boolean b){
return a || b;
}
/*@
requires true;
ensures \result == ((a&&!b)||(!a&&b));
@*/
public static boolean shortCircuitXor(boolean a, boolean b){
return (!a&&b) || (a&&!b);
}
}
```

[GeneratedTestFiles.zip](/uploads/1ad46704a41ed2f041cccb9a3195600f/GeneratedTestFiles.zip)
## Notes
_@gladisch at 2010-08-27_
> I do not get this exception, so there is a way to generate your desired test (see the attached Screenshot2.png and GeneratedTestFiles.zip). There are certainly bugs in VBT but I'm not concentrating on them now but rather how you get your desired tests. So here is what I did.
>
> My KeY-Version: KeY Version 1.5.1724 (internal: 73173281dd7151d355de3b847cd8e0d410b6974b)
> 1. ./runProver (without any options)
> 2. Load LogicalComponents.java, select method xor, select ensuresPost.
> 3. Run prover to close the proof. See attached screenshot for strategy configuration.
> 4. Menu: Tools -> Create Unittests:
> 4.1 select the method xor; use the settings as on the screenshot.
> 4.2 Press the buttong "Create Test for Proof"
> 5. In the dialog "Select Test Case" select the desired test file
>
> Hint: If you still have problems, then delete the ".key" directory in your home directory. This helps often but I don't know why.
_@gpaganelli at 2010-09-24_
> Solved in d3c6d292671158bbbfe69531addccda1745830e1
## History
* @gpaganelli -- (`NEW_BUG`) _2010-08-26_
* @gpaganelli -- (`FILE_ADDED`) _2010-08-26_
* @gladisch -- (`FILE_ADDED`) _2010-08-27_
* @gladisch -- (`FILE_ADDED`) _2010-08-27_
* @gladisch -- (`BUGNOTE_ADDED`) _2010-08-27_
* @gpaganelli -- (`DESCRIPTION_UPDATED`) _2010-09-24_
* @gpaganelli -- (`STEP_TO_REPRODUCE_UPDATED`) _2010-09-24_
* @gpaganelli -- (`ADDITIONAL_INFO_UPDATED`) _2010-09-24_
* @gpaganelli -- (`NORMAL_TYPE`) _2010-09-24_
* @gpaganelli -- (`NORMAL_TYPE`) _2010-09-24_
* @gpaganelli -- (`BUGNOTE_ADDED`) _2010-09-24_
* @gpaganelli -- (`BUGNOTE_DELETED`) _2010-09-24_
* @gpaganelli -- (`BUGNOTE_ADDED`) _2010-09-24_
* @gpaganelli -- (`NORMAL_TYPE`) _2010-09-24_
* @gpaganelli -- (`NORMAL_TYPE`) _2010-09-24_
* @grahl -- (`NORMAL_TYPE`) _2013-01-18_
## Attributes
* Category: Test Case Generator
* Status: CLOSED
* Severity: MAJOR
* OS:
* Target Version:
* Resolution: FIXED
* Priority: NORMAL
* Reproducibility: ALWAYS
* Platform:
* Commit: None
* Build:
* Tags []
* Labels: ~Test Case Generator ~Bug ~NORMAL
---
[View in Mantis](http://i12www.ira.uka.de/~klebanov/mantis/view.php?id=1016)
----
Information:
* created_at: 2017-05-29T02:58:31.217Z
* updated_at: 2017-05-29T02:58:31.815Z
* closed_at: 2017-05-29T02:58:31.779Z (closed_by: )
* milestone:
* user_notes_count: 0
| 1.0 |
| test | test case generation hangs this issue was created at where the discussions are preserved mantis submitted on by gpaganelli updated assigned to gpaganelli description when proved method xor in the attached file and trying to generate test cases the test case generation dialog hangs in computing the traces steps to reproduce load file logicalcomponents java prove method xor generate test cases a null pointer exception will be thrown additional information the version of vbt does not exhibit this bug the problem seems to happen when defining xor by means of the method or and and defined in the same class if one writes the xor directly with the logical operators everything works fine the exception trace exception in thread collecting methods from the proof tree java lang nullpointerexception at de uka ilkd key java reference methodreference getkeyjavatype methodreference java at de uka ilkd key java typeconverter getkeyjavatype typeconverter java at de uka ilkd key java reference methodreference getmethodsignature methodreference java at de uka ilkd key java reference methodreference method methodreference java at de uka ilkd key visualization executiontracemodelfortesting getprogrammethods executiontracemodelfortesting java at de uka ilkd key unittest unittestbuilder getprogrammethods unittestbuilder java at de uka ilkd key unittest unittestbuilder getprogrammethods unittestbuilder java at de uka ilkd key unittest unittestbuilderguiinterface methodlistcomputation run unittestbuilderguiinterface java at java lang thread run thread java files logicalcomponents java java public class logicalcomponents requires true ensures result a b public static boolean and boolean a boolean b return a b requires true ensures result a b public static boolean or boolean a boolean b return a b requires true ensures result a b a b public static boolean xor boolean a boolean b return or and a b and a b return a b a b requires true ensures result a b public static boolean 
shortcircuitand boolean a boolean b return a b requires true ensures result a b public static boolean shortcircuitor boolean a boolean b return a b requires true ensures result a b a b public static boolean shortcircuitxor boolean a boolean b return a b a b uploads png uploads generatedtestfiles zip notes gladisch at i do not get this exception so there is a way to generate your desired test see the attached png and generatedtestfiles zip there are certainly bugs in vbt but i m not concentrating on them now but rather how you get your desired tests so here is what i did my key version key version internal runprover without any options load logicalcomponents java select method xor select ensurespost run prover to close the proof see attached screenshot for strategy configuration menu tools create unittests select the method xor use the settings as on the screenshot press the buttong create test for proof in the dialog select test case select the desired test file hint if you still have problems then delete the key directory in your home directory this helps often but i don t know why gpaganelli at solved in history gpaganelli new bug gpaganelli file added gladisch file added gladisch file added gladisch bugnote added gpaganelli description updated gpaganelli step to reproduce updated gpaganelli additional info updated gpaganelli normal type gpaganelli normal type gpaganelli bugnote added gpaganelli bugnote deleted gpaganelli bugnote added gpaganelli normal type gpaganelli normal type grahl normal type attributes category test case generator status closed severity major os target version resolution fixed priority normal reproducibility always platform commit none build tags labels test case generator bug normal information created at updated at closed at closed by milestone user notes count | 1 |
167,644 | 14,115,252,178 | IssuesEvent | 2020-11-07 19:56:12 | SamFan808/Chuck_Roast | https://api.github.com/repos/SamFan808/Chuck_Roast | closed | README | documentation | Have a quality README (with a unique name, description, technologies used, screenshot, and link to deployed application). | 1.0 | | non_test | readme have a quality readme with a unique name description technologies used screenshot and link to deployed application | 0 |
28,179 | 23,070,873,767 | IssuesEvent | 2022-07-25 17:56:08 | sequentech/roadmap | https://api.github.com/repos/sequentech/roadmap | opened | Automate marking old issues as stale | infrastructure | We should automatically mark old issues as stale like Ory Kratos does, to manage the backlog properly. This should apply to all repositories. See the example here: https://github.com/ory/kratos/blob/master/.github/workflows/stale.yml | 1.0 | | non_test | automate marking old issues as stale we should automatically mark old issues as stale like ory kratos does to manage the backlog properly this should apply to all repositories see the example here | 0 |
103,373 | 11,355,121,867 | IssuesEvent | 2020-01-24 19:16:27 | opencodeiiita/NerdSpace | https://api.github.com/repos/opencodeiiita/NerdSpace | opened | Add password authentication | Points:30 documentation enhancement | Using firebase enable sign up/sign in functionality.
On successfull login make sure you also redirect to dashboard.
Make sure proper error handling is done(either by logging or alerting).
**Tip:** On successfully sign up one can see user details on firebase console. | 1.0 | | non_test | add password authentication using firebase enable sign up sign in functionality on successfull login make sure you also redirect to dashboard make sure proper error handling is done either by logging or alerting tip on successfully sign up one can see user details on firebase console | 0 |
92,007 | 8,336,265,683 | IssuesEvent | 2018-09-28 07:09:01 | jenetics/jenetics | https://api.github.com/repos/jenetics/jenetics | opened | Revise statistical tests | testing | Because of #416, which was not detected by the current tests, it is a good idea to check the existing _statistical_ tests for hidden errors. It's also a good possibility to make this tests more robust. They are still failing once in a while.
| 1.0 |
| test | revise statistical tests because of which was not detected by the current tests it is a good idea to check the existing statistical tests for hidden errors it s also a good possibility to make this tests more robust they are still failing once in a while | 1 |
169,263 | 13,131,619,812 | IssuesEvent | 2020-08-06 17:21:06 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | [test-failed]: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/elasticsearch/nodes·js - Monitoring app Elasticsearch nodes listing with only online nodes "before all" hook for "should have an Elasticsearch Cluster Summary Status with correct info" | failed-test test-cloud | **Version: 7.9.0**
**Class: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/elasticsearch/nodes·js**
**Stack Trace:**
Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="clusterItemContainerElasticsearch"] [data-test-subj="esNumberOfNodes"])
Wait timed out after 10023ms
at /var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/node_modules/selenium-webdriver/lib/webdriver.js:842:17
at process._tickCallback (internal/process/next_tick.js:68:7)
at onFailure (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:28:9)
at retryForSuccess (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:68:13)
_Test Report: https://internal-ci.elastic.co/view/Stack%20Tests/job/elastic+estf-cloud-kibana-tests/495/testReport/_ | 2.0 | [test-failed]: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/elasticsearch/nodesยทjs - Monitoring app Elasticsearch nodes listing with only online nodes "before all" hook for "should have an Elasticsearch Cluster Summary Status with correct info" - **Version: 7.9.0**
**Class: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/elasticsearch/nodesยทjs**
**Stack Trace:**
Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="clusterItemContainerElasticsearch"] [data-test-subj="esNumberOfNodes"])
Wait timed out after 10023ms
at /var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/node_modules/selenium-webdriver/lib/webdriver.js:842:17
at process._tickCallback (internal/process/next_tick.js:68:7)
at onFailure (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:28:9)
at retryForSuccess (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:68:13)
_Test Report: https://internal-ci.elastic.co/view/Stack%20Tests/job/elastic+estf-cloud-kibana-tests/495/testReport/_ | test | chrome x pack ui functional x pack test functional apps monitoring elasticsearch nodesยทjs monitoring app elasticsearch nodes listing with only online nodes before all hook for should have an elasticsearch cluster summary status with correct info version class chrome x pack ui functional x pack test functional apps monitoring elasticsearch nodesยทjs stack trace error retry try timeout timeouterror waiting for element to be located by css selector wait timed out after at var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node linux immutable ci cloud common build kibana node modules selenium webdriver lib webdriver js at process tickcallback internal process next tick js at onfailure var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node linux immutable ci cloud common build kibana test common services retry retry for success ts at retryforsuccess var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node linux immutable ci cloud common build kibana test common services retry retry for success ts test report | 1 |
64,637 | 6,916,092,002 | IssuesEvent | 2017-11-29 00:36:34 | elastic/beats | https://api.github.com/repos/elastic/beats | opened | Jenkins fails on Windows builds because it can't start | :Testing | Sometimes windows builds fail during starting the the unit tests. The error looks as following:
```
00:01:12 runbld>>> runbld started
00:01:12 runbld>>> 1.5.7/18231c8c4ae2e83bf8e36dbaf339b31e8e3cbcfc
00:01:16 runbld>>> Debug logging enabled.
00:01:16 runbld>>> BUILD: https://5086a1f436ee16623a447bdf25881bbc.us-east-1.aws.found.io:9243/build-1498188341992/t/20171129000116-75E0107D
00:01:17 runbld>>> Adding vcs info for the latest commit: 859ca6431471f7aeba28f7595c0e56f76b15a41f
00:01:18 runbld>>> >>>>>>>>>>>> SCRIPT EXECUTION BEGIN >>>>>>>>>>>>
00:01:19 Building libbeat
00:01:35 Unit testing libbeat
00:02:12 exec : Unit test FAILURE
00:02:12 At C:\Users\jenkins\workspace\elastic+beats+pull-request+multijob-windows\beat\libbeat\label\windows\src\github.com\ela
00:02:12 stic\beats\dev-tools\jenkins_ci.ps1:50 char:1
00:02:12 + exec { Get-Content build/TEST-go-unit.out | go-junit-report.exe -set-exit-code | ...
00:02:12 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
00:02:12 + CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
00:02:12 + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Exec
00:02:12
00:02:13 runbld>>> <<<<<<<<<<<< SCRIPT EXECUTION END <<<<<<<<<<<<
00:02:13 runbld>>> DURATION: 54834ms
00:02:13 runbld>>> STDOUT: 38 bytes
00:02:13 runbld>>> STDERR: 531 bytes
00:02:13 runbld>>> WRAPPED PROCESS: FAILURE (1)
00:02:13 runbld>>> Searching for build metadata in C:\Users\jenkins\workspace\elastic+beats+pull-request+multijob-windows\beat\libbeat\label\windows
00:02:18 runbld>>> Storing build metadata:
00:02:22 runbld>>> FAILURES: 1
00:02:23 runbld>>> BUILD: https://5086a1f436ee16623a447bdf25881bbc.us-east-1.aws.found.io:9243/build-1498188341992/t/20171129000116-75E0107D
```
An example of such a failed build can be found here: https://beats-ci.elastic.co/job/elastic+beats+pull-request+multijob-windows/2301/beat=libbeat,label=windows/console
@andrewkroh @exekias @elasticdog ideas on how to fix it more then welcome. | 1.0 | Jenkins fails on Windows builds because it can't start - Sometimes windows builds fail during starting the the unit tests. The error looks as following:
```
00:01:12 runbld>>> runbld started
00:01:12 runbld>>> 1.5.7/18231c8c4ae2e83bf8e36dbaf339b31e8e3cbcfc
00:01:16 runbld>>> Debug logging enabled.
00:01:16 runbld>>> BUILD: https://5086a1f436ee16623a447bdf25881bbc.us-east-1.aws.found.io:9243/build-1498188341992/t/20171129000116-75E0107D
00:01:17 runbld>>> Adding vcs info for the latest commit: 859ca6431471f7aeba28f7595c0e56f76b15a41f
00:01:18 runbld>>> >>>>>>>>>>>> SCRIPT EXECUTION BEGIN >>>>>>>>>>>>
00:01:19 Building libbeat
00:01:35 Unit testing libbeat
00:02:12 exec : Unit test FAILURE
00:02:12 At C:\Users\jenkins\workspace\elastic+beats+pull-request+multijob-windows\beat\libbeat\label\windows\src\github.com\ela
00:02:12 stic\beats\dev-tools\jenkins_ci.ps1:50 char:1
00:02:12 + exec { Get-Content build/TEST-go-unit.out | go-junit-report.exe -set-exit-code | ...
00:02:12 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
00:02:12 + CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
00:02:12 + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Exec
00:02:12
00:02:13 runbld>>> <<<<<<<<<<<< SCRIPT EXECUTION END <<<<<<<<<<<<
00:02:13 runbld>>> DURATION: 54834ms
00:02:13 runbld>>> STDOUT: 38 bytes
00:02:13 runbld>>> STDERR: 531 bytes
00:02:13 runbld>>> WRAPPED PROCESS: FAILURE (1)
00:02:13 runbld>>> Searching for build metadata in C:\Users\jenkins\workspace\elastic+beats+pull-request+multijob-windows\beat\libbeat\label\windows
00:02:18 runbld>>> Storing build metadata:
00:02:22 runbld>>> FAILURES: 1
00:02:23 runbld>>> BUILD: https://5086a1f436ee16623a447bdf25881bbc.us-east-1.aws.found.io:9243/build-1498188341992/t/20171129000116-75E0107D
```
An example of such a failed build can be found here: https://beats-ci.elastic.co/job/elastic+beats+pull-request+multijob-windows/2301/beat=libbeat,label=windows/console
@andrewkroh @exekias @elasticdog ideas on how to fix it more then welcome. | test | jenkins fails on windows builds because it can t start sometimes windows builds fail during starting the the unit tests the error looks as following runbld runbld started runbld runbld debug logging enabled runbld build runbld adding vcs info for the latest commit runbld script execution begin building libbeat unit testing libbeat exec unit test failure at c users jenkins workspace elastic beats pull request multijob windows beat libbeat label windows src github com ela stic beats dev tools jenkins ci char exec get content build test go unit out go junit report exe set exit code categoryinfo notspecified writeerrorexception fullyqualifiederrorid microsoft powershell commands writeerrorexception exec runbld script execution end runbld duration runbld stdout bytes runbld stderr bytes runbld wrapped process failure runbld searching for build metadata in c users jenkins workspace elastic beats pull request multijob windows beat libbeat label windows runbld storing build metadata runbld failures runbld build an example of such a failed build can be found here andrewkroh exekias elasticdog ideas on how to fix it more then welcome | 1 |
198,241 | 14,969,490,736 | IssuesEvent | 2021-01-27 18:13:28 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | opened | Search smoke test failing | search smoke-test-failure | https://dev.azure.com/monacotools/Monaco/_build/results?buildId=102321&view=logs&j=3792f238-f35e-5f82-0dbc-272432d9a0fb&t=b4255493-5d6b-5e5c-80d7-ebab0307fbf4
1) VSCode Smoke Tests (Web)
VSCode Smoke Tests (Web)
Search
dismisses result & checks for correct result number:
Error: Timeout: get text content '.search-view .messages .message>span' after 20 seconds.
at /Users/runner/work/1/s/test/automation/out/code.js:181:23
at Generator.next (<anonymous>)
at fulfilled (/Users/runner/work/1/s/test/automation/out/code.js:9:58)
| 1.0 | Search smoke test failing - https://dev.azure.com/monacotools/Monaco/_build/results?buildId=102321&view=logs&j=3792f238-f35e-5f82-0dbc-272432d9a0fb&t=b4255493-5d6b-5e5c-80d7-ebab0307fbf4
1) VSCode Smoke Tests (Web)
VSCode Smoke Tests (Web)
Search
dismisses result & checks for correct result number:
Error: Timeout: get text content '.search-view .messages .message>span' after 20 seconds.
at /Users/runner/work/1/s/test/automation/out/code.js:181:23
at Generator.next (<anonymous>)
at fulfilled (/Users/runner/work/1/s/test/automation/out/code.js:9:58)
| test | search smoke test failing vscode smoke tests web vscode smoke tests web search dismisses result checks for correct result number error timeout get text content search view messages message span after seconds at users runner work s test automation out code js at generator next at fulfilled users runner work s test automation out code js | 1 |
403,250 | 27,407,271,361 | IssuesEvent | 2023-03-01 08:00:29 | allure-framework/allure-python | https://api.github.com/repos/allure-framework/allure-python | closed | please add an example for the exclusion of tests' parameters from the calculation of historyID | type:documentation | **Is your feature request related to a problem? Please describe.**
- to describe the usage of parameters in python based test frameworks
- there is a feature allowing excluding some parameters from historyID calculation which useful but not well documented
**Describe the solution you'd like**
- add examples of parameters usage to readme.md
- add examples of excluding parameters from the historyID calc to readme.md
see [this one as a reference](https://github.com/allure-framework/allure-js/blob/master/packages/allure-playwright/README.md)
**Describe alternatives you've considered**
- add examples of parameters usage to readme.md
- add examples of excluding parameters from the historyID calc to readme.md
**Additional context**
N/A
in future add these to the Allure report documentation
| 1.0 | please add an example for the exclusion of tests' parameters from the calculation of historyID - **Is your feature request related to a problem? Please describe.**
- to describe the usage of parameters in python based test frameworks
- there is a feature allowing excluding some parameters from historyID calculation which useful but not well documented
**Describe the solution you'd like**
- add examples of parameters usage to readme.md
- add examples of excluding parameters from the historyID calc to readme.md
see [this one as a reference](https://github.com/allure-framework/allure-js/blob/master/packages/allure-playwright/README.md)
**Describe alternatives you've considered**
- add examples of parameters usage to readme.md
- add examples of excluding parameters from the historyID calc to readme.md
**Additional context**
N/A
in future add these to the Allure report documentation
| non_test | please add an example for the exclusion of tests parameters from the calculation of historyid is your feature request related to a problem please describe to describe the usage of parameters in python based test frameworks there is a feature allowing excluding some parameters from historyid calculation which useful but not well documented describe the solution you d like add examples of parameters usage to readme md add examples of excluding parameters from the historyid calc to readme md see describe alternatives you ve considered add examples of parameters usage to readme md add examples of excluding parameters from the historyid calc to readme md additional context n a in future add these to the allure report documentation | 0 |
170,619 | 26,990,136,141 | IssuesEvent | 2023-02-09 19:10:57 | dailyAuction/project_daily_auction | https://api.github.com/repos/dailyAuction/project_daily_auction | closed | [FE] ๋ฑ๋ก ๊ฒฝ๋งค ํ์ด์ง ํผ๋ธ๋ฆฌ์ฑ | Front Design | ## Description
<img width="136" alt="แแฎแแฆ 46" src="https://user-images.githubusercontent.com/107684690/217871029-25bba564-b8a3-4ecc-97a4-aa32298f9dc7.png">
## Todo
- [x] ํผ๋ธ๋ฆฌ์ฑ
## ETC
- ์ฃผ์ ์ฌํญ ํน์ ๊ธฐํ ๋ด์ฉ
| 1.0 | [FE] ๋ฑ๋ก ๊ฒฝ๋งค ํ์ด์ง ํผ๋ธ๋ฆฌ์ฑ - ## Description
<img width="136" alt="แแฎแแฆ 46" src="https://user-images.githubusercontent.com/107684690/217871029-25bba564-b8a3-4ecc-97a4-aa32298f9dc7.png">
## Todo
- [x] ํผ๋ธ๋ฆฌ์ฑ
## ETC
- ์ฃผ์ ์ฌํญ ํน์ ๊ธฐํ ๋ด์ฉ
| non_test | ๋ฑ๋ก ๊ฒฝ๋งค ํ์ด์ง ํผ๋ธ๋ฆฌ์ฑ description img width alt แแ
ฎแแ
ฆ src todo ํผ๋ธ๋ฆฌ์ฑ etc ์ฃผ์ ์ฌํญ ํน์ ๊ธฐํ ๋ด์ฉ | 0 |
233,819 | 25,778,998,085 | IssuesEvent | 2022-12-09 14:24:07 | apache/pulsar | https://api.github.com/repos/apache/pulsar | closed | Function worker fails to be authenticated when TLS authentication is enabled in Pulsar standalone | type/bug component/function component/security lifecycle/stale | **Describe the bug**
When TLS authentication is enabled in Pulsar 2.4.1 and 2.4.0 and I start Pulsar standalone cluster, it fails to start
**To Reproduce**
Steps to reproduce the behavior:
1. Follow the Pulsar 2.4.1 TLS encryption and authentication documentation to sign both client and broker certificate.
2. Set below config in Standalone.conf
brokerServicePort=6650
brokerServicePortTls=6651
tlsCertRefreshCheckDurationSec=300
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls
authorizationEnabled=true
authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
authorizationAllowWildcardsMatching=false
superUserRoles=admin
brokerClientTlsEnabled=true
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters=tlsCertFile:/path/to/admin.cert.pem,tlsKeyFile:/path/to/admin.key-pk8.pem
brokerClientTrustCertsFilePath=/path/to/ca.cert.pem
3. In function_worker.yml
pulsarServiceUrl: pulsar+ssl://localhost:6651
pulsarWebServiceUrl: https://localhost:8443
4. Run bin/pulsar standalone. See error below and Pulsar standalone cluster will not start up.
19:00:17.861 [pulsar-io-50-1] WARN org.apache.pulsar.broker.service.ServerCnx - [/10.101.2.121:55710] Unable to authenticate
javax.naming.AuthenticationException: Client unable to authenticate with TLS certificate
at org.apache.pulsar.broker.authentication.AuthenticationProviderTls.authenticate(AuthenticationProviderTls.java:86) ~[org.apache.pulsar-pulsar-broker-common-2.4.1.jar:2.4.1]
at org.apache.pulsar.broker.authentication.OneStageAuthenticationState.<init>(OneStageAuthenticationState.java:46) ~[org.apache.pulsar-pulsar-broker-common-2.4.1.jar:2.4.1]
at org.apache.pulsar.broker.authentication.AuthenticationProvider.newAuthState(AuthenticationProvider.java:76) ~[org.apache.pulsar-pulsar-broker-common-2.4.1.jar:2.4.1]
at org.apache.pulsar.broker.service.ServerCnx.handleConnect(ServerCnx.java:549) [org.apache.pulsar-pulsar-broker-2.4.1.jar:2.4.1]
at org.apache.pulsar.common.protocol.PulsarDecoder.channelRead(PulsarDecoder.java:143) [org.apache.pulsar-pulsar-common-2.4.1.jar:2.4.1]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:591) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:508) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_231]
**Expected behavior**
Run bin/pulsar standalone will start Pulsar standalone cluster without any exception. Client shall be able to connect to pulsar via TLS authentication.
**Screenshots**
<img width="1440" alt="Screenshot 2019-11-05 at 18 51 26" src="https://user-images.githubusercontent.com/40153224/68237683-88e58f80-ffff-11e9-8b41-9a5e80043e4e.png">
**Desktop (please complete the following information):**
- OS: macOS and Linux
**Additional context**
There is a walk around for this issue. Add useTls to true in function_worker.yml will enable Pulsar standalone to start. I find this field is depreciated in the recent releases but it is still used to set up function worker configuration.
| True | Function worker fails to be authenticated when TLS authentication is enabled in Pulsar standalone - **Describe the bug**
When TLS authentication is enabled in Pulsar 2.4.1 and 2.4.0 and I start Pulsar standalone cluster, it fails to start
**To Reproduce**
Steps to reproduce the behavior:
1. Follow the Pulsar 2.4.1 TLS encryption and authentication documentation to sign both client and broker certificate.
2. Set below config in Standalone.conf
brokerServicePort=6650
brokerServicePortTls=6651
tlsCertRefreshCheckDurationSec=300
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls
authorizationEnabled=true
authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
authorizationAllowWildcardsMatching=false
superUserRoles=admin
brokerClientTlsEnabled=true
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters=tlsCertFile:/path/to/admin.cert.pem,tlsKeyFile:/path/to/admin.key-pk8.pem
brokerClientTrustCertsFilePath=/path/to/ca.cert.pem
3. In function_worker.yml
pulsarServiceUrl: pulsar+ssl://localhost:6651
pulsarWebServiceUrl: https://localhost:8443
4. Run bin/pulsar standalone. See error below and Pulsar standalone cluster will not start up.
19:00:17.861 [pulsar-io-50-1] WARN org.apache.pulsar.broker.service.ServerCnx - [/10.101.2.121:55710] Unable to authenticate
javax.naming.AuthenticationException: Client unable to authenticate with TLS certificate
at org.apache.pulsar.broker.authentication.AuthenticationProviderTls.authenticate(AuthenticationProviderTls.java:86) ~[org.apache.pulsar-pulsar-broker-common-2.4.1.jar:2.4.1]
at org.apache.pulsar.broker.authentication.OneStageAuthenticationState.<init>(OneStageAuthenticationState.java:46) ~[org.apache.pulsar-pulsar-broker-common-2.4.1.jar:2.4.1]
at org.apache.pulsar.broker.authentication.AuthenticationProvider.newAuthState(AuthenticationProvider.java:76) ~[org.apache.pulsar-pulsar-broker-common-2.4.1.jar:2.4.1]
at org.apache.pulsar.broker.service.ServerCnx.handleConnect(ServerCnx.java:549) [org.apache.pulsar-pulsar-broker-2.4.1.jar:2.4.1]
at org.apache.pulsar.common.protocol.PulsarDecoder.channelRead(PulsarDecoder.java:143) [org.apache.pulsar-pulsar-common-2.4.1.jar:2.4.1]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:591) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:508) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_231]
**Expected behavior**
Run bin/pulsar standalone will start Pulsar standalone cluster without any exception. Client shall be able to connect to pulsar via TLS authentication.
**Screenshots**
<img width="1440" alt="Screenshot 2019-11-05 at 18 51 26" src="https://user-images.githubusercontent.com/40153224/68237683-88e58f80-ffff-11e9-8b41-9a5e80043e4e.png">
**Desktop (please complete the following information):**
- OS: macOS and Linux
**Additional context**
There is a walk around for this issue. Add useTls to true in function_worker.yml will enable Pulsar standalone to start. I find this field is depreciated in the recent releases but it is still used to set up function worker configuration.
| non_test | function worker fails to be authenticated when tls authentication is enabled in pulsar standalone describe the bug when tls authentication is enabled in pulsar and and i start pulsar standalone cluster it fails to start to reproduce steps to reproduce the behavior follow the pulsar tls encryption and authentication documentation to sign both client and broker certificate set below config in standalone conf brokerserviceport brokerserviceporttls tlscertrefreshcheckdurationsec tlscertificatefilepath path to broker cert pem tlskeyfilepath path to broker key pem tlstrustcertsfilepath path to ca cert pem authenticationproviders org apache pulsar broker authentication authenticationprovidertls authorizationenabled true authorizationprovider org apache pulsar broker authorization pulsarauthorizationprovider authorizationallowwildcardsmatching false superuserroles admin brokerclienttlsenabled true brokerclientauthenticationplugin org apache pulsar client impl auth authenticationtls brokerclientauthenticationparameters tlscertfile path to admin cert pem tlskeyfile path to admin key pem brokerclienttrustcertsfilepath path to ca cert pem in function worker yml pulsarserviceurl pulsar ssl localhost pulsarwebserviceurl run bin pulsar standalone see error below and pulsar standalone cluster will not start up warn org apache pulsar broker service servercnx unable to authenticate javax naming authenticationexception client unable to authenticate with tls certificate at org apache pulsar broker authentication authenticationprovidertls authenticate authenticationprovidertls java at org apache pulsar broker authentication onestageauthenticationstate onestageauthenticationstate java at org apache pulsar broker authentication authenticationprovider newauthstate authenticationprovider java at org apache pulsar broker service servercnx handleconnect servercnx java at org apache pulsar common protocol pulsardecoder channelread pulsardecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler codec bytetomessagedecoder firechannelread bytetomessagedecoder java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java at io netty channel nio nioeventloop processselectedkey nioeventloop java at io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java at io netty channel nio nioeventloop processselectedkeys nioeventloop java at io netty channel nio nioeventloop run nioeventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java expected behavior run bin pulsar standalone will start pulsar standalone cluster without any exception client shall be able to connect to pulsar via tls authentication screenshots img width alt screenshot at src desktop please complete the following information os macos and linux additional context there is a walk around for this issue add usetls to true in function worker yml will enable pulsar standalone to start i find this field is depreciated in the recent releases but it is still used to set up function worker configuration | 0 |
682,130 | 23,333,745,466 | IssuesEvent | 2022-08-09 08:09:25 | redhat-developer/vscode-openshift-tools | https://api.github.com/repos/redhat-developer/vscode-openshift-tools | reopened | Support proxy-url from `~/.kube/config` | kind/enhancement priority/critical | **Environment**
VS Code version: 1.68.1
OS: win32
Extension version: 0.4.0
**Description**
I have setup `proxy-url` in `~/.kube/config`.
```
apiVersion: v1
clusters:
- cluster:
proxy-url: socks5://localhost:1080
```
Usage description: https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/#client-configuration
This works as it should with `oc` from command line.
```
~ $ oc version
Client Version: 4.10.0-0.okd-2022-06-10-131327
Server Version: 4.8.17
Kubernetes Version: v1.21.1+6438632
```
But the extension is not using this property. | 1.0 | Support proxy-url from `~/.kube/config` - **Environment**
VS Code version: 1.68.1
OS: win32
Extension version: 0.4.0
**Description**
I have setup `proxy-url` in `~/.kube/config`.
```
apiVersion: v1
clusters:
- cluster:
proxy-url: socks5://localhost:1080
```
Usage description: https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/#client-configuration
This works as it should with `oc` from command line.
```
~ $ oc version
Client Version: 4.10.0-0.okd-2022-06-10-131327
Server Version: 4.8.17
Kubernetes Version: v1.21.1+6438632
```
But the extension is not using this property. | non_test | support proxy url from kube config environment vs code version os extension version description i have setup proxy url in kube config apiversion clusters cluster proxy url localhost usage description this works as it should with oc from command line oc version client version okd server version kubernetes version but the extension is not using this property | 0 |
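The record above reports that `oc` honors `proxy-url` while the extension ignores it. As an editorial sketch (not code from the extension or any real Kubernetes client library — the function name and sample dict are illustrative), the helper below shows the lookup a client has to perform: walk the parsed kubeconfig and resolve `proxy-url` for a named cluster.

```python
def proxy_url_for_cluster(kubeconfig: dict, cluster_name: str):
    """Return the proxy-url configured for `cluster_name`, or None.

    `kubeconfig` is the already-parsed form of ~/.kube/config; only the
    `clusters` list (shaped like the YAML snippet in the report) is consulted.
    """
    for entry in kubeconfig.get("clusters", []):
        if entry.get("name") == cluster_name:
            return entry.get("cluster", {}).get("proxy-url")
    return None

# Parsed equivalent of the YAML snippet from the report (cluster name added
# for lookup; "okd" is an assumed, illustrative name).
config = {
    "apiVersion": "v1",
    "clusters": [
        {"name": "okd", "cluster": {"proxy-url": "socks5://localhost:1080"}},
    ],
}

print(proxy_url_for_cluster(config, "okd"))  # socks5://localhost:1080
```

A client that skips this lookup behaves exactly as described in the report: CLI tools reading the same file work, while the extension ignores the proxy.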
258,185 | 22,290,969,142 | IssuesEvent | 2022-06-12 11:03:09 | stacks-network/stacks-blockchain | https://api.github.com/repos/stacks-network/stacks-blockchain | closed | Mocknet pox endpoint not working | bug stale testnet | ## Describe the bug
`/v2/pox` endpoint not working when running in mocknet.
## Steps To Reproduce
Booting mocknet on develop 7ba919a52, running
`cargo run --release --bin=stacks-node --package=stacks-node -- mocknet`
Waited for it to start producing coinbase blocks.
`curl "http://127.0.0.1:20443/v2/pox"` Responds with 500 error
```
HTTP/1.1 500 Internal Server Error
Access-Control-Allow-Headers: origin, content-type
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Origin: *
Content-Length: 25
Content-Type: text/plain
Date: Wed, Apr 21 2021 11:55:56 GMT
Server: stacks/2.0
X-Request-Id: 2564431540
Failed to query peer info
```
cc/ @zone117x | 1.0 | Mocknet pox endpoint not working - ## Describe the bug
`/v2/pox` endpoint not working when running in mocknet.
## Steps To Reproduce
Booting mocknet on develop 7ba919a52, running
`cargo run --release --bin=stacks-node --package=stacks-node -- mocknet`
Waited for it to start producing coinbase blocks.
`curl "http://127.0.0.1:20443/v2/pox"` Responds with 500 error
```
HTTP/1.1 500 Internal Server Error
Access-Control-Allow-Headers: origin, content-type
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Origin: *
Content-Length: 25
Content-Type: text/plain
Date: Wed, Apr 21 2021 11:55:56 GMT
Server: stacks/2.0
X-Request-Id: 2564431540
Failed to query peer info
```
cc/ @zone117x | test | mocknet pox endpoint not working describe the bug pox endpoint not working when running in mocknet steps to reproduce booting mocknet on develop running cargo run release bin stacks node package stacks node mocknet waited for it to start producing coinbase blocks curl responds with error http internal server error access control allow headers origin content type access control allow methods post get options access control allow origin content length content type text plain date wed apr gmt server stacks x request id failed to query peer info cc | 1 |
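The pasted `/v2/pox` failure can be checked mechanically in a regression script. The parser below is a hedged, simplified helper (not part of stacks-node; it assumes LF line endings as in the pasted log, rather than the CRLF of wire-format HTTP) that splits a raw HTTP/1.1 response into status, headers, and body.

```python
def parse_http_response(raw: str):
    """Split a raw HTTP/1.1 response into (status_code, headers, body).

    Simplified: assumes LF line endings and a blank line before the body,
    matching the log pasted in the report.
    """
    head, _, body = raw.partition("\n\n")
    lines = head.strip().splitlines()
    status_code = int(lines[0].split()[1])  # "HTTP/1.1 500 ..." -> 500
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return status_code, headers, body.strip()

# Abbreviated copy of the response from the report.
raw = """HTTP/1.1 500 Internal Server Error
Content-Type: text/plain
Server: stacks/2.0

Failed to query peer info"""

status, headers, body = parse_http_response(raw)
print(status, body)  # 500 Failed to query peer info
```

Asserting on `status == 500` and the `Failed to query peer info` body makes the regression reproducible without eyeballing curl output.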
403,124 | 27,401,371,434 | IssuesEvent | 2023-03-01 01:05:47 | gravitational/teleport | https://api.github.com/repos/gravitational/teleport | opened | Automatic Upgrades Documentation | documentation | Teleport documentation should be updated to reflect the following.
* When using the `Cloud` selector, all references to download links should switch to downloads from `cloud/stable`.
* Create a dedicated page within documentation that outlines how automatic upgrades work at a high level.
* It should also contain a "Danger Zone" block that explain how you can manually "opt-out" of automatic upgrades and what the responsibilities for the customer are.
* It should also contain a a block that explains how to manually recover from a bad release. For example, if 12.2.2 is broken and the customer wants to manually roll back to 12.2.1, what should they do?
In addition, work with @jimbishopp and @evanfreed to create a runbook on what Cloud should do if a release is broken. For example, let's say 12.2.2 is broken what do we do? The answer is update `/version` to point to `12.2.1` and then work to get 12.2.3 out because we don't do rollbacks. Explain why we don't do rollbacks. | 1.0 | Automatic Upgrades Documentation - Teleport documentation should be updated to reflect the following.
* When using the `Cloud` selector, all references to download links should switch to downloads from `cloud/stable`.
* Create a dedicated page within documentation that outlines how automatic upgrades work at a high level.
* It should also contain a "Danger Zone" block that explain how you can manually "opt-out" of automatic upgrades and what the responsibilities for the customer are.
* It should also contain a a block that explains how to manually recover from a bad release. For example, if 12.2.2 is broken and the customer wants to manually roll back to 12.2.1, what should they do?
In addition, work with @jimbishopp and @evanfreed to create a runbook on what Cloud should do if a release is broken. For example, let's say 12.2.2 is broken what do we do? The answer is update `/version` to point to `12.2.1` and then work to get 12.2.3 out because we don't do rollbacks. Explain why we don't do rollbacks. | non_test | automatic upgrades documentation teleport documentation should be updated to reflect the following when using the cloud selector all references to download links should switch to downloads from cloud stable create a dedicated page within documentation that outlines how automatic upgrades work at a high level it should also contain a danger zone block that explain how you can manually opt out of automatic upgrades and what the responsibilities for the customer are it should also contain a a block that explains how to manually recover from a bad release for example if is broken and the customer wants to manually roll back to what should they do in addition work with jimbishopp and evanfreed to create a runbook on what cloud should do if a release is broken for example let s say is broken what do we do the answer is update version to point to and then work to get out because we don t do rollbacks explain why we don t do rollbacks | 0 |
187,753 | 6,760,898,557 | IssuesEvent | 2017-10-24 22:27:47 | 18F/web-design-standards | https://api.github.com/repos/18F/web-design-standards | closed | [Mailing address] Arrow disappears on state dropdown after autofill | [Priority] Minor [Skill] Front end [Status] Help wanted [Type] Bug | On Chrome, when I autofill my address the arrow on the state dropdown disappears.

| 1.0 | [Mailing address] Arrow disappears on state dropdown after autofill - On Chrome, when I autofill my address the arrow on the state dropdown disappears.

| non_test | arrow disappears on state dropdown after autofill on chrome when i autofill my address the arrow on the state dropdown disappears | 0 |
421,872 | 28,363,123,544 | IssuesEvent | 2023-04-12 12:14:07 | Telefonica/mistica-design | https://api.github.com/repos/Telefonica/mistica-design | opened | Image border | library: mobile library: desktop category: documentation platform: ios platform: android platform: web component: image | We need to include the possibility to show a border for Image, to allow stacking images in StackingGroup
- [ ] Update specs
- [ ] Design documentation
- [ ] Add to mobile library
- [ ] Add to desktop library
- [ ] Add to UI Kit
- [ ] Android ticket
- [ ] iOS ticket
- [ ] Web ticket | 1.0 | Image border - We need to include the possibility to show border for Image to allow stack images in StackingGroup
- [ ] Update specs
- [ ] Design documentation
- [ ] Add to mobile library
- [ ] Add to desktop library
- [ ] Add to UI Kit
- [ ] Android ticket
- [ ] iOS ticket
- [ ] Web ticket | non_test | image border we need to include the possibility to show border for image to allow stack images in stackinggroup update specs design documentation add to mobile library add to desktop library add to ui kit android ticket ios ticket web ticket | 0 |
300,887 | 26,001,505,025 | IssuesEvent | 2022-12-20 15:40:05 | ARUP-CAS/aiscr-webamcr | https://api.github.com/repos/ARUP-CAS/aiscr-webamcr | closed | Standalone actions module (Modul Samostatné akce) | enhancement map GUI TESTED | This is the equivalent of project actions (http://192.168.254.30:8080/arch-z/akce/detail/ and related views) with the following differences:
- [x] - A standalone action exists without a link to a project (it can be distinguished by "N" in the "type" field)
- [x] - It has its own "Standalone actions" item on the home screen (green - #247e4b) and a related menu, with the options:
 - Record an action (opens an equivalent of http://192.168.254.30:8080/arch-z/akce/zapsat/)
 - Select actions (the selection also allows finding project actions when the appropriate filter option is chosen; https://docs.google.com/spreadsheets/d/1Ms3WEEfARICKbUr25_vGD7ymmwPJEaMw8WbcSB4UdwM/edit#gid=1110658884)
 - My actions (only a subset of the selection)
 - Archive actions (only a subset of the selection)
- [x] - The detail form additionally contains:
 - a "Deferred NZ" field - if it is "Yes", the action can be submitted without a document of the "find report" type (works similarly to the "Submit ZAA as NZ" field) - _Theoretically this could also be handled by a choice made while submitting the action (the value would then be filled into the DB accordingly) - the form would not have to be modified. The information that the action has a deferred NZ would be added in some other sensible place._
 - the option to change the Date specification + related adjustments to the behavior of the fields used to enter it (see https://github.com/ARUP-CAS/aiscr-webamcr/issues/36)
- [x] - When choosing the type of the first DJ it is possible to enter the type "Cadastral territory" (originally called "Non-localized action" in AMČR - just rename it). If that happens:
 - the PIAN of a cadastre chosen by the user is automatically attached to the DJ (it must be agreed how exactly the user selects the cadastre, given that for actions with a PIAN we will determine it automatically and the form field is inactive)
 - no further DJs can be added
- [x] - Allow the archivist to easily convert a project action into a standalone one and vice versa (by entering the number of the project it should be attached to). The content stays the same; only the identifiers change, and the operation is written into the record's history.
- [x] - It is created with a temporary ID ("X-" prefix); a permanent ID is assigned at the first archiving (it no longer changes on any further state change). The structure of ident_cely follows the scheme:
[region]-[digit 9][six-digit sequence number][letter "A"]; 10 characters in total
Example: C-9000001A
Sequence number 1-999999, to which 9,000,000 is added.
It has a version with a provisional ID: X-C-9123456A
Creation of the series must be handled in a similar way as e.g. for projects (see https://github.com/ARUP-CAS/aiscr-webamcr/wiki/Pridelovani-ident_cely) | 1.0 | Standalone actions module (Modul Samostatné akce) - This is the equivalent of project actions (http://192.168.254.30:8080/arch-z/akce/detail/ and related views) with the following differences:
- [x] - A standalone action exists without a link to a project (it can be distinguished by "N" in the "type" field)
- [x] - It has its own "Standalone actions" item on the home screen (green - #247e4b) and a related menu, with the options:
 - Record an action (opens an equivalent of http://192.168.254.30:8080/arch-z/akce/zapsat/)
 - Select actions (the selection also allows finding project actions when the appropriate filter option is chosen; https://docs.google.com/spreadsheets/d/1Ms3WEEfARICKbUr25_vGD7ymmwPJEaMw8WbcSB4UdwM/edit#gid=1110658884)
 - My actions (only a subset of the selection)
 - Archive actions (only a subset of the selection)
- [x] - The detail form additionally contains:
 - a "Deferred NZ" field - if it is "Yes", the action can be submitted without a document of the "find report" type (works similarly to the "Submit ZAA as NZ" field) - _Theoretically this could also be handled by a choice made while submitting the action (the value would then be filled into the DB accordingly) - the form would not have to be modified. The information that the action has a deferred NZ would be added in some other sensible place._
 - the option to change the Date specification + related adjustments to the behavior of the fields used to enter it (see https://github.com/ARUP-CAS/aiscr-webamcr/issues/36)
- [x] - When choosing the type of the first DJ it is possible to enter the type "Cadastral territory" (originally called "Non-localized action" in AMČR - just rename it). If that happens:
 - the PIAN of a cadastre chosen by the user is automatically attached to the DJ (it must be agreed how exactly the user selects the cadastre, given that for actions with a PIAN we will determine it automatically and the form field is inactive)
 - no further DJs can be added
- [x] - Allow the archivist to easily convert a project action into a standalone one and vice versa (by entering the number of the project it should be attached to). The content stays the same; only the identifiers change, and the operation is written into the record's history.
- [x] - It is created with a temporary ID ("X-" prefix); a permanent ID is assigned at the first archiving (it no longer changes on any further state change). The structure of ident_cely follows the scheme:
[region]-[digit 9][six-digit sequence number][letter "A"]; 10 characters in total
Example: C-9000001A
Sequence number 1-999999, to which 9,000,000 is added.
It has a version with a provisional ID: X-C-9123456A
Creation of the series must be handled in a similar way as e.g. for projects (see https://github.com/ARUP-CAS/aiscr-webamcr/wiki/Pridelovani-ident_cely) | test | standalone actions module modul samostatné akce this is the equivalent of project actions and related views with the following differences a standalone action exists without a link to a project it can be distinguished by n in the type field it has its own standalone actions item on the home screen green and a related menu with the options record an action opens an equivalent of select actions the selection also allows finding project actions when the appropriate filter option is chosen my actions only a subset of the selection archive actions only a subset of the selection the detail form additionally contains a deferred nz field if it is yes the action can be submitted without a document of the find report type works similarly to the submit zaa as nz field theoretically this could also be handled by a choice made while submitting the action the value would then be filled into the db accordingly the form would not have to be modified the information that the action has a deferred nz would be added in some other sensible place the option to change the date specification related adjustments to the behavior of the fields used to enter it see when choosing the type of the first dj it is possible to enter the type cadastral territory originally called non localized action in amčr just rename it if that happens the pian of a cadastre chosen by the user is automatically attached to the dj it must be agreed how exactly the user selects the cadastre given that for actions with a pian we will determine it automatically and the form field is inactive no further djs can be added allow the archivist to easily convert a project action into a standalone one and vice versa by entering the number of the project it should be attached to the content stays the same only the identifiers change and the operation is written into the record s history it is created with a temporary id x prefix a permanent id is assigned at the first archiving it no longer changes on any further state change the structure of ident cely follows the scheme characters in total example c sequence number to which is added it has a version with a provisional id x c creation of the series must be handled in a similar way as e g for projects see | 1
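The record above specifies the ident_cely scheme: [region]-, then the digit 9 plus a six-digit sequence number (1-999999, i.e. 9,000,000 added to the sequence), then the letter "A", with an "X-" prefix for provisional IDs (examples C-9000001A and X-C-9123456A). A minimal sketch of a generator for that scheme — the function name is illustrative, not taken from the AMČR codebase:

```python
def make_ident(region: str, seq: int, provisional: bool = False) -> str:
    """Build an ident_cely per the scheme in the issue:
    [region]-[9 + six-digit sequence number][letter A], optionally X- prefixed.
    """
    if not 1 <= seq <= 999_999:
        raise ValueError("sequence number must be 1-999999")
    # 9,000,000 + sequence yields the seven-digit "9XXXXXX" block.
    ident = f"{region}-{9_000_000 + seq}A"
    return f"X-{ident}" if provisional else ident

print(make_ident("C", 1))                         # C-9000001A
print(make_ident("C", 123456, provisional=True))  # X-C-9123456A
```

Both printed values match the examples given in the record, which is a quick way to sanity-check any real implementation of the series.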
141,488 | 11,422,687,818 | IssuesEvent | 2020-02-03 14:38:39 | inception-project/inception | https://api.github.com/repos/inception-project/inception | closed | Fix LAPPSGrid tests | Critical Module: Recommender Testing ๐Bug | **Describe the bug**
Some Tests fail due to URL timeouts.
**To Reproduce**
See Jenkis Console Log e.g.
```
[ERROR] testNerConformity(LappsGridService{name='edu.cmu.lti.oaqa.lapps.LingpipeNER - 1.1.1-SNAPSHOT', description='Lingpipe Named Entity Recognizer with model "English News: MUC-6"', url='http://vassar.lappsgrid.org/invoker/anc:lingpipe.ner_1.1.1-SNAPSHOT'}) [1](de.tudarmstadt.ukp.inception.recommendation.imls.lapps.LappsGridRecommenderConformityTest) Time elapsed: 31.113 s <<< ERROR!
```
**Expected behavior**
Don't run test with url unreachable. | 1.0 | Fix LAPPSGrid tests - **Describe the bug**
Some Tests fail due to URL timeouts.
**To Reproduce**
See Jenkins Console Log, e.g.
```
[ERROR] testNerConformity(LappsGridService{name='edu.cmu.lti.oaqa.lapps.LingpipeNER - 1.1.1-SNAPSHOT', description='Lingpipe Named Entity Recognizer with model "English News: MUC-6"', url='http://vassar.lappsgrid.org/invoker/anc:lingpipe.ner_1.1.1-SNAPSHOT'}) [1](de.tudarmstadt.ukp.inception.recommendation.imls.lapps.LappsGridRecommenderConformityTest) Time elapsed: 31.113 s <<< ERROR!
```
**Expected behavior**
Don't run test with url unreachable. | test | fix lappsgrid tests describe the bug some tests fail due to url timeouts to reproduce see jenkis console log e g testnerconformity lappsgridservice name edu cmu lti oaqa lapps lingpipener snapshot description lingpipe named entity recognizer with model english news muc url de tudarmstadt ukp inception recommendation imls lapps lappsgridrecommenderconformitytest time elapsed s error expected behavior don t run test with url unreachable | 1 |
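The expected behavior in the record above — skip tests whose service URL is unreachable — is commonly implemented as a reachability pre-check. The sketch below is one hedged way to do it (names are illustrative, not from the INCEpTION codebase): `should_run` takes an injectable probe so the decision logic can be exercised without network access, and `tcp_probe` is what a real suite might pass in.

```python
import socket
from urllib.parse import urlparse

def tcp_probe(url: str, timeout: float = 3.0) -> bool:
    """Best-effort TCP reachability check for the URL's host:port."""
    parts = urlparse(url)
    port = parts.port or (443 if parts.scheme == "https" else 80)
    try:
        with socket.create_connection((parts.hostname, port), timeout=timeout):
            return True
    except OSError:
        return False

def should_run(url: str, probe=tcp_probe) -> bool:
    """Return True only if the service behind `url` answers a TCP connect."""
    return probe(url)

# The decision logic is testable with a stubbed probe, no network needed.
print(should_run("http://vassar.lappsgrid.org", probe=lambda u: False))  # False
```

A JUnit equivalent would use an assumption (e.g. skip the test when the probe fails) instead of letting the remote-call timeout surface as an error.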
136,362 | 30,534,371,171 | IssuesEvent | 2023-07-19 16:13:52 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | opened | Code Table Request - New Formation = Connasauga Formation | Priority-High (Needed for work) Function-CodeTables | ## Initial Request
### Goal
_Describe what you're trying to accomplish. This is the only necessary step to start this process. The Committee is available to assist with all other steps. Please clearly indicate any uncertainty or desired guidance if you proceed beyond this step._
Add a useful formation for incoming NHSM data
### Context
_Describe why this new value is necessary and existing values are not._
It isn't in the code table, but it is a legit formation
### Table
_Code Tables are http://arctos.database.museum/info/ctDocumentation.cfm. Link to the specific table or value. This may involve multiple tables and will control datatype for Attributes. OtherID requests require BaseURL (and example) or explanation. Please ask for assistance if unsure._
https://arctos.database.museum/info/ctDocumentation.cfm?table=ctlithostratigraphic_formation
### Proposed Value
_Proposed new value. This should be clear and compatible with similar values in the relevant table and across Arctos._
Conasauga Formation
### Proposed Definition
_Clear, complete, non-collection-type-specific **functional** definition of the value. Avoid discipline-specific terminology if possible, include parenthetically if unavoidable._
[Macrostrat](https://macrostrat.org/sift/#/strat_name/440)
### Collection type
_Some code tables contain collection-type-specific values. ``collection_cde`` may be found from https://arctos.database.museum/home.cfm_
N/A
### Attribute Extras
#### Attribute data type
_If the request is for an attribute, what values will be allowed?
free-text, categorical, or number+units depending upon the attribute (TBA)_
N/A
#### Attribute controlled values
_If the values are categorical (to be controlled by a code table), add a link to the appropriate code table. If a new table or set of values is needed, please elaborate._
N/A
#### Attribute units
_if numerical values should be accompanied by units, provide a link to the appropriate units table._
N/A
### Priority
_Please describe the urgency and/or choose a priority-label to the right. You should expect a response within two working days, and may utilize [Arctos Contacts](https://arctosdb.org/contacts/) if you feel response is lacking._
### Example Data
_Requests with clarifying sample data are generally much easier to understand and prioritize. Please attach or link to any representative data, in any form or format, which might help clarify the request._
### Available for Public View
_Most data are by default publicly available. Describe any necessary access restrictions._
Yes
### Helpful Actions
- [ ] Add the issue to the [Code Table Management Project](https://github.com/ArctosDB/arctos/projects/13#card-31628184).
- [ ] Please reach out to anyone who might be affected by this change. Leave a comment or add this to the Committee agenda if you believe more focused conversation is necessary.
@ArctosDB/arctos-code-table-administrators
## Approval
_All of the following must be checked before this may proceed._
_The [How-To Document](https://handbook.arctosdb.org/how_to/How-To-Manage-Code-Table-Requests.html) should be followed. Pay particular attention to terminology (with emphasis on consistency) and documentation (with emphasis on functionality). **No person should act in multiple roles**; the submitter cannot also serve as a Code Table Administrator, for example._
- [ ] Code Table Administrator[1] - check and initial, comment, or thumbs-up to indicate that the request complies with the how-to documentation and has your approval
- [ ] Code Table Administrator[2] - check and initial, comment, or thumbs-up to indicate that the request complies with the how-to documentation and has your approval
- [ ] DBA - The request is functionally acceptable. The term is not a functional duplicate, and is compatible with existing data and code.
- [ ] DBA - Appropriate code or handlers are in place as necessary. (ID_References, Media Relationships, Encumbrances, etc. require particular attention)
## Rejection
_If you believe this request should not proceed, explain why here. Suggest any changes that would make the change acceptable, alternate (usually existing) paths to the same goals, etc._
1. _Can a suitable solution be found here? If not, proceed to (2)_
2. _Can a suitable solution be found by Code Table Committee discussion? If not, proceed to (3)_
3. _Take the discussion to a monthly Arctos Working Group meeting for final resolution._
## Implementation
_Once all of the Approval Checklist is appropriately checked and there are no Rejection comments, or in special circumstances by decree of the Arctos Working Group, the change may be made._
- [ ] Review everything one last time. Ensure the How-To has been followed. Ensure all checks have been made by appropriate personnel.
- [ ] Add or revise the code table term/definition as described above. Ensure the URL of this Issue is included in the definition.
_Close this Issue._
_**DO NOT** modify Arctos Authorities in any way before all points in this Issue have been fully addressed; data loss may result._
## Special Exemptions
_In very specific cases and by prior approval of The Committee, the approval process may be skipped, and implementation requirements may be slightly altered. Please note here if you are proceeding under one of these use cases._
1. _Adding an existing term to additional collection types may proceed immediately and without discussion, but doing so may also subject users to future cleanup efforts. If time allows, please review the term and definition as part of this step._
2. _The Committee may grant special access on particular tables to particular users. This should be exercised with great caution only after several smooth test cases, and generally limited to "taxonomy-like" data such as International Commission on Stratigraphy terminology._
| 1.0 | Code Table Request - New Formation = Connasauga Formation - ## Initial Request
### Goal
_Describe what you're trying to accomplish. This is the only necessary step to start this process. The Committee is available to assist with all other steps. Please clearly indicate any uncertainty or desired guidance if you proceed beyond this step._
Add a useful formation for incoming NHSM data
### Context
_Describe why this new value is necessary and existing values are not._
It isn't in the code table, but it is a legit formation
### Table
_Code Tables are http://arctos.database.museum/info/ctDocumentation.cfm. Link to the specific table or value. This may involve multiple tables and will control datatype for Attributes. OtherID requests require BaseURL (and example) or explanation. Please ask for assistance if unsure._
https://arctos.database.museum/info/ctDocumentation.cfm?table=ctlithostratigraphic_formation
### Proposed Value
_Proposed new value. This should be clear and compatible with similar values in the relevant table and across Arctos._
Conasauga Formation
### Proposed Definition
_Clear, complete, non-collection-type-specific **functional** definition of the value. Avoid discipline-specific terminology if possible, include parenthetically if unavoidable._
[Macrostrat](https://macrostrat.org/sift/#/strat_name/440)
### Collection type
_Some code tables contain collection-type-specific values. ``collection_cde`` may be found from https://arctos.database.museum/home.cfm_
N/A
### Attribute Extras
#### Attribute data type
_If the request is for an attribute, what values will be allowed?
free-text, categorical, or number+units depending upon the attribute (TBA)_
N/A
#### Attribute controlled values
_If the values are categorical (to be controlled by a code table), add a link to the appropriate code table. If a new table or set of values is needed, please elaborate._
N/A
#### Attribute units
_if numerical values should be accompanied by units, provide a link to the appropriate units table._
N/A
### Priority
_Please describe the urgency and/or choose a priority-label to the right. You should expect a response within two working days, and may utilize [Arctos Contacts](https://arctosdb.org/contacts/) if you feel response is lacking._
### Example Data
_Requests with clarifying sample data are generally much easier to understand and prioritize. Please attach or link to any representative data, in any form or format, which might help clarify the request._
### Available for Public View
_Most data are by default publicly available. Describe any necessary access restrictions._
Yes
### Helpful Actions
- [ ] Add the issue to the [Code Table Management Project](https://github.com/ArctosDB/arctos/projects/13#card-31628184).
- [ ] Please reach out to anyone who might be affected by this change. Leave a comment or add this to the Committee agenda if you believe more focused conversation is necessary.
@ArctosDB/arctos-code-table-administrators
## Approval
_All of the following must be checked before this may proceed._
_The [How-To Document](https://handbook.arctosdb.org/how_to/How-To-Manage-Code-Table-Requests.html) should be followed. Pay particular attention to terminology (with emphasis on consistency) and documentation (with emphasis on functionality). **No person should act in multiple roles**; the submitter cannot also serve as a Code Table Administrator, for example._
- [ ] Code Table Administrator[1] - check and initial, comment, or thumbs-up to indicate that the request complies with the how-to documentation and has your approval
- [ ] Code Table Administrator[2] - check and initial, comment, or thumbs-up to indicate that the request complies with the how-to documentation and has your approval
- [ ] DBA - The request is functionally acceptable. The term is not a functional duplicate, and is compatible with existing data and code.
- [ ] DBA - Appropriate code or handlers are in place as necessary. (ID_References, Media Relationships, Encumbrances, etc. require particular attention)
## Rejection
_If you believe this request should not proceed, explain why here. Suggest any changes that would make the change acceptable, alternate (usually existing) paths to the same goals, etc._
1. _Can a suitable solution be found here? If not, proceed to (2)_
2. _Can a suitable solution be found by Code Table Committee discussion? If not, proceed to (3)_
3. _Take the discussion to a monthly Arctos Working Group meeting for final resolution._
## Implementation
_Once all of the Approval Checklist is appropriately checked and there are no Rejection comments, or in special circumstances by decree of the Arctos Working Group, the change may be made._
- [ ] Review everything one last time. Ensure the How-To has been followed. Ensure all checks have been made by appropriate personnel.
- [ ] Add or revise the code table term/definition as described above. Ensure the URL of this Issue is included in the definition.
_Close this Issue._
_**DO NOT** modify Arctos Authorities in any way before all points in this Issue have been fully addressed; data loss may result._
## Special Exemptions
_In very specific cases and by prior approval of The Committee, the approval process may be skipped, and implementation requirements may be slightly altered. Please note here if you are proceeding under one of these use cases._
1. _Adding an existing term to additional collection types may proceed immediately and without discussion, but doing so may also subject users to future cleanup efforts. If time allows, please review the term and definition as part of this step._
2. _The Committee may grant special access on particular tables to particular users. This should be exercised with great caution only after several smooth test cases, and generally limited to "taxonomy-like" data such as International Commission on Stratigraphy terminology._
| non_test | code table request new formation connasauga formation initial request goal describe what you re trying to accomplish this is the only necessary step to start this process the committee is available to assist with all other steps please clearly indicate any uncertainty or desired guidance if you proceed beyond this step add a useful formation for incoming nhsm data context describe why this new value is necessary and existing values are not it isn t in the code table but it is a legit formation table code tables are link to the specific table or value this may involve multiple tables and will control datatype for attributes otherid requests require baseurl and example or explanation please ask for assistance if unsure proposed value proposed new value this should be clear and compatible with similar values in the relevant table and across arctos conasauga formation proposed definition clear complete non collection type specific functional definition of the value avoid discipline specific terminology if possible include parenthetically if unavoidable collection type some code tables contain collection type specific values collection cde may be found from n a attribute extras attribute data type if the request is for an attribute what values will be allowed free text categorical or number units depending upon the attribute tba n a attribute controlled values if the values are categorical to be controlled by a code table add a link to the appropriate code table if a new table or set of values is needed please elaborate n a attribute units if numerical values should be accompanied by units provide a link to the appropriate units table n a priority please describe the urgency and or choose a priority label to the right you should expect a response within two working days and may utilize if you feel response is lacking example data requests with clarifying sample data are generally much easier to understand and prioritize please attach or link to any 
representative data in any form or format which might help clarify the request available for public view most data are by default publicly available describe any necessary access restrictions yes helpful actions add the issue to the please reach out to anyone who might be affected by this change leave a comment or add this to the committee agenda if you believe more focused conversation is necessary arctosdb arctos code table administrators approval all of the following must be checked before this may proceed the should be followed pay particular attention to terminology with emphasis on consistency and documentation with emphasis on functionality no person should act in multiple roles the submitter cannot also serve as a code table administrator for example code table administrator check and initial comment or thumbs up to indicate that the request complies with the how to documentation and has your approval code table administrator check and initial comment or thumbs up to indicate that the request complies with the how to documentation and has your approval dba the request is functionally acceptable the term is not a functional duplicate and is compatible with existing data and code dba appropriate code or handlers are in place as necessary id references media relationships encumbrances etc require particular attention rejection if you believe this request should not proceed explain why here suggest any changes that would make the change acceptable alternate usually existing paths to the same goals etc can a suitable solution be found here if not proceed to can a suitable solution be found by code table committee discussion if not proceed to take the discussion to a monthly arctos working group meeting for final resolution implementation once all of the approval checklist is appropriately checked and there are no rejection comments or in special circumstances by decree of the arctos working group the change may be made review everything one last time ensure the 
how to has been followed ensure all checks have been made by appropriate personnel add or revise the code table term definition as described above ensure the url of this issue is included in the definition close this issue do not modify arctos authorities in any way before all points in this issue have been fully addressed data loss may result special exemptions in very specific cases and by prior approval of the committee the approval process may be skipped and implementation requirements may be slightly altered please note here if you are proceeding under one of these use cases adding an existing term to additional collection types may proceed immediately and without discussion but doing so may also subject users to future cleanup efforts if time allows please review the term and definition as part of this step the committee may grant special access on particular tables to particular users this should be exercised with great caution only after several smooth test cases and generally limited to taxonomy like data such as international commission on stratigraphy terminology | 0 |
451,252 | 13,032,135,595 | IssuesEvent | 2020-07-28 03:21:01 | magento/pwa-studio | https://api.github.com/repos/magento/pwa-studio | reopened | [feature]: Unable to use my custom currency symbol | Priority: P3 Severity: S3 enhancement | <!--
Thank you for taking the time to report this issue!
GitHub Issues should only be created for problems/topics related to this project's codebase.
Before submitting this issue, please make sure you are complying with our Code of Conduct:
https://github.com/magento/pwa-studio/blob/develop/.github/CODE_OF_CONDUCT.md
Issues that do not comply with our Code of Conduct or do not contain enough information may be closed at the maintainers' discretion.
Feel free to remove this section before creating this issue.
-->
**Describe the bug**
When I tried to customize my currency symbol on the frontend, it worked normally in default Magento but nothing changed in PWA.
**To reproduce**
Steps to reproduce the behavior:
1. Go to **Magento admin** > **Stores** > **Currency** > **Currency Symbols**
2. Change currency symbol as you want (I tried ø ) then Click to **Save currency symbols** + Clear cache
3. Go to Magento front page and see that applied
4. Refresh + clear browser cache on PWA url but nothing applied.
**Expected behavior**
Show my custom currency symbol on PWA Venia theme.
**Screenshots**
_My setting:_

_Display on Magento Luma:_

_Display on PWA Venia theme:_

**Please complete the following device information:**
- Device: Macbook Pro Mid 2014
- OS : MacOS 10.14.6
- Browser: Chrome, Safari
- Browser Version [e.g. 22]:
- Magento Version: 2.3.4
- PWA Studio Version: 6.0
- NPM version `npm -v`: 6.9.0
- Node Version `node -v`: v10.16.0
<!-- Complete the following sections to help us apply appropriate labels! -->
**Please let us know what packages this bug is in regards to:**
- [ ] `venia-concept`
- [ ] `venia-ui`
- [ ] `pwa-buildpack`
- [ ] `peregrine`
- [ ] `pwa-devdocs`
- [ ] `upward-js`
- [ ] `upward-spec`
- [ ] `create-pwa`
| 1.0 | [feature]: Unable to use my custom currency symbol - <!--
Thank you for taking the time to report this issue!
GitHub Issues should only be created for problems/topics related to this project's codebase.
Before submitting this issue, please make sure you are complying with our Code of Conduct:
https://github.com/magento/pwa-studio/blob/develop/.github/CODE_OF_CONDUCT.md
Issues that do not comply with our Code of Conduct or do not contain enough information may be closed at the maintainers' discretion.
Feel free to remove this section before creating this issue.
-->
**Describe the bug**
When I tried to customize my currency symbol on the frontend, it worked normally in default Magento but nothing changed in PWA.
**To reproduce**
Steps to reproduce the behavior:
1. Go to **Magento admin** > **Stores** > **Currency** > **Currency Symbols**
2. Change currency symbol as you want (I tried ø ) then Click to **Save currency symbols** + Clear cache
3. Go to Magento front page and see that applied
4. Refresh + clear browser cache on PWA url but nothing applied.
**Expected behavior**
Show my custom currency symbol on PWA Venia theme.
**Screenshots**
_My setting:_

_Display on Magento Luma:_

_Display on PWA Venia theme:_

**Please complete the following device information:**
- Device: Macbook Pro Mid 2014
- OS : MacOS 10.14.6
- Browser: Chrome, Safari
- Browser Version [e.g. 22]:
- Magento Version: 2.3.4
- PWA Studio Version: 6.0
- NPM version `npm -v`: 6.9.0
- Node Version `node -v`: v10.16.0
<!-- Complete the following sections to help us apply appropriate labels! -->
**Please let us know what packages this bug is in regards to:**
- [ ] `venia-concept`
- [ ] `venia-ui`
- [ ] `pwa-buildpack`
- [ ] `peregrine`
- [ ] `pwa-devdocs`
- [ ] `upward-js`
- [ ] `upward-spec`
- [ ] `create-pwa`
| non_test | unable to use my custom currency symbol thank you for taking the time to report this issue github issues should only be created for problems topics related to this project s codebase before submitting this issue please make sure you are complying with our code of conduct issues that do not comply with our code of conduct or do not contain enough information may be closed at the maintainers discretion feel free to remove this section before creating this issue describe the bug when i tried to custom my currency symbol on frontend it s working normally on magento default but nothing change on pwa to reproduce steps to reproduce the behavior go to magento admin stores currency currency symbols change currency symbol as you want i tried รธ then click to save currency symbols clear cache go to magento front page and see that applied refresh clear browser cache on pwa url but nothing applied expected behavior show my custom currency symbol on pwa venia theme screenshots my setting display on magento luma display on pwa venia theme please complete the following device information device macbook pro mid os macos browser chrome safari browser version magento version pwa studio version npm version npm v node version node v please let us know what packages this bug is in regards to venia concept venia ui pwa buildpack peregrine pwa devdocs upward js upward spec create pwa | 0 |
264,508 | 20,023,131,997 | IssuesEvent | 2022-02-01 18:17:29 | Sanpele/websitePlaying | https://api.github.com/repos/Sanpele/websitePlaying | opened | Add Explanation / README | documentation | I don't explain at all what I am doing. This should immediately and aggressively be changed to allow someone viewing this page to understand
What Cool stuff is happening.
Funny Joke.
What I learned.
What questions / future paths to explore. | 1.0 | Add Explanation / README - I don't explain at all what I am doing. This should immediately and aggressively be changed to allow someone viewing this page to understand
What Cool stuff is happening.
Funny Joke.
What I learned.
What questions / future paths to explore. | non_test | add explanation readme i don t explain at all what i am doing this should immediately and aggressively be changed to allow someone viewing this page to understand what cool stuff is happening funny joke what i learned what questions future paths to explore | 0 |
374,570 | 11,092,476,432 | IssuesEvent | 2019-12-15 19:16:44 | robotframework/robotframework | https://api.github.com/repos/robotframework/robotframework | closed | Document (and test) that glob pattern wildcards like `*` can be escaped like `[*]` | beta 1 enhancement priority: low | Adding the `[...]` syntax to glob patterns in RF 3.1 (#2471) has caused some confusion like the recent #3399 and mkorpela/pabot#237. That syntax is useful and won't be removed, but it would be good to document how to match literal `[` and `]`, as well as the old wildcards `*` and `?`, in patterns like `[[]` and `[*]`. Should also add tests for this behavior to make sure it won't be accidentally removed in the future. | 1.0 | Document (and test) that glob pattern wildcards like `*` can be escaped like `[*]` - Adding the `[...]` syntax to glob patterns in RF 3.1 (#2471) has caused some confusion like the recent #3399 and mkorpela/pabot#237. That syntax is useful and won't be removed, but it would be good to document how to match literal `[` and `]`, as well as the old wildcards `*` and `?`, in patterns like `[[]` and `[*]`. Should also add tests for this behavior to make sure it won't be accidentally removed in the future. | non_test | document and test that glob pattern wildcards like can be escaped like adding the syntax to glob patterns in rf has caused some confusion like the recent and mkorpela pabot that syntax is useful and won t be removed but it would be good to document how to match literal as well as the old wildcards and in patterns like and should also add tests for this behavior to make sure it won t be accidentally removed in the future | 0 |
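The escaping rule described in the robotframework record above can be demonstrated with Python's standard `fnmatch` module, which implements the same `[...]` bracket syntax; this is a generic glob-matching sketch, not Robot Framework's own matcher:

```python
from fnmatch import fnmatchcase

# `*` and `?` are wildcards, but wrapping a wildcard in brackets
# makes it match literally: `[*]` matches only a real `*`.
assert fnmatchcase("a*b", "a[*]b")      # literal asterisk matches
assert not fnmatchcase("axb", "a[*]b")  # `[*]` no longer acts as a wildcard
assert fnmatchcase("axb", "a?b")        # a plain `?` still matches any char

# A literal `[` is matched the same way: `[[]` matches `[`.
assert fnmatchcase("x[1]", "x[[]1]")
```

Since the `[...]` class syntax is exactly what RF 3.1 added, patterns that previously treated `[` as a literal character may need the `[[]` form after upgrading.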
36,284 | 5,041,708,904 | IssuesEvent | 2016-12-19 11:18:20 | nkp-osf/VDK | https://api.github.com/repos/nkp-osf/VDK | closed | Specific copy in the offer | K testovani | It would be helpful if the specific copy on offer were highlighted - especially for multi-volume titles, where it may not be obvious which volume is being offered

| 1.0 | Specific copy in the offer - It would be helpful if the specific copy on offer were highlighted - especially for multi-volume titles, where it may not be obvious which volume is being offered

| test | specific copy in the offer it would be helpful if the specific copy on offer were highlighted especially for multi volume titles where it may not be obvious which volume is being offered | 1
384,259 | 11,386,331,519 | IssuesEvent | 2020-01-29 13:04:05 | tensfeldt/openNCA | https://api.github.com/repos/tensfeldt/openNCA | closed | 2020-01-13 tc121 M3SD computation engine crashes with incorrect error that for mrt_ivif_p DOF is non-numeric | M2 E2E Blocker priority | With commit dce4ed8
tc121 has been loaded to onedrive
```r
> results_list <- run_computation(data=d, map=mct, flag=flags, parameterset=parameterset)
Error in value[[3L]](cond) :
Error in mrt_ivif_p(conc = tmp_df[, map_data$CONC], time = tmp_df[, map_data$TIME], : Error in mrt_ivif_p: dof is not a numeric value
For SDEID 49420991
In addition: Warning messages:
1: In validate_timeconc_data(map, data, flag, verbose = verbose) :
No tau information provided in 'map'.
2: In validate_timeconc_data(map, data, flag, verbose = verbose) :
No told information provided in 'map'.
3: In run_computation(data = d, map = mct, flag = flags, parameterset = parameterset) :
'RETURNCOLS' values provided via 'map' are not used for this computation
4: In run_M3_SD_computation(data = merged_data, map = map_data, method = method, :
Show Traceback
Rerun with Debug
Error in value[[3L]](cond) :
Error in mrt_ivif_p(conc = tmp_df[, map_data$CONC], time = tmp_df[, map_data$TIME], : Error in mrt_ivif_p: dof is not a numeric value
For SDEID 49420991
> class(d$DOF)
[1] "numeric"
``` | 1.0 | 2020-01-13 tc121 M3SD computation engine crashes with incorrect error that for mrt_ivif_p DOF is non-numeric - With commit dce4ed8
tc121 has been loaded to onedrive
```r
> results_list <- run_computation(data=d, map=mct, flag=flags, parameterset=parameterset)
Error in value[[3L]](cond) :
Error in mrt_ivif_p(conc = tmp_df[, map_data$CONC], time = tmp_df[, map_data$TIME], : Error in mrt_ivif_p: dof is not a numeric value
For SDEID 49420991
In addition: Warning messages:
1: In validate_timeconc_data(map, data, flag, verbose = verbose) :
No tau information provided in 'map'.
2: In validate_timeconc_data(map, data, flag, verbose = verbose) :
No told information provided in 'map'.
3: In run_computation(data = d, map = mct, flag = flags, parameterset = parameterset) :
'RETURNCOLS' values provided via 'map' are not used for this computation
4: In run_M3_SD_computation(data = merged_data, map = map_data, method = method, :
Show Traceback
Rerun with Debug
Error in value[[3L]](cond) :
Error in mrt_ivif_p(conc = tmp_df[, map_data$CONC], time = tmp_df[, map_data$TIME], : Error in mrt_ivif_p: dof is not a numeric value
For SDEID 49420991
> class(d$DOF)
[1] "numeric"
``` | non_test | computation engine crashes with incorrect error that for mrt ivif p dof is non numeric with commit has been loaded to onedrive r results list run computation data d map mct flag flags parameterset parameterset error in value cond error in mrt ivif p conc tmp df time tmp df error in mrt ivif p dof is not a numeric value for sdeid in addition warning messages in validate timeconc data map data flag verbose verbose no tau information provided in map in validate timeconc data map data flag verbose verbose no told information provided in map in run computation data d map mct flag flags parameterset parameterset returncols values provided via map are not used for this computation in run sd computation data merged data map map data method method show traceback rerun with debug error in value cond error in mrt ivif p conc tmp df time tmp df error in mrt ivif p dof is not a numeric value for sdeid class d dof numeric | 0 |
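One way the contradiction in the openNCA record above can arise - `class(d$DOF)` reports numeric, yet one subject's `dof` fails a numeric check - is a missing value inside an otherwise numeric column. A Python analogue with hypothetical data (not the package's actual code):

```python
import math

# Hypothetical per-subject DOF values; one entry is missing (NaN).
dof_column = [4.0, float("nan"), 7.0]

# A column-level type check passes: every element is a float...
assert all(isinstance(v, float) for v in dof_column)

# ...but a per-subject validity check still fails for one row, so a
# column-level "numeric" report and a row-level "not a numeric value"
# error can both be true at the same time.
valid = [not math.isnan(v) for v in dof_column]
assert valid == [True, False, True]
```

If something like this is the cause, inspecting the single reported SDEID's row for an NA/NaN `DOF` value would be the first thing to verify.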
32,346 | 4,763,010,133 | IssuesEvent | 2016-10-25 13:21:00 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Investigate flaky test-dgram-send-callback-buffer on FreeBSD | dgram freebsd test | https://ci.nodejs.org/job/node-test-commit-freebsd/3511/nodes=freebsd10-64/console
```
not ok 223 parallel/test-dgram-send-callback-buffer
# /usr/home/iojs/build/workspace/node-test-commit-freebsd/nodes/freebsd10-64/test/parallel/test-dgram-send-callback-buffer.js:19
# throw new Error('Timeout');
# ^
#
# Error: Timeout
# at Timeout._onTimeout (/usr/home/iojs/build/workspace/node-test-commit-freebsd/nodes/freebsd10-64/test/parallel/test-dgram-send-callback-buffer.js:19:9)
# at tryOnTimeout (timers.js:232:11)
# at Timer.listOnTimeout (timers.js:202:5)
---
duration_ms: 0.409
``` | 1.0 | Investigate flaky test-dgram-send-callback-buffer on FreeBSD - https://ci.nodejs.org/job/node-test-commit-freebsd/3511/nodes=freebsd10-64/console
```
not ok 223 parallel/test-dgram-send-callback-buffer
# /usr/home/iojs/build/workspace/node-test-commit-freebsd/nodes/freebsd10-64/test/parallel/test-dgram-send-callback-buffer.js:19
# throw new Error('Timeout');
# ^
#
# Error: Timeout
# at Timeout._onTimeout (/usr/home/iojs/build/workspace/node-test-commit-freebsd/nodes/freebsd10-64/test/parallel/test-dgram-send-callback-buffer.js:19:9)
# at tryOnTimeout (timers.js:232:11)
# at Timer.listOnTimeout (timers.js:202:5)
---
duration_ms: 0.409
``` | test | investigate flaky test dgram send callback buffer on freebsd not ok parallel test dgram send callback buffer usr home iojs build workspace node test commit freebsd nodes test parallel test dgram send callback buffer js throw new error timeout error timeout at timeout ontimeout usr home iojs build workspace node test commit freebsd nodes test parallel test dgram send callback buffer js at tryontimeout timers js at timer listontimeout timers js duration ms | 1 |
341,063 | 10,282,367,887 | IssuesEvent | 2019-08-26 10:56:37 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | m.youtube.com - site is not usable | browser-firefox-mobile engine-gecko priority-critical | <!-- @browser: Firefox Mobile (Car) 999293.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Tablet; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://m.youtube.com/
**Browser / Version**: Firefox Mobile (Car) 999293.0
**Operating System**: Android 202929393.02099
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: ฯฯฯฯฯฯฯ
**Steps to Reproduce**:
ยถโ~`
[](https://webcompat.com/uploads/2019/8/5f00d067-088f-47c5-8535-704fa1d1d1e2.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190819150103</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | m.youtube.com - site is not usable - <!-- @browser: Firefox Mobile (Car) 999293.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Tablet; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://m.youtube.com/
**Browser / Version**: Firefox Mobile (Car) 999293.0
**Operating System**: Android 202929393.02099
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: ฯฯฯฯฯฯฯ
**Steps to Reproduce**:
ยถโ~`
[](https://webcompat.com/uploads/2019/8/5f00d067-088f-47c5-8535-704fa1d1d1e2.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190819150103</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_test | m youtube com site is not usable url browser version firefox mobile car operating system android tested another browser yes problem type site is not usable description ฯฯฯฯฯฯฯ steps to reproduce ยถโ browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel beta from with ❤️ | 0
33,181 | 15,817,758,966 | IssuesEvent | 2021-04-05 15:03:15 | ropensci/targets | https://api.github.com/repos/ropensci/targets | closed | Shut down superfluous persistent workers earlier | topic: performance | ## Prework
* [x] Read and agree to the [code of conduct](https://ropensci.org/code-of-conduct/) and [contributing guidelines](https://github.com/ropensci/targets/blob/main/CONTRIBUTING.md).
* [x] If there is [already a relevant issue](https://github.com/ropensci/targets/issues), whether open or closed, comment on the existing thread instead of posting a new issue.
* [x] Post a [minimal reproducible example](https://www.tidyverse.org/help/) like [this one](https://github.com/ropensci/targets/issues/256#issuecomment-754229683) so the maintainer can troubleshoot the problems you identify. A reproducible example is:
* [x] **Runnable**: post enough R code and data so any onlooker can create the error on their own computer.
* [x] **Minimal**: reduce runtime wherever possible and remove complicated details that are irrelevant to the issue at hand.
* [x] **Readable**: format your code according to the [tidyverse style guide](https://style.tidyverse.org/).
## Description
`targets` is supposed to shut down idle workers when there is no more work to be assigned to them. Currently, local targets (`deployment = "main"`) and dynamic branching cleanup steps count as work to be assigned, and this is preventing the shutdown of those idle workers.
## Reproducible example
* [x] Post a [minimal reproducible example](https://www.tidyverse.org/help/) so the maintainer can troubleshoot the problems you identify. A reproducible example is:
* [x] **Runnable**: post enough R code and data so any onlooker can create the error on their own computer.
* [x] **Minimal**: reduce runtime wherever possible and remove complicated details that are irrelevant to the issue at hand.
* [x] **Readable**: format your code according to the [tidyverse style guide](https://style.tidyverse.org/).
### Dynamic branching
https://github.com/ropensci/targets/blob/c77355cd10ce7b861233a0da87fd423c847bc4f5/tests/performance/test-parallel.R#L37-L47
### Local targets
https://github.com/ropensci/targets/blob/c77355cd10ce7b861233a0da87fd423c847bc4f5/tests/performance/test-parallel.R#L53-L63
## Benchmarks
Currently, all workers stay running until the very end of the pipeline. Run in a basic terminal, monitor with `htop -d 1`, and filter on `R.home()`.
| True | Shut down superfluous persistent workers earlier - ## Prework
* [x] Read and agree to the [code of conduct](https://ropensci.org/code-of-conduct/) and [contributing guidelines](https://github.com/ropensci/targets/blob/main/CONTRIBUTING.md).
* [x] If there is [already a relevant issue](https://github.com/ropensci/targets/issues), whether open or closed, comment on the existing thread instead of posting a new issue.
* [x] Post a [minimal reproducible example](https://www.tidyverse.org/help/) like [this one](https://github.com/ropensci/targets/issues/256#issuecomment-754229683) so the maintainer can troubleshoot the problems you identify. A reproducible example is:
* [x] **Runnable**: post enough R code and data so any onlooker can create the error on their own computer.
* [x] **Minimal**: reduce runtime wherever possible and remove complicated details that are irrelevant to the issue at hand.
* [x] **Readable**: format your code according to the [tidyverse style guide](https://style.tidyverse.org/).
## Description
`targets` is supposed to shut down idle workers when there is no more work to be assigned to them. Currently, local targets (`deployment = "main"`) and dynamic branching cleanup steps count as work to be assigned, and this is preventing the shutdown of those idle workers.
## Reproducible example
* [x] Post a [minimal reproducible example](https://www.tidyverse.org/help/) so the maintainer can troubleshoot the problems you identify. A reproducible example is:
* [x] **Runnable**: post enough R code and data so any onlooker can create the error on their own computer.
* [x] **Minimal**: reduce runtime wherever possible and remove complicated details that are irrelevant to the issue at hand.
* [x] **Readable**: format your code according to the [tidyverse style guide](https://style.tidyverse.org/).
### Dynamic branching
https://github.com/ropensci/targets/blob/c77355cd10ce7b861233a0da87fd423c847bc4f5/tests/performance/test-parallel.R#L37-L47
### Local targets
https://github.com/ropensci/targets/blob/c77355cd10ce7b861233a0da87fd423c847bc4f5/tests/performance/test-parallel.R#L53-L63
## Benchmarks
Currently, all workers stay running until the very end of the pipeline. Run in a basic terminal, monitor with `htop -d 1`, and filter on `R.home()`.
| non_test | shut down superfluous persistent workers earlier prework read and agree to the and if there is whether open or closed comment on the existing thread instead of posting a new issue post a like so the maintainer can troubleshoot the problems you identify a reproducible example is runnable post enough r code and data so any onlooker can create the error on their own computer minimal reduce runtime wherever possible and remove complicated details that are irrelevant to the issue at hand readable format your code according to the description targets is supposed to shut down idle workers when there is no more work to be assigned to them currently local targets deployment main and dynamic branching cleanup steps count as work to be assigned and this is preventing the shutdown of those idle workers reproducible example post a so the maintainer can troubleshoot the problems you identify a reproducible example is runnable post enough r code and data so any onlooker can create the error on their own computer minimal reduce runtime wherever possible and remove complicated details that are irrelevant to the issue at hand readable format your code according to the dynamic branching local targets benchmarks currently all workers stay running until the very end of the pipeline run in a basic terminal monitor with htop d and filter on r home | 0 |
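The fix requested in the ropensci/targets record above - stop counting `deployment = "main"` targets as assignable work so idle persistent workers can shut down sooner - can be sketched as a toy scheduler in Python (a hypothetical queue model, not the actual `targets` internals):

```python
from collections import deque

def drain(targets, n_workers):
    """Run queued (name, deployment) targets, shutting persistent
    workers down as soon as no worker-deployed targets remain."""
    workers = n_workers
    log = []
    queue = deque(targets)
    while queue:
        name, deployment = queue.popleft()
        log.append(f"run {name} ({deployment})")
        # Only deployment == "worker" targets count as assignable work;
        # locally-run ("main") targets must not keep idle workers alive.
        remaining = sum(1 for _, d in queue if d == "worker")
        while workers > remaining:
            workers -= 1
            log.append("shutdown worker")
    return log

log = drain([("a", "worker"), ("b", "worker"), ("c", "main")], n_workers=2)
# Both workers are released before the final local target runs.
```

Under the behaviour the issue complains about, the shutdowns would only happen after `c` finishes; releasing each worker as soon as the remaining worker-deployed work drops below the worker count is the improvement being asked for.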
51,960 | 21,921,861,562 | IssuesEvent | 2022-05-22 17:44:12 | hashicorp/terraform-provider-aws | https://api.github.com/repos/hashicorp/terraform-provider-aws | closed | aws_lb_ssl_negotiation_policy should allow list of ELBs to apply to | enhancement service/elbv2 stale | _This issue was originally opened by @mcraig88 as hashicorp/terraform#13989. It was migrated here as part of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._
<hr>
### Terraform Version
v0.9.2
### Affected Resource(s)
Please list the resources as a list, for example:
aws_lb_ssl_negotiation_policy
### Current Behavior
Currently you can only assign aws_lb_ssl_negotiation_policy to a single ELB.
### Desired Behavior
To reduce code bloat, it would be more efficient if this resource would accept a list of ELB's to assign the negotiation policy to.
| 1.0 | aws_lb_ssl_negotiation_policy should allow list of ELBs to apply to - _This issue was originally opened by @mcraig88 as hashicorp/terraform#13989. It was migrated here as part of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._
<hr>
### Terraform Version
v0.9.2
### Affected Resource(s)
Please list the resources as a list, for example:
aws_lb_ssl_negotiation_policy
### Current Behavior
Currently you can only assign aws_lb_ssl_negotiation_policy to a single ELB.
### Desired Behavior
To reduce code bloat, it would be more efficient if this resource would accept a list of ELB's to assign the negotiation policy to.
| non_test | aws lb ssl negotiation policy should allow list of elbs to apply to this issue was originally opened by as hashicorp terraform it was migrated here as part of the the original body of the issue is below terraform version affected resource s please list the resources as a list for example aws lb ssl negotiation policy current behavior currently you can only assign aws lb ssl negotiation policy to a single elb desired behavior to reduce code bloat it would be more efficient if this resource would accept a list of elb s to assign the negotiation policy to | 0 |
13,787 | 3,357,490,824 | IssuesEvent | 2015-11-19 01:52:18 | arecker/bennedetto | https://api.github.com/repos/arecker/bennedetto | opened | Travis Builds | testing up for grabs | As a maintainer or contributor, it would be sweet if I could run all the unit tests for the project with `make test`.
It should exercise
- Python tests (none right now)
- JavaScript tests
- JavaScript linting and build
- Requirements sanity | 1.0 | Travis Builds - As a maintainer or contributor, it would be sweet if I could run all the unit tests for the project with `make test`.
It should exercise
- Python tests (none right now)
- JavaScript tests
- JavaScript linting and build
- Requirements sanity | test | travis builds as a maintainer or contributor it would be sweet if i could run all the unit tests for the project with make test it should exercise python tests none right now javascript tests javascript linting and build requirements sanity | 1 |
71,439 | 15,195,889,768 | IssuesEvent | 2021-02-16 07:16:34 | valdisiljuconoks/EPiBootstrapArea | https://api.github.com/repos/valdisiljuconoks/EPiBootstrapArea | closed | CVE-2019-10744 (High) detected in lodash-2.4.2.tgz | security vulnerability | ## CVE-2019-10744 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-ua_20200831060755_XFTETR/archiveExtraction_OYEGUZ/20200831060755/ws-scm_depth_0/EPiBootstrapArea/src/EPiBootstrapArea.Forms/modules/_protected/Shell/Shell/11.1.0.0/ClientResources/lib/xstyle/package.json</p>
<p>Path to vulnerable library: /tmp/ws-ua_20200831060755_XFTETR/archiveExtraction_OYEGUZ/20200831060755/ws-scm_depth_0/EPiBootstrapArea/src/EPiBootstrapArea.Forms/modules/_protected/Shell/Shell/11.1.0.0/ClientResources/lib/xstyle/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- intern-geezer-2.2.3.tgz (Root Library)
- digdug-1.4.0.tgz
- decompress-0.2.3.tgz
- map-key-0.1.5.tgz
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/valdisiljuconoks/EPiBootstrapArea/commit/60f0eda7499358b5aca509c37cea35481efdf991">60f0eda7499358b5aca509c37cea35481efdf991</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
<p>Publish Date: 2019-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744>CVE-2019-10744</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/pull/4336/commits/a01e4fa727e7294cb7b2845570ba96b206926790">https://github.com/lodash/lodash/pull/4336/commits/a01e4fa727e7294cb7b2845570ba96b206926790</a></p>
<p>Release Date: 2019-07-08</p>
<p>Fix Resolution: 4.17.12</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-10744 (High) detected in lodash-2.4.2.tgz - ## CVE-2019-10744 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-ua_20200831060755_XFTETR/archiveExtraction_OYEGUZ/20200831060755/ws-scm_depth_0/EPiBootstrapArea/src/EPiBootstrapArea.Forms/modules/_protected/Shell/Shell/11.1.0.0/ClientResources/lib/xstyle/package.json</p>
<p>Path to vulnerable library: /tmp/ws-ua_20200831060755_XFTETR/archiveExtraction_OYEGUZ/20200831060755/ws-scm_depth_0/EPiBootstrapArea/src/EPiBootstrapArea.Forms/modules/_protected/Shell/Shell/11.1.0.0/ClientResources/lib/xstyle/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- intern-geezer-2.2.3.tgz (Root Library)
- digdug-1.4.0.tgz
- decompress-0.2.3.tgz
- map-key-0.1.5.tgz
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/valdisiljuconoks/EPiBootstrapArea/commit/60f0eda7499358b5aca509c37cea35481efdf991">60f0eda7499358b5aca509c37cea35481efdf991</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
<p>Publish Date: 2019-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744>CVE-2019-10744</a></p>
</p>
</details>
<p></p>
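The "constructor payload" described above can be reproduced with a small self-contained sketch. The merge function below is a deliberately naive stand-in — it is not lodash's actual `defaultsDeep` implementation — but it shows how a deep merge that recurses through `constructor.prototype` without guarding special keys ends up writing to `Object.prototype`:

```javascript
// Naive deep-defaults merge (illustration only — NOT lodash's code).
// It recurses into every object-valued key without guarding special
// properties such as "constructor" or "__proto__".
function naiveDefaultsDeep(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value && typeof value === 'object') {
      if (!target[key]) target[key] = {};
      naiveDefaultsDeep(target[key], value);
    } else if (target[key] === undefined) {
      target[key] = value;
    }
  }
  return target;
}

// A "constructor payload" as described in the advisory. Merging it walks
// target.constructor (the Object function) and then its .prototype,
// so the final write lands on Object.prototype.
const payload = JSON.parse('{"constructor": {"prototype": {"polluted": true}}}');
naiveDefaultsDeep({}, payload);

const leaked = {}.polluted;       // true: every plain object now "has" the property
delete Object.prototype.polluted; // clean up the pollution
```

After the merge, unrelated objects appear to carry the attacker-supplied property, so any later check such as `if (obj.polluted)` takes the wrong branch — which is why the advisory scores integrity impact as high.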
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/pull/4336/commits/a01e4fa727e7294cb7b2845570ba96b206926790">https://github.com/lodash/lodash/pull/4336/commits/a01e4fa727e7294cb7b2845570ba96b206926790</a></p>
<p>Release Date: 2019-07-08</p>
<p>Fix Resolution: 4.17.12</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
229,106 | 18,282,864,196 | IssuesEvent | 2021-10-05 06:54:15 | rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | ICE in StableHasher | E-easy I-ICE E-needs-test T-compiler C-bug | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
The code in question is from [this change](https://fuchsia-review.googlesource.com/c/fuchsia/+/583209) in Fuchsia.
In order to reproduce, first [download Fuchsia](https://fuchsia.dev/fuchsia-src/get-started/get_fuchsia_source) and then [build](https://fuchsia.dev/fuchsia-src/get-started/build_fuchsia).
Once that's done, perform the following steps from inside the Fuchsia repository:
```
# Download a recent known-good version
$ git fetch https://fuchsia.googlesource.com/fuchsia refs/changes/03/583203/4 && git checkout FETCH_HEAD
# Use Netstack3 (if you're not on x86_64, substitute your target for `x64`)
$ fx set core.x64 --with //src/connectivity/network/netstack3:bin --cargo-toml-gen
# Do a full build as a prerequisite for the next step
$ fx build && fx build build/rust:cargo_toml_gen
# Generate a `Cargo.toml` file for Netstack3
$ fx gen-cargo //src/connectivity/network/netstack3:bin
# Check out the problematic code
$ git fetch https://fuchsia.googlesource.com/fuchsia refs/changes/09/583209/1 && git checkout FETCH_HEAD
# Try to build Netstack3
$ cd src/connectivity/network/netstack3 && cargo check --target x86_64-fuchsia
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
Version from error output:
```
rustc 1.56.0-nightly (1f94abcda 2021-08-06) running on x86_64-unknown-linux-gnu
```
### Error output
```
error: internal compiler error: compiler/rustc_middle/src/ich/impls_ty.rs:94:17: StableHasher: unexpected region RePlaceholder(Placeholder { universe: U4, name: BrNamed(DefId(0:199 ~ netstack3[e4ac]::bindings::ethernet_worker::EthernetWorker::'a), 'a) })
thread 'rustc' panicked at 'Box<dyn Any>', compiler/rustc_errors/src/lib.rs:1046:9
stack backtrace:
0: std::panicking::begin_panic
1: std::panic::panic_any
2: rustc_errors::HandlerInner::bug
3: rustc_errors::Handler::bug
4: rustc_middle::util::bug::opt_span_bug_fmt::{{closure}}
5: rustc_middle::ty::context::tls::with_opt::{{closure}}
6: rustc_middle::ty::context::tls::with_opt
7: rustc_middle::util::bug::opt_span_bug_fmt
8: rustc_middle::util::bug::bug_fmt
9: rustc_middle::ich::impls_ty::<impl rustc_data_structures::stable_hasher::HashStable<rustc_middle::ich::hcx::StableHashingContext> for rustc_middle::ty::sty::RegionKind>::hash_stable
10: <rustc_middle::ty::TyS as rustc_data_structures::stable_hasher::HashStable<rustc_middle::ich::hcx::StableHashingContext>>::hash_stable
11: std::thread::local::LocalKey<T>::with
12: <rustc_middle::ty::TyS as rustc_data_structures::stable_hasher::HashStable<rustc_middle::ich::hcx::StableHashingContext>>::hash_stable
13: std::thread::local::LocalKey<T>::with
14: rustc_middle::ich::impls_ty::<impl rustc_data_structures::stable_hasher::HashStable<rustc_middle::ich::hcx::StableHashingContext> for rustc_middle::ty::sty::Binder<T>>::hash_stable
15: <T as rustc_query_system::dep_graph::dep_node::DepNodeParams<Ctxt>>::to_fingerprint
16: rustc_query_system::query::plumbing::get_query_impl
17: <rustc_query_impl::Queries as rustc_middle::ty::query::QueryEngine>::diagnostic_hir_wf_check
18: <rustc_infer::infer::InferCtxt as rustc_trait_selection::traits::error_reporting::InferCtxtExt>::report_selection_error
19: <rustc_infer::infer::InferCtxt as rustc_trait_selection::traits::error_reporting::InferCtxtPrivExt>::report_fulfillment_error
20: <rustc_infer::infer::InferCtxt as rustc_trait_selection::traits::error_reporting::InferCtxtExt>::report_fulfillment_errors
21: rustc_typeck::check::fn_ctxt::_impl::<impl rustc_typeck::check::fn_ctxt::FnCtxt>::select_all_obligations_or_error
22: rustc_infer::infer::InferCtxtBuilder::enter
23: rustc_typeck::check::wfcheck::check_item_well_formed
24: rustc_query_system::dep_graph::graph::DepGraph<K>::with_task_impl
25: rustc_data_structures::stack::ensure_sufficient_stack
26: rustc_query_system::query::plumbing::force_query_with_job
27: rustc_query_system::query::plumbing::get_query_impl
28: <rustc_query_impl::Queries as rustc_middle::ty::query::QueryEngine>::check_item_well_formed
29: <rustc_typeck::check::wfcheck::CheckTypeWellFormedVisitor as rustc_hir::intravisit::Visitor>::visit_item
30: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
31: rustc_data_structures::sync::par_for_each_in
32: rustc_session::session::Session::track_errors
33: rustc_typeck::check_crate
34: rustc_interface::passes::analysis
35: rustc_query_system::dep_graph::graph::DepGraph<K>::with_task_impl
36: rustc_data_structures::stack::ensure_sufficient_stack
37: rustc_query_system::query::plumbing::force_query_with_job
38: rustc_query_system::query::plumbing::get_query_impl
39: <rustc_query_impl::Queries as rustc_middle::ty::query::QueryEngine>::analysis
40: rustc_interface::queries::<impl rustc_interface::interface::Compiler>::enter
41: rustc_span::with_source_map
42: rustc_interface::interface::create_compiler_and_run
43: scoped_tls::ScopedKey<T>::set
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.56.0-nightly (1f94abcda 2021-08-06) running on x86_64-unknown-linux-gnu
note: compiler flags: -Z panic_abort_tests -C embed-bitcode=no -C debuginfo=2 -C incremental -C panic=abort -C link-arg=--pack-dyn-relocs=relr -C link-args=-zstack-size=0x200000 -C link-arg=-L/usr/local/google/home/joshlf/workspace/fuchsia/out/default/gen/zircon/public/sysroot/cpp/lib -C link-arg=-L/usr/local/google/home/joshlf/workspace/fuchsia/out/default/x64-shared/gen/zircon/public/lib/fdio -C link-arg=-L/usr/local/google/home/joshlf/workspace/fuchsia/out/default/x64-shared/gen/zircon/public/lib/syslog -C link-arg=-L/usr/local/google/home/joshlf/workspace/fuchsia/out/default/x64-shared/gen/zircon/public/lib/trace-engine -C link-arg=-L/usr/local/google/home/joshlf/workspace/fuchsia/prebuilt/third_party/clang/linux-x64/lib/clang/14.0.0/x86_64-fuchsia/lib -C link-arg=-L/usr/local/google/home/joshlf/workspace/fuchsia/out/default -C link-arg=-L/usr/local/google/home/joshlf/workspace/fuchsia/out/default/user.vdso_x64 -C link-arg=--sysroot=/usr/local/google/home/joshlf/workspace/fuchsia/out/default/gen/zircon/public/sysroot/cpp -C link-arg=/usr/local/google/home/joshlf/workspace/fuchsia/out/default/user.libc_x64/obj/zircon/system/ulib/c/crt1.Scrt1.cc.o --crate-type bin
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [check_item_well_formed] checking that `bindings::ethernet_worker::<impl at src/bindings/ethernet_worker.rs:47:1: 112:2>` is well-formed
#1 [analysis] running analysis passes on this crate
end of query stack
```
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"hameerabbasi"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END -->
118,965 | 25,415,191,327 | IssuesEvent | 2022-11-22 23:03:47 | sergiomesasyelamos2000/CC-Proyecto-22-23 | https://api.github.com/repos/sergiomesasyelamos2000/CC-Proyecto-22-23 | closed | CRUD service creation | task code | Creation of a CRUD service, used for both users and books. Creator methods: finadAll, findByPropertie, delete, update
261,680 | 27,810,344,784 | IssuesEvent | 2023-03-18 03:16:40 | KOSASIH/SilkRoad | https://api.github.com/repos/KOSASIH/SilkRoad | closed | tinymce-4.3.3.min.js: 10 vulnerabilities (highest severity is: 7.6) | Mend: dependency security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p></summary>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (tinymce version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [WS-2020-0008](https://github.com/tinymce/tinymce-dist/commit/7fcdd149d2e2f6013c78a965b8fab2bbe011de4f) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.6 | tinymce-4.3.3.min.js | Direct | 4.9.7,5.1.4 | ❌ |
| [WS-2021-0001](https://github.com/tinymce/tinymce/security/advisories/GHSA-h96f-fc7c-9r55) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tinymce-4.3.3.min.js | Direct | tinymce - 5.6.0 | ❌ |
| [WS-2021-0406](https://github.com/advisories/GHSA-5h9g-x5rv-25wg) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | TinyMCE - 5.9.0, tinymce - 5.9.0, tinymce/tinymce - 5.9.0 | ❌ |
| [CVE-2019-1010091](https://www.mend.io/vulnerability-database/CVE-2019-1010091) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | tinymce - 4.9.10, 5.2.2 | ❌ |
| [CVE-2020-12648](https://www.mend.io/vulnerability-database/CVE-2020-12648) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | 4.9.11,5.4.1 | ❌ |
| [WS-2021-0133](https://github.com/tinymce/tinymce/commit/09bfb1dcb176611d22a477666d8cea72cd14c3fe) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | tinymce - 5.7.1 | ❌ |
| [WS-2020-0142](https://github.com/advisories/GHSA-vrv8-v4w8-f95h) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | tinymce - 5.4.1, 4.9.11 | ❌ |
| [WS-2021-0025](https://github.com/advisories/GHSA-w7jx-j77m-wp65) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | tinymce - 5.6.0 | ❌ |
| [WS-2021-0413](https://github.com/advisories/GHSA-r8hm-w5f7-wj39) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | TinyMCE - 5.10.0, tinymce/tinymce - 5.10.0, TinyMCE - 5.10.0 | ❌ |
| [CVE-2020-17480](https://www.mend.io/vulnerability-database/CVE-2020-17480) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | 4.9.7, 5.1.4 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> WS-2020-0008</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A cross-site scripting (XSS) vulnerability was discovered in the core parser, "paste" and "visualchars" plugins.
<p>Publish Date: 2019-12-11
<p>URL: <a href=https://github.com/tinymce/tinymce-dist/commit/7fcdd149d2e2f6013c78a965b8fab2bbe011de4f>WS-2020-0008</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.6</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-16082">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-16082</a></p>
<p>Release Date: 2019-12-11</p>
<p>Fix Resolution: 4.9.7,5.1.4</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> WS-2021-0001</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A regex denial of service (ReDoS) vulnerability was discovered in a dependency of the codesample plugin. The vulnerability allowed poorly formed ruby code samples to lock up the browser while performing syntax highlighting. This impacts users of the codesample plugin using TinyMCE 5.5.1 or lower.
This vulnerability has been patched in TinyMCE 5.6.0 by upgrading to a version of the dependency without the vulnerability.
<p>Publish Date: 2021-01-05
<p>URL: <a href=https://github.com/tinymce/tinymce/security/advisories/GHSA-h96f-fc7c-9r55>WS-2021-0001</a></p>
</p>
<p></p>
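The class of bug can be reproduced without TinyMCE at all. The pattern below is illustrative only (it is not the actual regex from the codesample plugin's bundled highlighter); nested quantifiers such as `(a+)+` make match time grow roughly exponentially when the input almost-but-not-quite matches:

```javascript
// Illustrative catastrophic-backtracking regex (not the real plugin pattern).
// The trailing 'b' guarantees failure, forcing the engine to try every way
// of splitting the run of 'a's between the inner and outer quantifier.
const evil = /^(a+)+$/;

function timeMatch(n) {
  const input = 'a'.repeat(n) + 'b';
  const start = process.hrtime.bigint();
  const matched = evil.test(input);
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  return { matched, elapsedMs };
}

// Work roughly doubles per extra character; even n in the low twenties is
// noticeably slow, and a real-world code sample can lock up the tab entirely.
const results = [10, 14, 18].map(timeMatch);
```

As the advisory notes, TinyMCE 5.6.0 resolves this by upgrading the bundled highlighter dependency rather than by changing editor code.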
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tinymce/tinymce/security/advisories/GHSA-h96f-fc7c-9r55">https://github.com/tinymce/tinymce/security/advisories/GHSA-h96f-fc7c-9r55</a></p>
<p>Release Date: 2021-01-05</p>
<p>Fix Resolution: tinymce - 5.6.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2021-0406</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A cross-site scripting (XSS) vulnerability was discovered in the schema validation logic of the core parser. The vulnerability allowed arbitrary JavaScript execution when inserting a specially crafted piece of content into the editor using the clipboard or editor APIs. This malicious content could then end up in content published outside the editor, if no server-side sanitization was performed. This impacts all users who are using TinyMCE 5.8.2 or lower.
<p>Publish Date: 2021-10-22
<p>URL: <a href=https://github.com/advisories/GHSA-5h9g-x5rv-25wg>WS-2021-0406</a></p>
</p>
<p></p>
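The advisory notes the payload only becomes dangerous "if no server-side sanitization was performed". A minimal sketch of that server-side defence (an illustrative allowlist filter of our own, not TinyMCE code; use a maintained sanitizer such as DOMPurify in production):

```javascript
// Drop every tag that is not in a small allowlist, and strip all attributes,
// so event handlers like onerror/onclick can never survive. This naive regex
// approach is a sketch only -- real sanitizers parse the HTML properly.
function sanitize(html) {
  const allowed = new Set(['b', 'i', 'em', 'strong', 'p', 'br']);
  return html.replace(/<\/?([a-zA-Z0-9]+)[^>]*>/g, (tag, name) =>
    allowed.has(name.toLowerCase())
      ? `<${tag.startsWith('</') ? '/' : ''}${name.toLowerCase()}>`
      : ''
  );
}

console.log(sanitize('<p onclick="steal()">hi</p><script>alert(1)</script>'));
// -> "<p>hi</p>alert(1)"
```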
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-5h9g-x5rv-25wg">https://github.com/advisories/GHSA-5h9g-x5rv-25wg</a></p>
<p>Release Date: 2021-10-22</p>
<p>Fix Resolution: TinyMCE - 5.9.0, tinymce - 5.9.0, tinymce/tinymce - 5.9.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-1010091</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
tinymce 4.7.11 and 4.7.12 are affected by CWE-79 (Improper Neutralization of Input During Web Page Generation). The impact is JavaScript code execution. The affected component is the media element; the attack requires the victim to paste malicious content into the media element's embed tab.

<p>Publish Date: 2019-07-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-1010091>CVE-2019-1010091</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-c78w-2gw7-gjv3">https://github.com/advisories/GHSA-c78w-2gw7-gjv3</a></p>
<p>Release Date: 2019-07-17</p>
<p>Fix Resolution: tinymce - 4.9.10, 5.2.2</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-12648</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A cross-site scripting (XSS) vulnerability in TinyMCE 5.2.1 and earlier allows remote attackers to inject arbitrary web script when configured in classic editing mode.
<p>Publish Date: 2020-08-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-12648>CVE-2020-12648</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12648">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12648</a></p>
<p>Release Date: 2020-08-17</p>
<p>Fix Resolution: 4.9.11,5.4.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2021-0133</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A cross-site scripting (XSS) vulnerability was found in TinyMCE before 5.7.1, in the URL sanitization logic of the core parser for form elements. It allowed arbitrary JavaScript execution when a specially crafted piece of content was inserted into the editor using the clipboard or APIs and the form was then submitted. However, as TinyMCE does not allow forms to be submitted while editing, the vulnerability could only be triggered when the content was previewed or rendered outside of the editor.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://github.com/tinymce/tinymce/commit/09bfb1dcb176611d22a477666d8cea72cd14c3fe>WS-2021-0133</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-5vm8-hhgr-jcjp">https://github.com/advisories/GHSA-5vm8-hhgr-jcjp</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: tinymce - 5.7.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2020-0142</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A cross-site scripting (XSS) vulnerability was discovered in the core parser. The vulnerability allowed arbitrary JavaScript execution when inserting a specially crafted piece of content into the editor via the clipboard or APIs. This impacts all users who are using TinyMCE 4.9.10 or lower and TinyMCE 5.4.0 or lower.
<p>Publish Date: 2020-08-11
<p>URL: <a href=https://github.com/advisories/GHSA-vrv8-v4w8-f95h>WS-2020-0142</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-vrv8-v4w8-f95h">https://github.com/advisories/GHSA-vrv8-v4w8-f95h</a></p>
<p>Release Date: 2020-08-11</p>
<p>Fix Resolution: tinymce - 5.4.1, 4.9.11</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2021-0025</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A cross-site scripting (XSS) vulnerability was discovered in the URL sanitization logic of the core parser of TinyMCE. The vulnerability allowed arbitrary JavaScript execution when inserting a specially crafted piece of content into the editor using the clipboard or APIs. This impacts all users who are using TinyMCE 5.5.1 or lower.
The issue has been fixed in TinyMCE 5.6.0.
<p>Publish Date: 2021-02-19
<p>URL: <a href=https://github.com/advisories/GHSA-w7jx-j77m-wp65>WS-2021-0025</a></p>
</p>
<p></p>
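The flaw was in URL sanitization. A sketch of the kind of scheme allowlisting a patched sanitizer performs (our own simplified version, not TinyMCE's actual logic):

```javascript
// Only allow a known-safe set of schemes, so "javascript:" payloads can
// never become a live link or image source.
function isSafeUrl(url) {
  const safe = ['http:', 'https:', 'mailto:'];
  try {
    // The WHATWG URL parser strips tabs and newlines from its input, which
    // defeats obfuscation tricks like "java\tscript:alert(1)".
    return safe.includes(new URL(url, 'https://example.com/').protocol);
  } catch {
    return false; // unparseable URLs are rejected outright
  }
}

console.log(isSafeUrl('https://example.com/a.png')); // true
console.log(isSafeUrl('javascript:alert(1)'));       // false
```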
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-w7jx-j77m-wp65">https://github.com/advisories/GHSA-w7jx-j77m-wp65</a></p>
<p>Release Date: 2021-02-19</p>
<p>Fix Resolution: tinymce - 5.6.0</p>
</p>
<p></p>
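Since the library is pulled straight from cdnjs (see the library home page above), applying a fix resolution is a one-line change to the pinned build; the vendored copy at /taikhoan/script/tinymce/tinymce.min.js must be replaced with the same fixed build. A sketch (5.6.0 is the version named in this fix resolution; prefer the newest release that clears every advisory in this report, and verify the exact path exists on cdnjs):

```html
<!-- before: the vulnerable 4.3.3 build -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js"></script>

<!-- after: a fixed release -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/tinymce/5.6.0/tinymce.min.js"></script>
```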
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2021-0413</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A cross-site scripting (XSS) vulnerability was discovered in the URL processing logic of the image and link plugins. The vulnerability allowed arbitrary JavaScript execution when updating an image or link using a specially crafted URL. This issue only impacted users while editing and the dangerous URLs were stripped in any content extracted from the editor. This impacts all users who are using TinyMCE 5.9.2 or lower.
<p>Publish Date: 2021-11-02
<p>URL: <a href=https://github.com/advisories/GHSA-r8hm-w5f7-wj39>WS-2021-0413</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-r8hm-w5f7-wj39">https://github.com/advisories/GHSA-r8hm-w5f7-wj39</a></p>
<p>Release Date: 2021-11-02</p>
<p>Fix Resolution: TinyMCE - 5.10.0, tinymce/tinymce - 5.10.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-17480</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
TinyMCE before 4.9.7 and 5.x before 5.1.4 allows XSS in the core parser, the paste plugin, and the visualchars plugin by using the clipboard or APIs to insert content into the editor.
<p>Publish Date: 2020-08-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-17480>CVE-2020-17480</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-08-11</p>
<p>Fix Resolution: 4.9.7, 5.1.4</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details> | True | tinymce-4.3.3.min.js: 10 vulnerabilities (highest severity is: 7.6) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p></summary>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (tinymce version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [WS-2020-0008](https://github.com/tinymce/tinymce-dist/commit/7fcdd149d2e2f6013c78a965b8fab2bbe011de4f) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.6 | tinymce-4.3.3.min.js | Direct | 4.9.7,5.1.4 | ❌ |
| [WS-2021-0001](https://github.com/tinymce/tinymce/security/advisories/GHSA-h96f-fc7c-9r55) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tinymce-4.3.3.min.js | Direct | tinymce - 5.6.0 | ❌ |
| [WS-2021-0406](https://github.com/advisories/GHSA-5h9g-x5rv-25wg) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | TinyMCE - 5.9.0, tinymce - 5.9.0, tinymce/tinymce - 5.9.0 | ❌ |
| [CVE-2019-1010091](https://www.mend.io/vulnerability-database/CVE-2019-1010091) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | tinymce - 4.9.10, 5.2.2 | ❌ |
| [CVE-2020-12648](https://www.mend.io/vulnerability-database/CVE-2020-12648) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | 4.9.11,5.4.1 | ❌ |
| [WS-2021-0133](https://github.com/tinymce/tinymce/commit/09bfb1dcb176611d22a477666d8cea72cd14c3fe) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | tinymce - 5.7.1 | ❌ |
| [WS-2020-0142](https://github.com/advisories/GHSA-vrv8-v4w8-f95h) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | tinymce - 5.4.1, 4.9.11 | ❌ |
| [WS-2021-0025](https://github.com/advisories/GHSA-w7jx-j77m-wp65) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | tinymce - 5.6.0 | ❌ |
| [WS-2021-0413](https://github.com/advisories/GHSA-r8hm-w5f7-wj39) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | TinyMCE - 5.10.0, tinymce/tinymce - 5.10.0, TinyMCE - 5.10.0 | ❌ |
| [CVE-2020-17480](https://www.mend.io/vulnerability-database/CVE-2020-17480) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tinymce-4.3.3.min.js | Direct | 4.9.7, 5.1.4 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> WS-2020-0008</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A cross-site scripting (XSS) vulnerability was discovered in the core parser, "paste" and "visualchars" plugins.
<p>Publish Date: 2019-12-11
<p>URL: <a href=https://github.com/tinymce/tinymce-dist/commit/7fcdd149d2e2f6013c78a965b8fab2bbe011de4f>WS-2020-0008</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.6</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-16082">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-16082</a></p>
<p>Release Date: 2019-12-11</p>
<p>Fix Resolution: 4.9.7,5.1.4</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> WS-2021-0001</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A regex denial of service (ReDoS) vulnerability was discovered in a dependency of the codesample plugin. The vulnerability allowed poorly formed ruby code samples to lock up the browser while performing syntax highlighting. This impacts users of the codesample plugin using TinyMCE 5.5.1 or lower.
This vulnerability has been patched in TinyMCE 5.6.0 by upgrading to a version of the dependency without the vulnerability.
<p>Publish Date: 2021-01-05
<p>URL: <a href=https://github.com/tinymce/tinymce/security/advisories/GHSA-h96f-fc7c-9r55>WS-2021-0001</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tinymce/tinymce/security/advisories/GHSA-h96f-fc7c-9r55">https://github.com/tinymce/tinymce/security/advisories/GHSA-h96f-fc7c-9r55</a></p>
<p>Release Date: 2021-01-05</p>
<p>Fix Resolution: tinymce - 5.6.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2021-0406</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A cross-site scripting (XSS) vulnerability was discovered in the schema validation logic of the core parser. The vulnerability allowed arbitrary JavaScript execution when inserting a specially crafted piece of content into the editor using the clipboard or editor APIs. This malicious content could then end up in content published outside the editor, if no server-side sanitization was performed. This impacts all users who are using TinyMCE 5.8.2 or lower.
<p>Publish Date: 2021-10-22
<p>URL: <a href=https://github.com/advisories/GHSA-5h9g-x5rv-25wg>WS-2021-0406</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-5h9g-x5rv-25wg">https://github.com/advisories/GHSA-5h9g-x5rv-25wg</a></p>
<p>Release Date: 2021-10-22</p>
<p>Fix Resolution: TinyMCE - 5.9.0, tinymce - 5.9.0, tinymce/tinymce - 5.9.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-1010091</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
tinymce 4.7.11, 4.7.12 is affected by: CWE-79: Improper Neutralization of Input During Web Page Generation. The impact is: JavaScript code execution. The component is: Media element. The attack vector is: The victim must paste malicious content to media element's embed tab.
<p>Publish Date: 2019-07-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-1010091>CVE-2019-1010091</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-c78w-2gw7-gjv3">https://github.com/advisories/GHSA-c78w-2gw7-gjv3</a></p>
<p>Release Date: 2019-07-17</p>
<p>Fix Resolution: tinymce - 4.9.10, 5.2.2</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-12648</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A cross-site scripting (XSS) vulnerability in TinyMCE 5.2.1 and earlier allows remote attackers to inject arbitrary web script when configured in classic editing mode.
<p>Publish Date: 2020-08-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-12648>CVE-2020-12648</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12648">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12648</a></p>
<p>Release Date: 2020-08-17</p>
<p>Fix Resolution: 4.9.11,5.4.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2021-0133</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Cross-site scripting vulnerability was found in TinyMCE before 5.7.1. A cross-site scripting (XSS) vulnerability was discovered in the URL sanitization logic of the core parser for form elements. The vulnerability allowed arbitrary JavaScript execution when inserting a specially crafted piece of content into the editor using the clipboard or APIs, and then submitting the form. However, as TinyMCE does not allow forms to be submitted while editing, the vulnerability could only be triggered when the content was previewed or rendered outside of the editor.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://github.com/tinymce/tinymce/commit/09bfb1dcb176611d22a477666d8cea72cd14c3fe>WS-2021-0133</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-5vm8-hhgr-jcjp">https://github.com/advisories/GHSA-5vm8-hhgr-jcjp</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: tinymce - 5.7.1</p>
</p>
<p></p>
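The advisory above describes a crafted URL whose scheme executes script slipping past sanitization and firing once the content is rendered outside the editor. Independent of upgrading the library, an allowlist check on the server is a common second layer of defence; the sketch below is a minimal illustration (hypothetical helper, not TinyMCE's actual sanitizer, and deliberately ignoring trickier bypasses such as embedded control characters):

```python
from urllib.parse import urlparse

# Schemes considered safe to render; "" covers relative URLs.
ALLOWED_SCHEMES = {"http", "https", "mailto", ""}

def is_safe_url(url: str) -> bool:
    """Reject URLs whose scheme could execute script (e.g. javascript:)."""
    scheme = urlparse(url.strip()).scheme.lower()
    return scheme in ALLOWED_SCHEMES
```

`is_safe_url("javascript:alert(1)")` returns `False`, while ordinary `http(s)`, `mailto`, and relative URLs pass.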
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2020-0142</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A cross-site scripting (XSS) vulnerability was discovered in the core parser. The vulnerability allowed arbitrary JavaScript execution when inserting a specially crafted piece of content into the editor via the clipboard or APIs. This impacts all users who are using TinyMCE 4.9.10 or lower and TinyMCE 5.4.0 or lower.
<p>Publish Date: 2020-08-11
<p>URL: <a href=https://github.com/advisories/GHSA-vrv8-v4w8-f95h>WS-2020-0142</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-vrv8-v4w8-f95h">https://github.com/advisories/GHSA-vrv8-v4w8-f95h</a></p>
<p>Release Date: 2020-08-11</p>
<p>Fix Resolution: tinymce - 5.4.1, 4.9.11</p>
</p>
<p></p>
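The fix line above spans two release branches: 4.9.11 closes the 4.x series and 5.4.1 the 5.x series, so the vulnerable 4.3.3 copy in this repo has a same-major upgrade available. A sketch of how a remediation tool might pick the nearest fixed version (simple dotted numeric versions only, not full semver; the helper names are mine):

```python
def parse(v):
    # "4.9.11" -> (4, 9, 11); tuples compare component-wise.
    return tuple(int(p) for p in v.split("."))

def first_fix(installed, fixed):
    """Return the lowest fixed version that is an upgrade from `installed`."""
    inst = parse(installed)
    for candidate in sorted(parse(f) for f in fixed):
        if candidate > inst:
            return ".".join(map(str, candidate))
    return None  # already at or past every fixed version
```

`first_fix("4.3.3", ["5.4.1", "4.9.11"])` returns `"4.9.11"`, the minimal jump off the vulnerable release.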
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2021-0025</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A cross-site scripting (XSS) vulnerability was discovered in the URL sanitization logic of the core parser of TinyMCE. The vulnerability allowed arbitrary JavaScript execution when inserting a specially crafted piece of content into the editor using the clipboard or APIs. This impacts all users who are using TinyMCE 5.5.1 or lower.
The issue has been fixed in TinyMCE 5.6.0.
<p>Publish Date: 2021-02-19
<p>URL: <a href=https://github.com/advisories/GHSA-w7jx-j77m-wp65>WS-2021-0025</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-w7jx-j77m-wp65">https://github.com/advisories/GHSA-w7jx-j77m-wp65</a></p>
<p>Release Date: 2021-02-19</p>
<p>Fix Resolution: tinymce - 5.6.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2021-0413</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A cross-site scripting (XSS) vulnerability was discovered in the URL processing logic of the image and link plugins. The vulnerability allowed arbitrary JavaScript execution when updating an image or link using a specially crafted URL. This issue only impacted users while editing and the dangerous URLs were stripped in any content extracted from the editor. This impacts all users who are using TinyMCE 5.9.2 or lower.
<p>Publish Date: 2021-11-02
<p>URL: <a href=https://github.com/advisories/GHSA-r8hm-w5f7-wj39>WS-2021-0413</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-r8hm-w5f7-wj39">https://github.com/advisories/GHSA-r8hm-w5f7-wj39</a></p>
<p>Release Date: 2021-11-02</p>
<p>Fix Resolution: TinyMCE - 5.10.0, tinymce/tinymce - 5.10.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-17480</summary>
### Vulnerable Library - <b>tinymce-4.3.3.min.js</b></p>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.3.3/tinymce.min.js</a></p>
<p>Path to vulnerable library: /taikhoan/script/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.3.3.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/SilkRoad/commit/412334ad3a52dadfe1cc672e6d4f37cd4b113988">412334ad3a52dadfe1cc672e6d4f37cd4b113988</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
TinyMCE before 4.9.7 and 5.x before 5.1.4 allows XSS in the core parser, the paste plugin, and the visualchars plugin by using the clipboard or APIs to insert content into the editor.
<p>Publish Date: 2020-08-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-17480>CVE-2020-17480</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-08-11</p>
<p>Fix Resolution: 4.9.7, 5.1.4</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details> | non_test | 0 |
7,364 | 9,606,780,161 | IssuesEvent | 2019-05-11 13:32:10 | JonnyNova/Rimworld-Shields | https://api.github.com/repos/JonnyNova/Rimworld-Shields | opened | Combat Extended enhancements | compatibility enhancement | - [ ] Simulate mortar explosion by blocking fragments
- [ ] Apply secondary damage types to shields | True | non_test | 0 |
131,738 | 28,015,098,801 | IssuesEvent | 2023-03-27 21:52:55 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | closed | Inconsistency in "ammonites" | Priority-Normal (Not urgent) Function-CodeTables CodeTableCleanup | UAM has ammonites cataloged with the part name "exoskeleton" and DMNS has used "shell (fossil)".
As I am about to migrate in a bunch from NMMNH, I'd like to see if we can make a recommendation and perhaps more consistent use across Arctos. What does everyone prefer?
@Nicole-Ridgwell-NMMNHS @dperriguey @sharpphyl | 2.0 | non_test | 0 |
114,533 | 9,741,910,592 | IssuesEvent | 2019-06-02 12:59:31 | aiidateam/aiida_core | https://api.github.com/repos/aiidateam/aiida_core | closed | Random failing tests `verdi quicksetup` on SqlAlchemy python 3 | aiida-core 1.x priority/quality-of-life topic/testing topic/verdi type/bug | I have noticed this test failing multiple times the recent days, always on python 3 and SqlAlchemy. Not sure where it is coming from:
```
======================================================================
FAIL: test_quicksetup_from_config_file (aiida.backends.tests.cmdline.commands.test_setup.TestVerdiSetup)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/build/aiidateam/aiida_core/aiida/backends/tests/utils/configuration.py", line 105, in decorated_function
function(*args, **kwargs)
File "/home/travis/build/aiidateam/aiida_core/aiida/backends/tests/cmdline/commands/test_setup.py", line 90, in test_quicksetup_from_config_file
self.assertClickResultNoException(result)
File "/home/travis/build/aiidateam/aiida_core/aiida/backends/testbase.py", line 187, in assertClickResultNoException
self.assertIsNone(cli_result.exception, ''.join(traceback.format_exception(*cli_result.exc_info)))
AssertionError: SystemExit(<ExitCode.CRITICAL: 1>,) is not None : Traceback (most recent call last):
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
self.connect()
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/db/backends/base/base.py", line 189, in connect
self.connection = self.get_new_connection(conn_params)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 176, in get_new_connection
connection = Database.connect(**conn_params)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/psycopg2/__init__.py", line 126, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: FATAL: role "user" does not exist
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/travis/build/aiidateam/aiida_core/aiida/cmdline/commands/cmd_setup.py", line 72, in setup
backend.migrate()
File "/home/travis/build/aiidateam/aiida_core/aiida/orm/implementation/django/backend.py", line 53, in migrate
migrate_database()
File "/home/travis/build/aiidateam/aiida_core/aiida/backends/djsite/utils.py", line 54, in migrate_database
call_command('migrate')
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/core/management/__init__.py", line 131, in call_command
return command.execute(*args, **defaults)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/core/management/base.py", line 330, in execute
output = self.handle(*args, **options)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/core/management/commands/migrate.py", line 83, in handle
executor = MigrationExecutor(connection, self.migration_progress_callback)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/db/migrations/executor.py", line 20, in __init__
self.loader = MigrationLoader(self.connection)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/db/migrations/loader.py", line 52, in __init__
self.build_graph()
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/db/migrations/loader.py", line 210, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/db/migrations/recorder.py", line 65, in applied_migrations
self.ensure_schema()
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/db/migrations/recorder.py", line 52, in ensure_schema
if self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor()):
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/db/backends/base/base.py", line 254, in cursor
return self._cursor()
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/db/backends/base/base.py", line 229, in _cursor
self.ensure_connection()
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
self.connect()
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
self.connect()
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/db/backends/base/base.py", line 189, in connect
self.connection = self.get_new_connection(conn_params)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 176, in get_new_connection
connection = Database.connect(**conn_params)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/psycopg2/__init__.py", line 126, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: FATAL: role "user" does not exist
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/click/testing.py", line 326, in invoke
cli.main(args=args or (), prog_name=prog_name, **extra)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/travis/build/aiidateam/aiida_core/aiida/cmdline/commands/cmd_setup.py", line 173, in quicksetup
ctx.invoke(setup, **setup_parameters)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/travis/build/aiidateam/aiida_core/aiida/cmdline/commands/cmd_setup.py", line 75, in setup
'database migration failed, probably because connection details are incorrect:\n{}'.format(exception))
File "/home/travis/build/aiidateam/aiida_core/aiida/cmdline/utils/echo.py", line 114, in echo_critical
sys.exit(ExitCode.CRITICAL)
SystemExit: ExitCode.CRITICAL
``` | 1.0 |
django.db.utils.OperationalError: FATAL: role "user" does not exist
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/click/testing.py", line 326, in invoke
cli.main(args=args or (), prog_name=prog_name, **extra)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/travis/build/aiidateam/aiida_core/aiida/cmdline/commands/cmd_setup.py", line 173, in quicksetup
ctx.invoke(setup, **setup_parameters)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/travis/build/aiidateam/aiida_core/aiida/cmdline/commands/cmd_setup.py", line 75, in setup
'database migration failed, probably because connection details are incorrect:\n{}'.format(exception))
File "/home/travis/build/aiidateam/aiida_core/aiida/cmdline/utils/echo.py", line 114, in echo_critical
sys.exit(ExitCode.CRITICAL)
SystemExit: ExitCode.CRITICAL
``` | test | 1
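The two stacked tracebacks above are Python exception chaining: Django catches psycopg2's `OperationalError` and re-raises its own, which sets `__cause__` and makes the interpreter print the "direct cause" line. A minimal sketch of that mechanism, using stand-in exception classes rather than the real psycopg2/Django types:

```python
class Psycopg2OperationalError(Exception):
    """Stand-in for psycopg2.OperationalError."""

class DjangoOperationalError(Exception):
    """Stand-in for django.db.utils.OperationalError."""

def connect():
    # Stand-in for psycopg2.connect() failing on a missing role.
    raise Psycopg2OperationalError('FATAL:  role "user" does not exist')

def ensure_connection():
    try:
        connect()
    except Psycopg2OperationalError as exc:
        # `raise ... from exc` sets __cause__, which is rendered as
        # "The above exception was the direct cause of the following
        # exception:" in the traceback.
        raise DjangoOperationalError(str(exc)) from exc

try:
    ensure_connection()
except DjangoOperationalError as err:
    # The original low-level error stays reachable on __cause__.
    assert isinstance(err.__cause__, Psycopg2OperationalError)
```

This is why the log shows the same `FATAL: role "user" does not exist` message twice: once on the psycopg2 exception and once on the Django wrapper that carries it.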
7,998 | 2,611,071,530 | IssuesEvent | 2015-02-27 00:33:23 | alistairreilly/andors-trail | https://api.github.com/repos/alistairreilly/andors-trail | opened | French translations update | auto-migrated Type-Defect | ```
Hi,
I have updated strings.xml and strings_about.xml in French.
They are in branch "french_translations" of repository
"https://code.google.com/r/marwaneka-andors-trail/". Alternately here is the
link if you want to pick the files directly:
https://code.google.com/r/marwaneka-andors-trail/source/browse?name=french_translations#git%2FAndorsTrail%2Fres%2Fvalues-fr
Thanks
```
Original issue reported on code.google.com by `marwane...@gmail.com` on 21 Oct 2013 at 3:12 | 1.0 | non_test | 0
77,038 | 21,657,828,803 | IssuesEvent | 2022-05-06 15:46:40 | OpenLiberty/space-rover-mission | https://api.github.com/repos/OpenLiberty/space-rover-mission | opened | Create new video of Space Rover Mission Build 2 | documentation build #2 | Need to create a video of the gameplay of the Space Rover Mission Build 2
- The video should highlight the technologies used, as well as how to play the game. | 1.0 | non_test | 0
298,597 | 22,535,319,359 | IssuesEvent | 2022-06-25 05:56:54 | Rust-Data-Science/ulist | https://api.github.com/repos/Rust-Data-Science/ulist | closed | Fix and update developer guide. | documentation | The developer guide link is dead, and also the content is outdated. | 1.0 | non_test | 0
131,457 | 12,485,062,190 | IssuesEvent | 2020-05-30 17:47:19 | spectrochempy/spectrochempy | https://api.github.com/repos/spectrochempy/spectrochempy | closed | Fix doc RST syntax | bug documentation | Author: @fernandezc (Christian Fernandez )
Redmine Issue: 4, https://redmine.spectrochempy.fr/issues/4
---
Some errors during building of the doc need to be corrected before releasing 0.1.17
``` shell
looking for now-outdated files... none found
pickling environment... /Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/mcrals.py:docstring of spectrochempy.MCRALS:46: WARNING: Bullet list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/mcrals.py:docstring of spectrochempy.MCRALS:48: WARNING: Unexpected indentation.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/mcrals.py:docstring of spectrochempy.MCRALS:50: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/mcrals.py:docstring of spectrochempy.MCRALS:51: WARNING: Inline emphasis start-string without end-string.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/mcrals.py:docstring of spectrochempy.MCRALS:51: WARNING: Inline emphasis start-string without end-string.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/mcrals.py:docstring of spectrochempy.MCRALS:54: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/mcrals.py:docstring of spectrochempy.MCRALS:54: WARNING: Inline emphasis start-string without end-string.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/mcrals.py:docstring of spectrochempy.MCRALS:59: WARNING: Bullet list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/mcrals.py:docstring of spectrochempy.MCRALS:61: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/nddataset.py:docstring of spectrochempy.NDDataset.detrend:6: WARNING: Unexpected indentation.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/nddataset.py:docstring of spectrochempy.NDDataset.diag:1: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/nddataset.py:docstring of spectrochempy.NDDataset.diag:15: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/nddataset.py:docstring of spectrochempy.NDDataset.detrend:22: WARNING: Unexpected section title.
Returns
-------
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/nddataset.py:docstring of spectrochempy.NDDataset.diag:19: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/nddataset.py:docstring of spectrochempy.NDDataset.detrend:26: WARNING: Unexpected section title.
Examples
--------
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/nddataset.py:docstring of spectrochempy.NDDataset.fft:52: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/nddataset.py:docstring of spectrochempy.NDDataset.find_peaks:8: WARNING: Unexpected indentation.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/nddataset.py:docstring of spectrochempy.NDDataset.find_peaks:51: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/nddataset.py:docstring of spectrochempy.NDDataset.find_peaks:61: WARNING: Unexpected indentation.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/nddataset.py:docstring of spectrochempy.NDDataset.find_peaks:104: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/nddataset.py:docstring of spectrochempy.NDDataset.find_peaks:118: WARNING: Bullet list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/nddataset.py:docstring of spectrochempy.NDDataset.read_matlab:24: WARNING: Inline literal start-string without end-string.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/nddataset.py:docstring of spectrochempy.NDDataset.set_complex:20: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/nddataset.py:docstring of spectrochempy.NDDataset.savgol_filter:67: WARNING: Unexpected indentation.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/ndpanel.py:docstring of spectrochempy.NDPanel.detrend:6: WARNING: Unexpected indentation.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/ndpanel.py:docstring of spectrochempy.NDPanel.diag:1: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/ndpanel.py:docstring of spectrochempy.NDPanel.diag:15: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/ndpanel.py:docstring of spectrochempy.NDPanel.detrend:22: WARNING: Unexpected section title.
Returns
-------
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/ndpanel.py:docstring of spectrochempy.NDPanel.diag:19: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/ndpanel.py:docstring of spectrochempy.NDPanel.detrend:26: WARNING: Unexpected section title.
Examples
--------
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/ndpanel.py:docstring of spectrochempy.NDPanel.fft:52: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/ndpanel.py:docstring of spectrochempy.NDPanel.find_peaks:8: WARNING: Unexpected indentation.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/ndpanel.py:docstring of spectrochempy.NDPanel.find_peaks:51: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/ndpanel.py:docstring of spectrochempy.NDPanel.find_peaks:61: WARNING: Unexpected indentation.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/ndpanel.py:docstring of spectrochempy.NDPanel.find_peaks:104: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/ndpanel.py:docstring of spectrochempy.NDPanel.find_peaks:118: WARNING: Bullet list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/ndpanel.py:docstring of spectrochempy.NDPanel.read_matlab:24: WARNING: Inline literal start-string without end-string.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/ndpanel.py:docstring of spectrochempy.NDPanel.set_complex:20: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/dataset/ndpanel.py:docstring of spectrochempy.NDPanel.savgol_filter:67: WARNING: Unexpected indentation.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/simplisma.py:docstring of spectrochempy.SIMPLISMA.reconstruct:6: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/simplisma.py:docstring of spectrochempy.SIMPLISMA.reconstruct:6: WARNING: Unexpected section title or transition.
=======
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/simplisma.py:docstring of spectrochempy.SIMPLISMA.reconstruct:8: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/processors/filter.py:docstring of spectrochempy.detrend:6: WARNING: Unexpected indentation.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/processors/filter.py:docstring of spectrochempy.detrend:7: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/processors/filter.py:docstring of spectrochempy.detrend:21: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/processors/filter.py:docstring of spectrochempy.detrend:22: WARNING: Unexpected section title.
Returns
-------
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/processors/filter.py:docstring of spectrochempy.detrend:25: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/processors/filter.py:docstring of spectrochempy.detrend:26: WARNING: Unexpected section title.
Examples
--------
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/processors/fft.py:docstring of spectrochempy.fft:10: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/peakfinding.py:docstring of spectrochempy.find_peaks:8: WARNING: Unexpected indentation.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/peakfinding.py:docstring of spectrochempy.find_peaks:9: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/peakfinding.py:docstring of spectrochempy.find_peaks:61: WARNING: Unexpected indentation.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/peakfinding.py:docstring of spectrochempy.find_peaks:62: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/peakfinding.py:docstring of spectrochempy.find_peaks:76: WARNING: Bullet list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/cantera_utilities.py:docstring of spectrochempy.fit_to_concentrations:2: WARNING: Inline strong start-string without end-string.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/readers/readjdx.py:docstring of spectrochempy.read_jdx:25: WARNING: Inline literal start-string without end-string.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/processors/filter.py:docstring of spectrochempy.savgol_filter:64: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/processors/filter.py:docstring of spectrochempy.savgol_filter:67: WARNING: Unexpected indentation.
/Users/christian/Dropbox/SCP/spectrochempy/docs/user/tutorials/index.rst:28: WARNING: Unexpected indentation.
done
checking consistency... done
preparing documents... done
writing output... [ 0%] api/generated/index
/Users/christian/Dropbox/SCP/spectrochempy/docs/user/tutorials/nmr/1_nmr.ipynb: WARNING: document isn't included in any toctree
/Users/christian/Dropbox/SCP/spectrochempy/docs/user/tutorials/nmr/2_isotope_database.ipynb: WARNING: document isn't included in any toctree
```
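Most of these warnings ("Bullet list ends without a blank line; unexpected unindent", "Unexpected indentation") come from missing blank lines around lists in the numpydoc docstrings. A sketch of the broken pattern and its fix, using an illustrative docstring fragment rather than the actual MCRALS source:

```python
# reStructuredText requires a blank line before and after a bullet
# list; without it Sphinx reports "Bullet list ends without a blank
# line; unexpected unindent". Illustrative parameter entries only.
broken = """\
tol : float
    Convergence criterion. Options are:
    * 'rel' : relative change
    * 'abs' : absolute change
verbose : bool
"""

fixed = """\
tol : float
    Convergence criterion. Options are:

    * 'rel' : relative change
    * 'abs' : absolute change

verbose : bool
"""

def has_unseparated_list(text):
    """Flag a bullet item directly preceded by non-blank, non-bullet text."""
    lines = text.splitlines()
    for prev, cur in zip(lines, lines[1:]):
        if (cur.lstrip().startswith("* ")
                and prev.strip()
                and not prev.lstrip().startswith("* ")):
            return True
    return False
```

A quick checker like `has_unseparated_list` returns `True` for the broken fragment and `False` for the fixed one, which is one way to sweep the docstrings for this class of warning before rerunning the Sphinx build.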
| 1.0
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/analysis/cantera_utilities.py:docstring of spectrochempy.fit_to_concentrations:2: WARNING: Inline strong start-string without end-string.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/readers/readjdx.py:docstring of spectrochempy.read_jdx:25: WARNING: Inline literal start-string without end-string.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/processors/filter.py:docstring of spectrochempy.savgol_filter:64: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/christian/Dropbox/SCP/spectrochempy/spectrochempy/core/processors/filter.py:docstring of spectrochempy.savgol_filter:67: WARNING: Unexpected indentation.
/Users/christian/Dropbox/SCP/spectrochempy/docs/user/tutorials/index.rst:28: WARNING: Unexpected indentation.
done
checking consistency... done
preparing documents... done
writing output... [ 0%] api/generated/index
/Users/christian/Dropbox/SCP/spectrochempy/docs/user/tutorials/nmr/1_nmr.ipynb: WARNING: document isn't included in any toctree
/Users/christian/Dropbox/SCP/spectrochempy/docs/user/tutorials/nmr/2_isotope_database.ipynb: WARNING: document isn't included in any toctree
```
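Most of the warnings above are the same few reST slips repeated across docstrings: a missing blank line before or after an indented block, or a section underline appearing where docutils does not expect a title. A minimal hypothetical fragment (not taken from spectrochempy itself) illustrating the most frequent one, "Bullet list ends without a blank line; unexpected unindent", and its fix:

```rst
Broken: the dedented line touches the list, so docutils warns.

- first item
- second item
back to normal text

Fixed: a blank line closes the list before prose resumes.

- first item
- second item

back to normal text
```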
| non_test | 0
341,039 | 30,564,338,349 | IssuesEvent | 2023-07-20 16:36:14 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: splits/load/ycsb/e/nodes=3/obj=cpu failed | C-test-failure O-robot O-roachtest A-kv branch-master T-kv X-infra-flake | roachtest.splits/load/ycsb/e/nodes=3/obj=cpu [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10967169?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10967169?buildTab=artifacts#/splits/load/ycsb/e/nodes=3/obj=cpu) on master @ [06b5ba7888339ed53477ff73f9f3ae77a708ee17](https://github.com/cockroachdb/cockroach/commits/06b5ba7888339ed53477ff73f9f3ae77a708ee17):
```
(monitor.go:137).Wait: monitor failure: monitor task failed: 14 splits, expected between 3 and 13 splits (ranges 16 initial 2)
test artifacts and logs in: /artifacts/splits/load/ycsb/e/nodes=3/obj=cpu/run_1
```
<p>Parameters: <code>ROACHTEST_arch=amd64</code>
, <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/kv-triage
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*splits/load/ycsb/e/nodes=3/obj=cpu.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-29916 | 2.0 | test | 1
130,764 | 10,652,478,342 | IssuesEvent | 2019-10-17 12:42:21 | microsoft/appcenter | https://api.github.com/repos/microsoft/appcenter | reopened | UITest Android 10 cannot restart app or call backdoors after phone contact picker is opened | test | In Test Cloud, after opening the phone contact picker from the app that is tested, I need to restart the app because that is the only way UITest lets me exit the phone contact picker.
Restart works on iOS 10 - 12 and Android 6 - 9 when the contact picker is open, but not on Android 10 (Google Pixel 3a).
Please fix this - **or even better, provide a way to return to the app after system UI like the contact picker was opened**.
On Android 10 app restart fails, and backdoor methods also fail. All subsequent tests in the test run fail and the contact picker remains open:

A workaround is to ensure that this test is last in a test run. However that won't work when you have more than one test of this type.
When restarting the app, the UITest log says:
```
1) Error : Wsp.Anywhere365.Bridg.UITests.Tests.XDialerTests.CallContacts
System.Net.WebException : POST Failed
TearDown : System.Net.WebException : POST Failed
at Xamarin.UITest.Shared.Http.HttpClient.HandleHttpError (System.String method, System.Exception exception, Xamarin.UITest.Shared.Http.ExceptionPolicy exceptionPolicy) [0x0003c] in <a0c3f09cbf9049cbb8d3a680a53dcf46>:0
at Xamarin.UITest.Shared.Http.HttpClient.SendData (System.String endpoint, System.String method, System.Net.Http.HttpContent content, Xamarin.UITest.Shared.Http.ExceptionPolicy exceptionPolicy, System.Nullable`1[T] timeOut) [0x0013d] in <a0c3f09cbf9049cbb8d3a680a53dcf46>:0
at Xamarin.UITest.Shared.Http.HttpClient.Post (System.String endpoint, System.String arguments, Xamarin.UITest.Shared.Http.ExceptionPolicy exceptionPolicy, System.Nullable`1[T] timeOut) [0x00014] in <a0c3f09cbf9049cbb8d3a680a53dcf46>:0
at Xamarin.UITest.Shared.Android.HttpApplicationStarter.Execute (System.String intentJson) [0x00035] in <a0c3f09cbf9049cbb8d3a680a53dcf46>:0
at Xamarin.UITest.Shared.Android.AndroidAppLifeCycle.LaunchApp (System.String appPackageName, Xamarin.UITest.Shared.Android.ApkFile testServerApkFile, System.Int32 testServerPort) [0x000a1] in <a0c3f09cbf9049cbb8d3a680a53dcf46>:0
at Xamarin.UITest.Shared.Android.AndroidAppLifeCycle.LaunchApp (Xamarin.UITest.Shared.Android.ApkFile appApkFile, Xamarin.UITest.Shared.Android.ApkFile testServerApkFile, System.Int32 testServerPort) [0x00007] in <a0c3f09cbf9049cbb8d3a680a53dcf46>:0
at Xamarin.UITest.Android.AndroidApp..ctor (Xamarin.UITest.Configuration.IAndroidAppConfiguration appConfiguration, Xamarin.UITest.Shared.Execution.IExecutor executor) [0x00193] in <a0c3f09cbf9049cbb8d3a680a53dcf46>:0
at Xamarin.UITest.Android.AndroidApp..ctor (Xamarin.UITest.Configuration.IAndroidAppConfiguration appConfiguration) [0x00000] in <a0c3f09cbf9049cbb8d3a680a53dcf46>:0
at Xamarin.UITest.Configuration.AndroidAppConfigurator.StartApp (Xamarin.UITest.Configuration.AppDataMode appDataMode) [0x00017] in <a0c3f09cbf9049cbb8d3a680a53dcf46>:0
at Wsp.Anywhere365.Bridg.UITests.AppInitializer.ConfigureAppForTestCloud (System.Boolean clearData) [0x00013] in <6d75cad6c27f4787b545d5589cde6739>:0
at Wsp.Anywhere365.Bridg.UITests.AppInitializer.StartApp (Xamarin.UITest.Platform platform, System.Boolean clearData, System.Boolean isTestSessionStart) [0x00017] in <6d75cad6c27f4787b545d5589cde6739>:0
at Wsp.Anywhere365.Bridg.UITests.Tests.BaseTests.RestartApp (System.Boolean clearData) [0x00001] in <6d75cad6c27f4787b545d5589cde6739>:0
at Wsp.Anywhere365.Bridg.UITests.Tests.XDialerTests.CallContacts () [0x00025] in <6d75cad6c27f4787b545d5589cde6739>:0
```
When calling a logging backdoor function, the device log shows `java.util.EmptyStackException at java.util.Stack.peek(Stack.java:102)`:
```
I/System.out(11890): URI: /backdoor
I/System.out(11890): params: {json={"method_name":"TraceOnDevice","arguments":["16-42-2019 02:42:32.324 +00:00 RestartApp: Restarting... at Bridg.UITests/Tests/BaseTests.cs:32"]}
I/System.out(11890): }
W/System.err(11890): java.util.EmptyStackException
W/System.err(11890): at java.util.Stack.peek(Stack.java:102)
W/System.err(11890): at sh.calaba.instrumentationbackend.CalabashInstrumentation.getLastActivity(CalabashInstrumentation.java:38)
W/System.err(11890): at sh.calaba.instrumentationbackend.entrypoint.ApplicationUnderTestInstrumentation.getCurrentActivity(ApplicationUnderTestInstrumentation.java:31)
W/System.err(11890): at sh.calaba.instrumentationbackend.entrypoint.ApplicationUnderTestInstrumentation.getApplication(ApplicationUnderTestInstrumentation.java:20)
W/System.err(11890): at sh.calaba.instrumentationbackend.automation.CalabashAutomationEmbedded.getCurrentApplication(CalabashAutomationEmbedded.java:27)
W/System.err(11890): at sh.calaba.instrumentationbackend.actions.HttpServer.serve(HttpServer.java:392)
W/System.err(11890): at sh.calaba.instrumentationbackend.actions.NanoHTTPD$HTTPSession.run(NanoHTTPD.java:487)
W/System.err(11890): at java.lang.Thread.run(Thread.java:919)
``` | 1.0 | test | 1
22,527 | 15,241,499,042 | IssuesEvent | 2021-02-19 08:32:11 | ZcashFoundation/zebra | https://api.github.com/repos/ZcashFoundation/zebra | opened | Deploy mainnet nodes: resource is not ready | A-github-actions A-infrastructure C-bug I-integration-fail P-Medium S-needs-triage | ## Motivation
There was a single mainnet deployment failure with the error:
> The resource 'projects/zealous-zebra/regions/us-east1/instanceGroupManagers/zebrad-main' is not ready
This error happened when two PRs were merged less than a minute apart.
It seems like a rare situation, so I'm marking this `P-Medium`.
## Solution
We should skip the mainnet deploy job if it is already running.
## Logs
```
Run gcloud compute instance-groups managed rolling-action start-update \
ERROR: (gcloud.compute.instance-groups.managed.rolling-action.start-update) Could not fetch resource:
- The resource 'projects/zealous-zebra/regions/us-east1/instanceGroupManagers/zebrad-main' is not ready
``` | 1.0 | Deploy mainnet nodes: resource is not ready - ## Motivation
There was a single mainnet deployment failure with the error:
> The resource 'projects/zealous-zebra/regions/us-east1/instanceGroupManagers/zebrad-main' is not ready
This error happened when two PRs were merged less than a minute apart.
It seems like a rare situation, so I'm marking this `P-Medium`.
## Solution
We should skip the mainnet deploy job if it is already running.
## Logs
```
Run gcloud compute instance-groups managed rolling-action start-update \
ERROR: (gcloud.compute.instance-groups.managed.rolling-action.start-update) Could not fetch resource:
- The resource 'projects/zealous-zebra/regions/us-east1/instanceGroupManagers/zebrad-main' is not ready
``` | non_test | deploy mainnet nodes resource is not ready motivation there was a single mainnet deployment failure with the error the resource projects zealous zebra regions us instancegroupmanagers zebrad main is not ready this error happened when two prs were merged less than a minute apart it seems like a rare situation so i m marking this p medium solution we should skip the mainnet deploy job if it is already running logs run gcloud compute instance groups managed rolling action start update error gcloud compute instance groups managed rolling action start update could not fetch resource the resource projects zealous zebra regions us instancegroupmanagers zebrad main is not ready | 0 |
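The zebra record above proposes skipping the mainnet deploy job when one is already running. As a hedged illustration only (the group name and workflow layout are assumptions on my part, not taken from the actual zealous-zebra workflow), GitHub Actions expresses this with a `concurrency` key; note it queues a second run rather than skipping it outright:

```yaml
# Hypothetical fragment of a deploy workflow -- names are illustrative.
concurrency:
  # One group per workflow+branch: two PRs merged a minute apart share it.
  group: deploy-mainnet-${{ github.ref }}
  # false = the second run waits for the first instead of cancelling it,
  # so the instance group is never updated while it is "not ready".
  cancel-in-progress: false
```

With `cancel-in-progress: true` the superseded run would be cancelled instead, which is closer to literal "skip" semantics for the older deploy.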
42,451 | 9,218,420,721 | IssuesEvent | 2019-03-11 13:22:02 | Women-in-Computing-at-RIT/poll_system_flutter_app | https://api.github.com/repos/Women-in-Computing-at-RIT/poll_system_flutter_app | opened | PastPollsPage.dart | Code Development | - [ ] Get past polls from poll parser and display
- [ ] Uses NavigationWidget.dart for tabs | 1.0 | PastPollsPage.dart - - [ ] Get past polls from poll parser and display
- [ ] Uses NavigationWidget.dart for tabs | non_test | pastpollspage dart get past polls from poll parser and display uses navigationwidget dart for tabs | 0 |
89,971 | 25,939,116,341 | IssuesEvent | 2022-12-16 16:38:03 | TrueBlocks/trueblocks-docker | https://api.github.com/repos/TrueBlocks/trueblocks-docker | closed | Choose consistent convention for tagging releases | enhancement TB-build | In the docker versions, we use `0.40.0-beta` for version tagging.
In the core repo, we use `v0.40.0-beta` for tagging.
I prefer `v0.40.0-beta` format (with the `v`), but it seems counter to the way docker does it.
Choices:
1) leave them different
2) switch to `0.40.0-beta` for all repos
3) switch to `v0.40.0-beta` for all repos
| 1.0 | Choose consistent convention for tagging releases - In the docker versions, we use `0.40.0-beta` for version tagging.
In the core repo, we use `v0.40.0-beta` for tagging.
I prefer `v0.40.0-beta` format (with the `v`), but it seems counter to the way docker does it.
Choices:
1) leave them different
2) switch to `0.40.0-beta` for all repos
3) switch to `v0.40.0-beta` for all repos
| non_test | choose consistent convention for tagging releases in the docker versions we use beta for version tagging in the core repo we use beta for tagging i prefer beta format with the v but it seems counter to the way docker does it choices leave them different switch to beta for all repos switch to beta for all repos | 0 |
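The tagging record above comes down to a single optional `v` prefix (SemVer itself specifies bare `X.Y.Z`, with `v` as a common tag convention). A minimal sketch, my own helper rather than anything from the TrueBlocks repos, of normalizing so the two conventions compare equal:

```python
def normalize_tag(tag: str) -> str:
    """Strip a single leading 'v' so 'v0.40.0-beta' and '0.40.0-beta' compare equal."""
    if tag.startswith("v") and len(tag) > 1 and tag[1].isdigit():
        return tag[1:]
    return tag


def tags_match(a: str, b: str) -> bool:
    """True when two release tags differ only by the 'v' prefix convention."""
    return normalize_tag(a) == normalize_tag(b)
```

Normalizing on read like this makes option 1 ("leave them different") painless for any tooling that has to compare tags across the repos.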
36,158 | 6,517,298,188 | IssuesEvent | 2017-08-27 21:23:37 | storybooks/storybook | https://api.github.com/repos/storybooks/storybook | closed | Is there a way to get this working with Exponent? | app: react-native compatibility with other tools documentation | <a href="https://github.com/elie222"><img src="https://avatars2.githubusercontent.com/u/3090527?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [elie222](https://github.com/elie222)**
_Sunday Oct 16, 2016 at 12:58 GMT_
_Originally opened as https://github.com/storybooks/react-native-storybook/issues/105_
----
Hi,
I use Exponent to develop. Has anyone managed to get Storybook working with Exponent?
https://getexponent.com/
| 1.0 | Is there a way to get this working with Exponent? - <a href="https://github.com/elie222"><img src="https://avatars2.githubusercontent.com/u/3090527?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [elie222](https://github.com/elie222)**
_Sunday Oct 16, 2016 at 12:58 GMT_
_Originally opened as https://github.com/storybooks/react-native-storybook/issues/105_
----
Hi,
I use Exponent to develop. Has anyone managed to get Storybook working with Exponent?
https://getexponent.com/
| non_test | is there a way to get this working with exponent issue by sunday oct at gmt originally opened as hi i use exponent to develop has anyone managed to get storybook working with exponent | 0 |
33,767 | 4,860,197,512 | IssuesEvent | 2016-11-14 00:30:21 | vmware/vic | https://api.github.com/repos/vmware/vic | closed | When client interface not shared with external, clients may not be able to connect. | area/appliance kind/bug kind/customer-found status/needs-regression-test | When we fix #2802 we need to also add the same fix for the client interface and gateway that we did for the management interface in #3081. | 1.0 | When client interface not shared with external, clients may not be able to connect. - When we fix #2802 we need to also add the same fix for the client interface and gateway that we did for the management interface in #3081. | test | when client interface not shared with external clients may not be able to connect when we fix we need to also add the same fix for the client interface and gateway that we did for the management interface in | 1 |
207,155 | 15,794,197,706 | IssuesEvent | 2021-04-02 10:28:08 | Realm667/WolfenDoom | https://api.github.com/repos/Realm667/WolfenDoom | closed | C3M4 zombie invasion is by far too easy | actor gameplay help wanted playtesting question | I am not sure what has been changed (if it is script- or map-wise) but the difficulty (played it on the middle skill) is by far too easy when it comes the zombie invasion city. There are too few zombies alive, and only normal zombies, no Braineaters or Nazi Officers get spawned, instead we only have the white naked ones and a few dogs. The map as it is now, is no real treat. Even the bosses at the end are an easy task if there is not enough cannon-fodder around.
I'm pointing @Talon1024 and @AFADoomer towards this, maybe something is bugged - or has been changed - or do I have to change the map somehow? It definitely needs to be harder, more of a challenge in the last 90 seconds where you feel overrun. | 1.0 | C3M4 zombie invasion is by far too easy - I am not sure what has been changed (if it is script- or map-wise) but the difficulty (played it on the middle skill) is by far too easy when it comes the zombie invasion city. There are too few zombies alive, and only normal zombies, no Braineaters or Nazi Officers get spawned, instead we only have the white naked ones and a few dogs. The map as it is now, is no real treat. Even the bosses at the end are an easy task if there is not enough cannon-fodder around.
I'm pointing @Talon1024 and @AFADoomer towards this, maybe something is bugged - or has been changed - or do I have to change the map somehow? It definitely needs to be harder, more of a challenge in the last 90 seconds where you feel overrun. | test | zombie invasion is by far too easy i am not sure what has been changed if it is script or map wise but the difficulty played it on the middle skill is by far too easy when it comes the zombie invasion city there are too few zombies alive and only normal zombies no braineaters or nazi officers get spawned instead we only have the white naked ones and a few dogs the map as it is now is no real treat even the bosses at the end are an easy task if there is not enough cannon fodder around i m pointing and afadoomer towards this maybe something is bugged or has been changed or do i have to change the map somehow it definitely needs to be harder more of a challenge in the last seconds where you feel overrun | 1 |
447,948 | 31,757,842,589 | IssuesEvent | 2023-09-12 01:05:23 | flyteorg/flyte | https://api.github.com/repos/flyteorg/flyte | closed | [Docs] Render flytekit.configuration.FastSerializationSettings as a clickable entity | documentation stale | ### Description
[Here](https://docs.flyte.org/projects/flytekit/en/latest/generated/flytekit.configuration.SerializationSettings.html#flytekit.configuration.SerializationSettings), render the class definition ``FastSerializationSettings`` as rendered in the flytekit repo.
### Are you sure this issue hasn't been raised already?
- [X] Yes
### Have you read the Code of Conduct?
- [X] Yes | 1.0 | [Docs] Render flytekit.configuration.FastSerializationSettings as a clickable entity - ### Description
[Here](https://docs.flyte.org/projects/flytekit/en/latest/generated/flytekit.configuration.SerializationSettings.html#flytekit.configuration.SerializationSettings), render the class definition ``FastSerializationSettings`` as rendered in the flytekit repo.
### Are you sure this issue hasn't been raised already?
- [X] Yes
### Have you read the Code of Conduct?
- [X] Yes | non_test | render flytekit configuration fastserializationsettings as a clickable entity description render the class definition fastserializationsettings as rendered in the flytekit repo are you sure this issue hasn t been raised already yes have you read the code of conduct yes | 0 |
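For the flyte docs record above: double backticks render as literal text, so if the site is Sphinx-based (an assumption on my part), the clickable rendering the issue asks for usually comes from a Python-domain cross-reference role instead:

```rst
:py:class:`~flytekit.configuration.FastSerializationSettings`
```

The `~` shortens the link text to just the class name; the full dotted path is taken from the issue title.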
214,001 | 24,023,426,064 | IssuesEvent | 2022-09-15 09:30:14 | sast-automation-dev/railsgoat-25 | https://api.github.com/repos/sast-automation-dev/railsgoat-25 | opened | moment-1.7.2.min.js: 3 vulnerabilities (highest severity is: 7.5) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-1.7.2.min.js</b></p></summary>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js</a></p>
<p>Path to vulnerable library: /app/assets/javascripts/moment.js</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/railsgoat-25/commit/91cbf9a3a616837cb3b39eec1d792b10343b9b08">91cbf9a3a616837cb3b39eec1d792b10343b9b08</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2017-18214](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18214) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | moment-1.7.2.min.js | Direct | 2.19.3 | ❌ |
| [CVE-2016-4055](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-4055) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | moment-1.7.2.min.js | Direct | 2.11.2 | ❌ |
| [WS-2016-0075](https://github.com/moment/moment/commit/663f33e333212b3800b63592cd8e237ac8fabdb9) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | moment-1.7.2.min.js | Direct | 2.15.2 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2017-18214</summary>
### Vulnerable Library - <b>moment-1.7.2.min.js</b></p>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js</a></p>
<p>Path to vulnerable library: /app/assets/javascripts/moment.js</p>
<p>
Dependency Hierarchy:
- :x: **moment-1.7.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/railsgoat-25/commit/91cbf9a3a616837cb3b39eec1d792b10343b9b08">91cbf9a3a616837cb3b39eec1d792b10343b9b08</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The moment module before 2.19.3 for Node.js is prone to a regular expression denial of service via a crafted date string, a different vulnerability than CVE-2016-4055.
<p>Publish Date: 2018-03-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18214>CVE-2017-18214</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18214">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18214</a></p>
<p>Release Date: 2018-03-04</p>
<p>Fix Resolution: 2.19.3</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2016-4055</summary>
### Vulnerable Library - <b>moment-1.7.2.min.js</b></p>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js</a></p>
<p>Path to vulnerable library: /app/assets/javascripts/moment.js</p>
<p>
Dependency Hierarchy:
- :x: **moment-1.7.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/railsgoat-25/commit/91cbf9a3a616837cb3b39eec1d792b10343b9b08">91cbf9a3a616837cb3b39eec1d792b10343b9b08</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The duration function in the moment package before 2.11.2 for Node.js allows remote attackers to cause a denial of service (CPU consumption) via a long string, aka a "regular expression Denial of Service (ReDoS)."
<p>Publish Date: 2017-01-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-4055>CVE-2016-4055</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-4055">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-4055</a></p>
<p>Release Date: 2017-01-23</p>
<p>Fix Resolution: 2.11.2</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2016-0075</summary>
### Vulnerable Library - <b>moment-1.7.2.min.js</b></p>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js</a></p>
<p>Path to vulnerable library: /app/assets/javascripts/moment.js</p>
<p>
Dependency Hierarchy:
- :x: **moment-1.7.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/railsgoat-25/commit/91cbf9a3a616837cb3b39eec1d792b10343b9b08">91cbf9a3a616837cb3b39eec1d792b10343b9b08</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Regular expression denial of service vulnerability in the moment package, by using a specific 40 characters long string in the "format" method.
<p>Publish Date: 2016-10-24
<p>URL: <a href=https://github.com/moment/moment/commit/663f33e333212b3800b63592cd8e237ac8fabdb9>WS-2016-0075</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2016-10-24</p>
<p>Fix Resolution: 2.15.2</p>
</p>
<p></p>
</details> | True | moment-1.7.2.min.js: 3 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-1.7.2.min.js</b></p></summary>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js</a></p>
<p>Path to vulnerable library: /app/assets/javascripts/moment.js</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/railsgoat-25/commit/91cbf9a3a616837cb3b39eec1d792b10343b9b08">91cbf9a3a616837cb3b39eec1d792b10343b9b08</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2017-18214](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18214) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | moment-1.7.2.min.js | Direct | 2.19.3 | ❌ |
| [CVE-2016-4055](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-4055) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | moment-1.7.2.min.js | Direct | 2.11.2 | ❌ |
| [WS-2016-0075](https://github.com/moment/moment/commit/663f33e333212b3800b63592cd8e237ac8fabdb9) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | moment-1.7.2.min.js | Direct | 2.15.2 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2017-18214</summary>
### Vulnerable Library - <b>moment-1.7.2.min.js</b></p>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js</a></p>
<p>Path to vulnerable library: /app/assets/javascripts/moment.js</p>
<p>
Dependency Hierarchy:
- :x: **moment-1.7.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/railsgoat-25/commit/91cbf9a3a616837cb3b39eec1d792b10343b9b08">91cbf9a3a616837cb3b39eec1d792b10343b9b08</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The moment module before 2.19.3 for Node.js is prone to a regular expression denial of service via a crafted date string, a different vulnerability than CVE-2016-4055.
<p>Publish Date: 2018-03-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18214>CVE-2017-18214</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18214">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18214</a></p>
<p>Release Date: 2018-03-04</p>
<p>Fix Resolution: 2.19.3</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2016-4055</summary>
### Vulnerable Library - <b>moment-1.7.2.min.js</b></p>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js</a></p>
<p>Path to vulnerable library: /app/assets/javascripts/moment.js</p>
<p>
Dependency Hierarchy:
- :x: **moment-1.7.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/railsgoat-25/commit/91cbf9a3a616837cb3b39eec1d792b10343b9b08">91cbf9a3a616837cb3b39eec1d792b10343b9b08</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The duration function in the moment package before 2.11.2 for Node.js allows remote attackers to cause a denial of service (CPU consumption) via a long string, aka a "regular expression Denial of Service (ReDoS)."
<p>Publish Date: 2017-01-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-4055>CVE-2016-4055</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-4055">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-4055</a></p>
<p>Release Date: 2017-01-23</p>
<p>Fix Resolution: 2.11.2</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2016-0075</summary>
### Vulnerable Library - <b>moment-1.7.2.min.js</b></p>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/1.7.2/moment.min.js</a></p>
<p>Path to vulnerable library: /app/assets/javascripts/moment.js</p>
<p>
Dependency Hierarchy:
- :x: **moment-1.7.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/railsgoat-25/commit/91cbf9a3a616837cb3b39eec1d792b10343b9b08">91cbf9a3a616837cb3b39eec1d792b10343b9b08</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Regular expression denial of service vulnerability in the moment package, by using a specific 40 characters long string in the "format" method.
<p>Publish Date: 2016-10-24
<p>URL: <a href=https://github.com/moment/moment/commit/663f33e333212b3800b63592cd8e237ac8fabdb9>WS-2016-0075</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2016-10-24</p>
<p>Fix Resolution: 2.15.2</p>
</p>
<p></p>
</details> | non_test | moment min js vulnerabilities highest severity is vulnerable library moment min js parse validate manipulate and display dates library home page a href path to vulnerable library app assets javascripts moment js found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high moment min js direct medium moment min js direct medium moment min js direct details cve vulnerable library moment min js parse validate manipulate and display dates library home page a href path to vulnerable library app assets javascripts moment js dependency hierarchy x moment min js vulnerable library found in head commit a href found in base branch master vulnerability details the moment module before for node js is prone to a regular expression denial of service via a crafted date string a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution cve vulnerable library moment min js parse validate manipulate and display dates library home page a href path to vulnerable library app assets javascripts moment js dependency hierarchy x moment min js vulnerable library found in head commit a href found in base branch master vulnerability details the duration function in the moment package before for node js allows remote attackers to cause a denial of service cpu consumption via a long string aka a regular expression denial of service redos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics 
confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ws vulnerable library moment min js parse validate manipulate and display dates library home page a href path to vulnerable library app assets javascripts moment js dependency hierarchy x moment min js vulnerable library found in head commit a href found in base branch master vulnerability details regular expression denial of service vulnerability in the moment package by using a specific characters long string in the format method publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version release date fix resolution | 0 |
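The moment.js report above lists three fix versions (2.19.3, 2.11.2, 2.15.2), so an install at or above 2.19.3 clears all three findings. A rough sketch of that comparison, a generic tuple compare of my own that ignores pre-release suffixes (moment's plain `X.Y.Z` releases don't use them):

```python
# Fix versions copied from the three findings in the report above.
FIXED_VERSIONS = ["2.19.3", "2.11.2", "2.15.2"]


def parse(version: str) -> tuple:
    """'2.19.3' -> (2, 19, 3); tuples compare component-wise."""
    return tuple(int(part) for part in version.split("."))


def unresolved_findings(installed: str) -> int:
    """Count findings whose fix version is still above the installed version."""
    return sum(1 for fix in FIXED_VERSIONS if parse(installed) < parse(fix))
```

For the bundled 1.7.2 all three findings are open; upgrading past the highest fix version closes them all at once.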
209,338 | 16,016,675,793 | IssuesEvent | 2021-04-20 16:52:31 | kubernetes/minikube | https://api.github.com/repos/kubernetes/minikube | closed | Tests Flake due to insufficient memory | area/testing kind/failing-test kind/flake priority/important-soon | with insufficient memory, all different weird kind of issues happen, seen several tests frequently failing b/c of that, and containerd timeouts are one of them
example:
https://storage.googleapis.com/minikube-builds/logs/11006/74537ca/KVM_Linux_containerd.html#fail_TestForceSystemdFlag
https://storage.googleapis.com/minikube-builds/logs/11006/74537ca/KVM_Linux_containerd.html#fail_TestForceSystemdEnv
> ❯ minikube start -p memless --memory=1800 --force-systemd --alsologtostderr -v=9 --driver=kvm2 --container-runtime=containerd
```
...
I0408 23:23:42.109960 32694 crio.go:119] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
...
W0408 23:25:18.298509 32694 out.go:222] ❌ Exiting due to RUNTIME_ENABLE: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
...
```
inside the vm:
```
# crictl images --output json
fatal error: runtime: out of memory
runtime stack:
runtime.throw(0x11121fe, 0x16)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/panic.go:1116 +0x72 fp=0x7fff8c7be3d0 sp=0x7fff8c7be3a0 pc=0x435bb2
runtime.sysMap(0xc000000000, 0x4000000, 0x1917418)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mem_linux.go:169 +0xc6 fp=0x7fff8c7be410 sp=0x7fff8c7be3d0 pc=0x418f86
runtime.(*mheap).sysAlloc(0x18fc580, 0x400000, 0x0, 0x4)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/malloc.go:727 +0x1e5 fp=0x7fff8c7be4b8 sp=0x7fff8c7be410 pc=0x40c925
runtime.(*mheap).grow(0x18fc580, 0x1, 0x0)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mheap.go:1344 +0x85 fp=0x7fff8c7be520 sp=0x7fff8c7be4b8 pc=0x4284a5
runtime.(*mheap).allocSpan(0x18fc580, 0x1, 0x656c6d656d002a00, 0x1917428, 0x0)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mheap.go:1160 +0x6b6 fp=0x7fff8c7be5a0 sp=0x7fff8c7be520 pc=0x428256
runtime.(*mheap).alloc.func1()
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mheap.go:907 +0x65 fp=0x7fff8c7be5f8 sp=0x7fff8c7be5a0 pc=0x461b65
runtime.(*mheap).alloc(0x18fc580, 0x1, 0x12a, 0x0)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mheap.go:901 +0x85 fp=0x7fff8c7be648 sp=0x7fff8c7be5f8 pc=0x427725
runtime.(*mcentral).grow(0x190f438, 0x0)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mcentral.go:506 +0x7a fp=0x7fff8c7be690 sp=0x7fff8c7be648 pc=0x41895a
runtime.(*mcentral).cacheSpan(0x190f438, 0x40000)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mcentral.go:177 +0x3e5 fp=0x7fff8c7be708 sp=0x7fff8c7be690 pc=0x4186e5
runtime.(*mcache).refill(0x7f501fdc6108, 0x2a)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mcache.go:142 +0xa5 fp=0x7fff8c7be728 sp=0x7fff8c7be708 pc=0x418085
runtime.(*mcache).nextFree(0x7f501fdc6108, 0x18e482a, 0x7f501fdc6108, 0x7f4ffb4a6000, 0x7fff8c7be7b8)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/malloc.go:880 +0x8d fp=0x7fff8c7be760 sp=0x7fff8c7be728 pc=0x40d1ad
runtime.mallocgc(0x180, 0x10fbec0, 0x7fff8c7be801, 0x7fff8c7be860)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/malloc.go:1061 +0x834 fp=0x7fff8c7be800 sp=0x7fff8c7be760 pc=0x40db94
runtime.newobject(0x10fbec0, 0x4608a0)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/malloc.go:1195 +0x38 fp=0x7fff8c7be830 sp=0x7fff8c7be800 pc=0x40e038
runtime.malg(0x8000, 0x0)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/proc.go:3493 +0x31 fp=0x7fff8c7be870 sp=0x7fff8c7be830 pc=0x440751
runtime.mpreinit(0x18e4820)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/os_linux.go:340 +0x29 fp=0x7fff8c7be890 sp=0x7fff8c7be870 pc=0x432969
runtime.mcommoninit(0x18e4820, 0xffffffffffffffff)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/proc.go:663 +0xf7 fp=0x7fff8c7be8d8 sp=0x7fff8c7be890 pc=0x4399b7
runtime.schedinit()
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/proc.go:565 +0xa5 fp=0x7fff8c7be930 sp=0x7fff8c7be8d8 pc=0x439545
runtime.rt0_go(0x7fff8c7be968, 0x4, 0x7fff8c7be968, 0x0, 0x0, 0x4, 0x7fff8c7beefe, 0x7fff8c7bef05, 0x7fff8c7bef0c, 0x7fff8c7bef15, ...)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/asm_amd64.s:214 +0x125 fp=0x7fff8c7be938 sp=0x7fff8c7be930 pc=0x468fc5
```
```
# free
total used free shared buff/cache available
Mem: 1782984 129504 80684 1551104 1572796 17924
Swap: 0 0 0
```
/assign | 2.0 | Tests Flake due to insufficient memory - with insufficient memory, all different weird kind of issues happen, seen several tests frequently failing b/c of that, and containerd timeouts are one of them
example:
https://storage.googleapis.com/minikube-builds/logs/11006/74537ca/KVM_Linux_containerd.html#fail_TestForceSystemdFlag
https://storage.googleapis.com/minikube-builds/logs/11006/74537ca/KVM_Linux_containerd.html#fail_TestForceSystemdEnv
> ❯ minikube start -p memless --memory=1800 --force-systemd --alsologtostderr -v=9 --driver=kvm2 --container-runtime=containerd
```
...
I0408 23:23:42.109960 32694 crio.go:119] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
...
W0408 23:25:18.298509 32694 out.go:222] ❌ Exiting due to RUNTIME_ENABLE: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
...
```
inside the vm:
```
# crictl images --output json
fatal error: runtime: out of memory
runtime stack:
runtime.throw(0x11121fe, 0x16)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/panic.go:1116 +0x72 fp=0x7fff8c7be3d0 sp=0x7fff8c7be3a0 pc=0x435bb2
runtime.sysMap(0xc000000000, 0x4000000, 0x1917418)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mem_linux.go:169 +0xc6 fp=0x7fff8c7be410 sp=0x7fff8c7be3d0 pc=0x418f86
runtime.(*mheap).sysAlloc(0x18fc580, 0x400000, 0x0, 0x4)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/malloc.go:727 +0x1e5 fp=0x7fff8c7be4b8 sp=0x7fff8c7be410 pc=0x40c925
runtime.(*mheap).grow(0x18fc580, 0x1, 0x0)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mheap.go:1344 +0x85 fp=0x7fff8c7be520 sp=0x7fff8c7be4b8 pc=0x4284a5
runtime.(*mheap).allocSpan(0x18fc580, 0x1, 0x656c6d656d002a00, 0x1917428, 0x0)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mheap.go:1160 +0x6b6 fp=0x7fff8c7be5a0 sp=0x7fff8c7be520 pc=0x428256
runtime.(*mheap).alloc.func1()
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mheap.go:907 +0x65 fp=0x7fff8c7be5f8 sp=0x7fff8c7be5a0 pc=0x461b65
runtime.(*mheap).alloc(0x18fc580, 0x1, 0x12a, 0x0)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mheap.go:901 +0x85 fp=0x7fff8c7be648 sp=0x7fff8c7be5f8 pc=0x427725
runtime.(*mcentral).grow(0x190f438, 0x0)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mcentral.go:506 +0x7a fp=0x7fff8c7be690 sp=0x7fff8c7be648 pc=0x41895a
runtime.(*mcentral).cacheSpan(0x190f438, 0x40000)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mcentral.go:177 +0x3e5 fp=0x7fff8c7be708 sp=0x7fff8c7be690 pc=0x4186e5
runtime.(*mcache).refill(0x7f501fdc6108, 0x2a)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/mcache.go:142 +0xa5 fp=0x7fff8c7be728 sp=0x7fff8c7be708 pc=0x418085
runtime.(*mcache).nextFree(0x7f501fdc6108, 0x18e482a, 0x7f501fdc6108, 0x7f4ffb4a6000, 0x7fff8c7be7b8)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/malloc.go:880 +0x8d fp=0x7fff8c7be760 sp=0x7fff8c7be728 pc=0x40d1ad
runtime.mallocgc(0x180, 0x10fbec0, 0x7fff8c7be801, 0x7fff8c7be860)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/malloc.go:1061 +0x834 fp=0x7fff8c7be800 sp=0x7fff8c7be760 pc=0x40db94
runtime.newobject(0x10fbec0, 0x4608a0)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/malloc.go:1195 +0x38 fp=0x7fff8c7be830 sp=0x7fff8c7be800 pc=0x40e038
runtime.malg(0x8000, 0x0)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/proc.go:3493 +0x31 fp=0x7fff8c7be870 sp=0x7fff8c7be830 pc=0x440751
runtime.mpreinit(0x18e4820)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/os_linux.go:340 +0x29 fp=0x7fff8c7be890 sp=0x7fff8c7be870 pc=0x432969
runtime.mcommoninit(0x18e4820, 0xffffffffffffffff)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/proc.go:663 +0xf7 fp=0x7fff8c7be8d8 sp=0x7fff8c7be890 pc=0x4399b7
runtime.schedinit()
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/proc.go:565 +0xa5 fp=0x7fff8c7be930 sp=0x7fff8c7be8d8 pc=0x439545
runtime.rt0_go(0x7fff8c7be968, 0x4, 0x7fff8c7be968, 0x0, 0x0, 0x4, 0x7fff8c7beefe, 0x7fff8c7bef05, 0x7fff8c7bef0c, 0x7fff8c7bef15, ...)
/nix/store/r5wsnvxfc14badlg8fcx4fyvji3lksz6-go-1.15/share/go/src/runtime/asm_amd64.s:214 +0x125 fp=0x7fff8c7be938 sp=0x7fff8c7be930 pc=0x468fc5
```
```
# free
total used free shared buff/cache available
Mem: 1782984 129504 80684 1551104 1572796 17924
Swap: 0 0 0
```
/assign | test | tests flake due to insufficient memory with insufficient memory all different weird kind of issues happen seen several tests frequently failing b c of that and containerd timeouts are one of them example โฏ minikube start p memless memory force systemd alsologtostderr v driver container runtime containerd crio go couldn t verify netfilter by sudo sysctl net bridge bridge nf call iptables which might be okay error sudo sysctl net bridge bridge nf call iptables process exited with status out go โ exiting due to runtime enable stat run containerd containerd sock process exited with status stdout stderr stat cannot stat run containerd containerd sock no such file or directory inside the vm crictl images output json fatal error runtime out of memory runtime stack runtime throw nix store go share go src runtime panic go fp sp pc runtime sysmap nix store go share go src runtime mem linux go fp sp pc runtime mheap sysalloc nix store go share go src runtime malloc go fp sp pc runtime mheap grow nix store go share go src runtime mheap go fp sp pc runtime mheap allocspan nix store go share go src runtime mheap go fp sp pc runtime mheap alloc nix store go share go src runtime mheap go fp sp pc runtime mheap alloc nix store go share go src runtime mheap go fp sp pc runtime mcentral grow nix store go share go src runtime mcentral go fp sp pc runtime mcentral cachespan nix store go share go src runtime mcentral go fp sp pc runtime mcache refill nix store go share go src runtime mcache go fp sp pc runtime mcache nextfree nix store go share go src runtime malloc go fp sp pc runtime mallocgc nix store go share go src runtime malloc go fp sp pc runtime newobject nix store go share go src runtime malloc go fp sp pc runtime malg nix store go share go src runtime proc go fp sp pc runtime mpreinit nix store go share go src runtime os linux go fp sp pc runtime mcommoninit nix store go share go src runtime proc go fp sp pc runtime schedinit nix store go share go src 
runtime proc go fp sp pc runtime go nix store go share go src runtime asm s fp sp pc free total used free shared buff cache available mem swap assign | 1 |
103,146 | 8,881,079,388 | IssuesEvent | 2019-01-14 09:02:10 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | closed | Tests for Table Workspace Display | Quality: Unit Tests | Presenter tests have to be added. The file `test_tableworkspacedisplay_presenter.py` is created, and set up with CMake to run the tests.
### TODO
The #24324 PR is still missing some things:
- Scatter + Error plot (if needed)
- Editable column names
- Double clicking columns -> edit column options
- Tests for presenter of TableWorkspaceDisplay
| 1.0 | Tests for Table Workspace Display - Presenter tests have to be added. The file `test_tableworkspacedisplay_presenter.py` is created, and set up with CMake to run the tests.
### TODO
The #24324 PR is still missing some things:
- Scatter + Error plot (if needed)
- Editable column names
- Double clicking columns -> edit column options
- Tests for presenter of TableWorkspaceDisplay
| test | tests for table workspace display presenter tests have to be added the file test tableworkspacedisplay presenter py is created and set up with cmake to run the tests todo the pr is still missing some things scatter error plot if needed editable column names double clicking columns edit column options tests for presenter of tableworkspacedisplay | 1 |
249,972 | 7,966,161,605 | IssuesEvent | 2018-07-14 18:18:48 | trellis-ldp/trellis | https://api.github.com/repos/trellis-ldp/trellis | closed | Support multiple range request segments | area/http enhancement priority/low | The HTTP specification on [range requests](https://tools.ietf.org/html/rfc7233) allows for multiple, non-contiguous segments in a request. E.g.
```http
Range: bytes=1-100,301-400
```
The Trellis HTTP layer currently only supports a single range (i.e. in the header parsing logic) though the underlying APIs could support an arbitrary number of ranges. It could be useful to support range requests that include multiple ranges. | 1.0 | Support multiple range request segments - The HTTP specification on [range requests](https://tools.ietf.org/html/rfc7233) allows for multiple, non-contiguous segments in a request. E.g.
```http
Range: bytes=1-100,301-400
```
The Trellis HTTP layer currently only supports a single range (i.e. in the header parsing logic) though the underlying APIs could support an arbitrary number of ranges. It could be useful to support range requests that include multiple ranges. | non_test | support multiple range request segments the http specification on allows for multiple non contiguous segments in a request e g http range bytes the trellis http layer currently only supports a single range i e in the header parsing logic though the underlying apis could support an arbitrary number of ranges it could be useful to support range requests that include multiple ranges | 0 |
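For illustration, a multi-segment `Range` header like the one in the record above can be parsed into byte intervals with a short sketch (a hypothetical helper, not Trellis code; the function name and return shape are my own, and it covers only the `bytes` unit from RFC 7233):

```python
def parse_range_header(value, content_length):
    # Parse an RFC 7233 Range header such as "bytes=1-100,301-400".
    # Returns a list of inclusive (start, end) byte positions,
    # or None if the header is malformed or unsatisfiable.
    unit, _, spec = value.partition("=")
    if unit.strip() != "bytes" or not spec:
        return None
    ranges = []
    for part in spec.split(","):
        first, _, last = part.strip().partition("-")
        if first:  # "M-N" or open-ended "M-"
            start = int(first)
            end = int(last) if last else content_length - 1
        elif last:  # suffix range "-N": the final N bytes
            start = max(content_length - int(last), 0)
            end = content_length - 1
        else:
            return None
        if start > end or start >= content_length:
            return None
        ranges.append((start, min(end, content_length - 1)))
    return ranges
```

A request like `Range: bytes=1-100,301-400` would then yield two segments, each of which a server could serve as one part of a `multipart/byteranges` response.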
671,185 | 22,747,163,733 | IssuesEvent | 2022-07-07 10:09:27 | paratestphp/paratest | https://api.github.com/repos/paratestphp/paratest | closed | Feature request: Support phpunit's --testdox option | enhancement priority backlog | Feature request: Support phpunit's --testdox option
It will be nice to see the info about the current test(s) which are being run. | 1.0 | Feature request: Support phpunit's --testdox option - Feature request: Support phpunit's --testdox option
It will be nice to see the info about the current test(s) which are being run. | non_test | feature request support phpunit s testdox option feature request support phpunit s testdox option it will be nice to see the info about the current test s which are being run | 0 |
72,468 | 7,298,829,082 | IssuesEvent | 2018-02-26 18:09:40 | IBVSoftware/CollaBoard-Feedback | https://api.github.com/repos/IBVSoftware/CollaBoard-Feedback | closed | Upload files cause the 15 MB error message | To be tested in Zurich bug | v 2.108
HP Spectre
Every time I try to upload a PDF or a jpg, I always get the error message "One or more files exceeded the maximum allowed size of 15 Mb". I tried loading older projects, creating new projects from a template and creating new projects, and it always happens.
Gian suggested @corradocavalli to debug this machine when you'll be back in the office | 1.0 | Upload files cause the 15 MB error message - v 2.108
HP Spectre
Every time I try to upload a PDF or a jpg, I always get the error message "One or more files exceeded the maximum allowed size of 15 Mb". I tried loading older projects, creating new projects from a template and creating new projects, and it always happens.
Gian suggested @corradocavalli to debug this machine when you'll be back in the office | test | upload files cause the mb error message v hp spectre everytime i try to upload a pdf or a jpg i always get the error message one or more files exceeded the maximum allowed size of mb i tried loading older projects create new projects from a template and create new projects and it always happens gian suggested corradocavalli to debug this machine when you ll be back in the office | 1 |
55,279 | 6,468,248,527 | IssuesEvent | 2017-08-17 00:17:06 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | Networking does not come up on RHEL 7.4 | area/networking kind/bug status/resolved status/to-test | **Rancher versions:**
rancher/server: v1.6.6
rancher/agent: v1.2.5
**Infrastructure Stack versions:**
healthcheck: v0.3.0
ipsec: v0.1.2
network-services: v0.2.4
scheduler: v0.6.2
kubernetes (if applicable): n/a
**Docker version: (`docker version`,`docker info` preferred)**
Tested on upstream Docker 1.12.6, 17.03.2-ce and 17.06.0-ce
**Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)**
RHEL 7.4 (AMI: https://access.redhat.com/articles/3135091)
`3.10.0-693.el7.x86_64`
**Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)**
AWS
**Setup details: (single node rancher vs. HA rancher, internal DB vs. external DB)**
Single node
**Environment Template: (Cattle/Kubernetes/Swarm/Mesos)**
Cattle
**Steps to Reproduce:**
1. Add RHEL 7.4 host to Rancher
**Results:**
network-manager & cni-driver start, but the rest will output `Time-out getting IP address`
| 1.0 | Networking does not come up on RHEL 7.4 - **Rancher versions:**
rancher/server: v1.6.6
rancher/agent: v1.2.5
**Infrastructure Stack versions:**
healthcheck: v0.3.0
ipsec: v0.1.2
network-services: v0.2.4
scheduler: v0.6.2
kubernetes (if applicable): n/a
**Docker version: (`docker version`,`docker info` preferred)**
Tested on upstream Docker 1.12.6, 17.03.2-ce and 17.06.0-ce
**Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)**
RHEL 7.4 (AMI: https://access.redhat.com/articles/3135091)
`3.10.0-693.el7.x86_64`
**Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)**
AWS
**Setup details: (single node rancher vs. HA rancher, internal DB vs. external DB)**
Single node
**Environment Template: (Cattle/Kubernetes/Swarm/Mesos)**
Cattle
**Steps to Reproduce:**
1. Add RHEL 7.4 host to Rancher
**Results:**
network-manager & cni-driver start, but the rest will output `Time-out getting IP address`
| test | networking does not come up on rhel rancher versions rancher server rancher agent infrastructure stack versions healthcheck ipsec network services scheduler kubernetes if applicable n a docker version docker version docker info preferred tested on upstream docker ce and ce operating system and kernel cat etc os release uname r preferred rhel ami type provider of hosts virtualbox bare metal aws gce do aws setup details single node rancher vs ha rancher internal db vs external db single node environment template cattle kubernetes swarm mesos cattle steps to reproduce add rhel host to rancher results network manager cni driver start but the rest will output time out getting ip address | 1 |
106,642 | 23,261,588,546 | IssuesEvent | 2022-08-04 13:55:14 | arduino/arduino-ide | https://api.github.com/repos/arduino/arduino-ide | closed | `root ERROR Error: ENOENT: no such file or directory, stat 'undefined/'` | conclusion: duplicate topic: code os: linux type: imperfection | ```
root ERROR Uncaught Exception:
root ERROR Error: ENOENT: no such file or directory, stat 'undefined/'
at statSync (fs.js:915:3)
at Object.fs.statSync (electron/js2c/asar.js:278:27)
at mkDir (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/node_modules/node-log-rotate/lib/utils/find-log-path.js:52:17)
at prepareDir (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/node_modules/node-log-rotate/lib/utils/find-log-path.js:31:9)
at Object.default_1 [as default] (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/node_modules/node-log-rotate/lib/utils/find-log-path.js:8:19)
at Object.default_1 [as default] (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/node_modules/node-log-rotate/lib/utils/find-log-file-name.js:5:36)
at Object.default_1 [as default] (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/node_modules/node-log-rotate/lib/transport-file.js:15:52)
at log (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/node_modules/node-log-rotate/lib/index.js:20:36)
at Object.<anonymous> (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/src-gen/backend/main.js:11:9)
at Object.log (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/node_modules/@theia/core/lib/common/logger-protocol.js:141:17)
```
I ran arduino-pro-ide. Funny thing is I cannot `strace` arduino-pro-ide. System says it cannot stat the file.
Arduino pro gui comes up but I keep getting the errors shown above. I can be root on this system, have tried both root acct and non-root account to run arduino-pro-ide | 1.0 | `root ERROR Error: ENOENT: no such file or directory, stat 'undefined/'` - ```
root ERROR Uncaught Exception:
root ERROR Error: ENOENT: no such file or directory, stat 'undefined/'
at statSync (fs.js:915:3)
at Object.fs.statSync (electron/js2c/asar.js:278:27)
at mkDir (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/node_modules/node-log-rotate/lib/utils/find-log-path.js:52:17)
at prepareDir (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/node_modules/node-log-rotate/lib/utils/find-log-path.js:31:9)
at Object.default_1 [as default] (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/node_modules/node-log-rotate/lib/utils/find-log-path.js:8:19)
at Object.default_1 [as default] (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/node_modules/node-log-rotate/lib/utils/find-log-file-name.js:5:36)
at Object.default_1 [as default] (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/node_modules/node-log-rotate/lib/transport-file.js:15:52)
at log (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/node_modules/node-log-rotate/lib/index.js:20:36)
at Object.<anonymous> (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/src-gen/backend/main.js:11:9)
at Object.log (/spare/downloads/arduinoProIDE/arduino-pro-ide_0.1.3_Linux_64bit/resources/app/node_modules/@theia/core/lib/common/logger-protocol.js:141:17)
```
I ran arduino-pro-ide. Funny thing is I cannot `strace` arduino-pro-ide. System says it cannot stat the file.
Arduino pro gui comes up but I keep getting errors shown about. I can be root on this system, have tried both root acct and non-root account to run arduino-pro-ide | non_test | root error error enoent no such file or directory stat undefined root error uncaught exception root error error enoent no such file or directory stat undefined at statsync fs js at object fs statsync electron asar js at mkdir spare downloads arduinoproide arduino pro ide linux resources app node modules node log rotate lib utils find log path js at preparedir spare downloads arduinoproide arduino pro ide linux resources app node modules node log rotate lib utils find log path js at object default spare downloads arduinoproide arduino pro ide linux resources app node modules node log rotate lib utils find log path js at object default spare downloads arduinoproide arduino pro ide linux resources app node modules node log rotate lib utils find log file name js at object default spare downloads arduinoproide arduino pro ide linux resources app node modules node log rotate lib transport file js at log spare downloads arduinoproide arduino pro ide linux resources app node modules node log rotate lib index js at object spare downloads arduinoproide arduino pro ide linux resources app src gen backend main js at object log spare downloads arduinoproide arduino pro ide linux resources app node modules theia core lib common logger protocol js i ran arduino pro ide funny thing is i cannot strace arduino pro ide system says cannot stat file arduino pro gui comes up but i keep getting errors shown about i can be root on this system have tried both root acct and non root account to run arduino pro ide | 0 |
37,717 | 8,488,256,596 | IssuesEvent | 2018-10-26 16:05:01 | NREL/EnergyPlus | https://api.github.com/repos/NREL/EnergyPlus | opened | Heat exchanger effectiveness applied twice to stratified tanks | Defect | Issue overview
--------------
For both the source and use side, the heat exchanger effectiveness is [applied to the flow rates](https://github.com/NREL/EnergyPlus/blob/c1a8cbbf47edf480e3c9e680a32d310591a11b3c/src/EnergyPlus/WaterThermalTanks.cc#L8464-L8465) in `CalcNodeMassFlows` and then [saved onto the tank node for the inlets](https://github.com/NREL/EnergyPlus/blob/c1a8cbbf47edf480e3c9e680a32d310591a11b3c/src/EnergyPlus/WaterThermalTanks.cc#L8519-L8520). Back in `CalcWaterThermalTankStratified` they are [applied again](https://github.com/NREL/EnergyPlus/blob/c1a8cbbf47edf480e3c9e680a32d310591a11b3c/src/EnergyPlus/WaterThermalTanks.cc#L8197-L8198) which shouldn't be done.
### Details
Some additional details for this issue (if relevant):
- Version of EnergyPlus: `develop` branch, sha `c1a8cbbf47edf480e3c9e680a32d310591a11b3c`
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here)
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
@jmaguire1 | 1.0 | Heat exchanger effectiveness applied twice to stratified tanks - Issue overview
--------------
For both the source and use side, the heat exchanger effectiveness is [applied to the flow rates](https://github.com/NREL/EnergyPlus/blob/c1a8cbbf47edf480e3c9e680a32d310591a11b3c/src/EnergyPlus/WaterThermalTanks.cc#L8464-L8465) in `CalcNodeMassFlows` and then [saved onto the tank node for the inlets](https://github.com/NREL/EnergyPlus/blob/c1a8cbbf47edf480e3c9e680a32d310591a11b3c/src/EnergyPlus/WaterThermalTanks.cc#L8519-L8520). Back in `CalcWaterThermalTankStratified` they are [applied again](https://github.com/NREL/EnergyPlus/blob/c1a8cbbf47edf480e3c9e680a32d310591a11b3c/src/EnergyPlus/WaterThermalTanks.cc#L8197-L8198) which shouldn't be done.
### Details
Some additional details for this issue (if relevant):
- Version of EnergyPlus: `develop` branch, sha `c1a8cbbf47edf480e3c9e680a32d310591a11b3c`
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here)
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
@jmaguire1 | non_test | heat exchanger effectiveness applied twice to stratified tanks issue overview for both the source and use side the heat exchanger effectiveness is in calcnodemassflows and then back in calcwaterthermaltankstratified they are which shouldn t be done details some additional details for this issue if relevant version of energyplus develop branch sha checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added list location of defect file here ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect | 0 |
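The double-application defect in the record above can be shown with toy numbers (an illustrative sketch only - not EnergyPlus code; the values, names, and simplified energy-balance formula are made up for demonstration). Folding the heat-exchanger effectiveness into the node mass flow and then multiplying by it again in the tank energy balance scales the heat-transfer term by effectiveness squared instead of once:

```python
EFF = 0.7        # heat-exchanger effectiveness (made-up value)
CP = 4186.0      # specific heat of water, J/(kg*K)
RAW_FLOW = 0.05  # source-side mass flow, kg/s (made-up value)
DELTA_T = 10.0   # temperature difference across the exchanger, K

def node_mass_flow(raw_flow, eff):
    # Step 1: effectiveness folded into the node flow once.
    return raw_flow * eff

def heat_transfer(node_flow, eff, delta_t):
    # Step 2: the buggy path multiplies by effectiveness a second time;
    # passing eff=1.0 here models the corrected behavior.
    return node_flow * eff * CP * delta_t

buggy = heat_transfer(node_mass_flow(RAW_FLOW, EFF), EFF, DELTA_T)
fixed = heat_transfer(node_mass_flow(RAW_FLOW, EFF), 1.0, DELTA_T)
ratio = buggy / fixed  # equals EFF, so heat transfer is understated by 30% here
```

With these toy numbers the buggy path reports only 70% of the intended heat transfer, which is why applying the effectiveness in only one of the two places fixes the defect.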
231,839 | 18,798,531,349 | IssuesEvent | 2021-11-09 02:51:44 | microsoft/AzureStorageExplorer | https://api.github.com/repos/microsoft/AzureStorageExplorer | closed | There is no slash next to the URL/Path for one folder under file shares | 🧪 testing :gear: files :beetle: regression | **Storage Explorer Version**: 1.22.0-dev
**Build Number**: 20211106.4
**Branch**: main
**Platform/OS**: Windows 10/Linux Ubuntu 18.04/MacOS Monterey 12.0.1
**Architecture**: ia32/x64
**How Found**: From running test cases
**Regression From**: Previous release (1.21.3)
## Steps to Reproduce ##
1. Expand one storage account -> File Shares.
2. Create a file share -> Create a folder.
3. Right click the folder -> Click 'Copy Path' -> Paste the path to the clipboard
4. Check whether the path ends with a slash.
## Expected Experience ##
The path ends with a slash.
## Actual Experience ##
The path doesn't end with a slash. | 1.0 | There is no slash next to the URL/Path for one folder under file shares - **Storage Explorer Version**: 1.22.0-dev
**Build Number**: 20211106.4
**Branch**: main
**Platform/OS**: Windows 10/Linux Ubuntu 18.04/MacOS Monterey 12.0.1
**Architecture**: ia32/x64
**How Found**: From running test cases
**Regression From**: Previous release (1.21.3)
## Steps to Reproduce ##
1. Expand one storage account -> File Shares.
2. Create a file share -> Create a folder.
3. Right click the folder -> Click 'Copy Path' -> Paste the path to the clipboard
4. Check whether the path ends with a slash.
## Expected Experience ##
The path ends with a slash.
## Actual Experience ##
The path doesn't end with a slash. | test | there is no slash next to the url path for one folder under file shares storage explorer version dev build number branch main platform os windows linux ubuntu macos monterey architecture how found from running test cases regression from previous release steps to reproduce expand one storage account file shares create a file share create a folder right click the folder click copy path paste the path to the clipboard check whether the path ends with a slash expected experience the path ends with a slash actual experience the path doesn t end with a slash | 1 |
52,368 | 10,827,532,084 | IssuesEvent | 2019-11-10 10:00:04 | micuintus/harbour-Berlin-Vegan | https://api.github.com/repos/micuintus/harbour-Berlin-Vegan | closed | [FELGO] Cannot read property of undefined | bug mainly code related issue | ```
qrc:/imports/BerlinVegan/components/generic/VenueDescriptionHeader.qml:116: TypeError: Cannot read property 'width' of undefined
qrc:/imports/BerlinVegan/components/generic/VenueDescriptionHeader.qml:113: TypeError: Cannot read property 'x' of undefined
``` | 1.0 | [FELGO] Cannot read property of undefined - ```
qrc:/imports/BerlinVegan/components/generic/VenueDescriptionHeader.qml:116: TypeError: Cannot read property 'width' of undefined
qrc:/imports/BerlinVegan/components/generic/VenueDescriptionHeader.qml:113: TypeError: Cannot read property 'x' of undefined
``` | non_test | cannot read property of undefined qrc imports berlinvegan components generic venuedescriptionheader qml typeerror cannot read property width of undefined qrc imports berlinvegan components generic venuedescriptionheader qml typeerror cannot read property x of undefined | 0 |
231,914 | 18,819,951,659 | IssuesEvent | 2021-11-10 06:51:33 | kyma-project/kyma | https://api.github.com/repos/kyma-project/kyma | opened | Upgrade test fails - skr-kyma-to-kyma2-upgrade-aws-dev | bug area/ci test-failing area/control-plane |
**Description**
Job https://status.build.kyma-project.io/job-history/gs/kyma-prow-logs/logs/skr-kyma-to-kyma2-upgrade-aws-dev fails:
```
1) SKR-Upgrade-test
Perform Upgrade:
Error: failed during upgradeKyma
at KCPWrapper.upgradeKyma (kcp/client.js:117:19)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async Context.<anonymous> (skr-kyma-to-kyma2-upgrade/index.js:196:28)
```
Upgrade fails with timeout and reports no operation in the orchestration:
```
OrchestrationID: 2231f7ec-7a36-4232-b07e-6a8ccc440181
Waiting for Kyma Upgrade with OrchestrationID 2231f7ec-7a36-4232-b07e-6a8ccc440181 to succeed...
OrchestrationID: 2231f7ec-7a36-4232-b07e-6a8ccc440181 (upgradeKyma to version "2.0.0-rc4"), status: failed
No operations in OrchestrationID 2231f7ec-7a36-4232-b07e-6a8ccc440181
....
wait timeout
```
| 1.0 | Upgrade test fails - skr-kyma-to-kyma2-upgrade-aws-dev -
**Description**
Job https://status.build.kyma-project.io/job-history/gs/kyma-prow-logs/logs/skr-kyma-to-kyma2-upgrade-aws-dev fails:
```
1) SKR-Upgrade-test
Perform Upgrade:
Error: failed during upgradeKyma
at KCPWrapper.upgradeKyma (kcp/client.js:117:19)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async Context.<anonymous> (skr-kyma-to-kyma2-upgrade/index.js:196:28)
```
Upgrade fails with timeout and reports no operation in the orchestration:
```
OrchestrationID: 2231f7ec-7a36-4232-b07e-6a8ccc440181
Waiting for Kyma Upgrade with OrchestrationID 2231f7ec-7a36-4232-b07e-6a8ccc440181 to succeed...
OrchestrationID: 2231f7ec-7a36-4232-b07e-6a8ccc440181 (upgradeKyma to version "2.0.0-rc4"), status: failed
No operations in OrchestrationID 2231f7ec-7a36-4232-b07e-6a8ccc440181
....
wait timeout
```
| test | upgrade test fails skr kyma to upgrade aws dev description job fails skr upgrade test perform upgrade error failed during upgradekyma at kcpwrapper upgradekyma kcp client js at runmicrotasks at processticksandrejections internal process task queues js at async context skr kyma to upgrade index js upgrade fails with timeout and reports no operation in the orchestration orchestrationid waiting for kyma upgrade with orchestrationid to succeed orchestrationid upgradekyma to version status failed no operations in orchestrationid wait timeout | 1 |
682,365 | 23,342,556,250 | IssuesEvent | 2022-08-09 15:05:48 | apache/hudi | https://api.github.com/repos/apache/hudi | closed | [SUPPORT] Querying Hudi Timeline causes Hive sync failure with AWS Glue Catalog | meta-sync aws-support priority:minor | **Describe the problem you faced**
I've been struggling with failing synchronization with the Glue Catalog. I have a process (AWS Glue Job) which reads from a Hudi table and then writes to the Hudi table as well. Data are being properly saved into the S3 bucket, but it fails on the Hive table synchronization - it is trying to create a database and table that already exist in the Glue Catalog. That started happening when we added code which queries the Hudi table timeline - without that piece of code it works fine.
**To Reproduce**
You need to have an already existing Hudi table on an S3 bucket, with the table in the AWS Glue Catalog
Steps to reproduce the behavior:
1. Query the Hudi table timeline e.g. to get inflight instants
2. Read the data from Hudi table
3. Upsert data to Hudi table
Here you have the Glue Job script which can use to reproduce it - you need to provide S3 bucket and Glue database name(it cannot be default - you have to create separate database for that). Script contains the part to initialize the table on S3 with sample data. It has a 3 scenarios:
* scenario 1 (job result: SUCCESS) - without querying the table timeline. Just to make sure it works properly.
* scenario 2 (job result: FAILURE) - with querying the table timeline before reading the data from the table.
* scenario 3 (job result: SUCCESS) - same as scenario 2, but it invokes the `Hive.closeCurrent()` method before upserting data
Each scenario runs as a separate Glue Job Run (including the run that initializes the data sample)
```scala
import org.apache.hadoop.hive.ql.metadata.Hive
import org.apache.hudi.DataSourceReadOptions
import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.common.fs.FSUtils
import org.apache.hudi.common.table.HoodieTableMetaClient
import org.apache.hudi.config.HoodieWriteConfig.TBL_NAME
import org.apache.hudi.hive.MultiPartKeysValueExtractor
import org.apache.hudi.hive.ddl.HiveSyncMode
import org.apache.hudi.hive.util.ConfigUtils
import org.apache.hudi.keygen.ComplexKeyGenerator
import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.hudi.HoodieOptionConfig.{SQL_KEY_PRECOMBINE_FIELD, SQL_KEY_TABLE_PRIMARY_KEY, SQL_KEY_TABLE_TYPE}
import org.apache.spark.sql.hudi.{HoodieOptionConfig, HoodieSparkSessionExtension}
import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}
import java.sql.Timestamp
import java.time.Instant
import scala.collection.JavaConverters.{asScalaIteratorConverter, mapAsJavaMapConverter}
object TestJob {
val s3BucketName = "bucketName"
val glueDatabaseName = "databaseName" // cannot be "default" - with the default database it does not fail
val tableName = "test"
val tablePath = s"s3://$s3BucketName/$tableName"
val spark: SparkSession = SparkSession.builder()
.config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
.withExtensions(new HoodieSparkSessionExtension())
.enableHiveSupport()
.getOrCreate()
def main(args: Array[String]): Unit = {
import spark.implicits._
Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
// init the table on S3 with sample data
writeDF(List(
(1, "first", 2020, Timestamp.from(Instant.now())),
(2, "second", 2020, Timestamp.from(Instant.now())),
(3, "third", 2020, Timestamp.from(Instant.now())),
(4, "fourth", 2020, Timestamp.from(Instant.now())),
(5, "fifth", 2020, Timestamp.from(Instant.now()))
).toDF("id", "description", "year", "mod_date"))
// scenario1WithoutHoodieTableMetaClient()
// scenario2WithHoodieTableMetaClient()
// scenario3WithHoodieTableMetaClientAndHiveCloseCurrent()
}
def writeDF(df: DataFrame): Unit = {
new HudiUpsertTarget(
tableName = tableName,
databaseName = glueDatabaseName,
s3BucketName = s3BucketName,
partitionColumns = List("year"),
recordKeyColumns = List("id"),
preCombineKeyColumn = "mod_date"
).writeEntity(df)
}
def readTable(): DataFrame = {
spark.read
.format("org.apache.hudi")
.option(DataSourceReadOptions.QUERY_TYPE.key(), DataSourceReadOptions.QUERY_TYPE_SNAPSHOT_OPT_VAL)
.load(tablePath)
}
def queryHudiTimeline(): Unit = {
val fs = FSUtils.getFs(tablePath, spark.sparkContext.hadoopConfiguration)
val metaClient = HoodieTableMetaClient.builder().setConf(fs.getConf).setBasePath(tablePath).build()
val ongoingCommits = metaClient.getActiveTimeline.getCommitsTimeline
.filterInflightsAndRequested()
.getInstants.iterator().asScala.toList
println(ongoingCommits)
}
def scenario1WithoutHoodieTableMetaClient(): Unit = {
val sourceDF = readTable()
writeDF(sourceDF)
}
def scenario2WithHoodieTableMetaClient(): Unit = {
queryHudiTimeline()
val sourceDF = readTable()
writeDF(sourceDF)
}
def scenario3WithHoodieTableMetaClientAndHiveCloseCurrent(): Unit = {
queryHudiTimeline()
val sourceDF = readTable()
Hive.closeCurrent()
writeDF(sourceDF)
}
}
trait HudiWriterOptions {
def tableName: String
def databaseName: String
def s3BucketName: String
def fullTableName: String = s"$databaseName.$tableName"
def s3LocationPath: String = s"s3://$s3BucketName/$tableName/"
def partitionColumns: List[String]
def recordKeyColumns: List[String]
def preCombineKeyColumn: String
def hudiTableOptions = Map(
TABLE_TYPE.key() -> COW_TABLE_TYPE_OPT_VAL,
OPERATION.key() -> UPSERT_OPERATION_OPT_VAL,
TBL_NAME.key() -> tableName,
RECORDKEY_FIELD.key() -> recordKeyColumns.mkString(","),
PARTITIONPATH_FIELD.key() -> partitionColumns.mkString(","),
KEYGENERATOR_CLASS_NAME.key() -> classOf[ComplexKeyGenerator].getName,
PRECOMBINE_FIELD.key() -> preCombineKeyColumn,
URL_ENCODE_PARTITIONING.key() -> "false"
)
def sqlTableOptions = Map(
SQL_KEY_TABLE_TYPE.sqlKeyName -> HoodieOptionConfig.SQL_VALUE_TABLE_TYPE_COW,
SQL_KEY_TABLE_PRIMARY_KEY.sqlKeyName -> hudiTableOptions(RECORDKEY_FIELD.key()),
SQL_KEY_PRECOMBINE_FIELD.sqlKeyName -> hudiTableOptions(PRECOMBINE_FIELD.key())
)
def hiveTableOptions = Map(
HIVE_SYNC_MODE.key() -> HiveSyncMode.HMS.name(),
HIVE_SYNC_ENABLED.key() -> "true",
HIVE_DATABASE.key() -> databaseName,
HIVE_TABLE.key() -> hudiTableOptions(TBL_NAME.key()),
HIVE_PARTITION_FIELDS.key() -> hudiTableOptions(PARTITIONPATH_FIELD.key()),
HIVE_PARTITION_EXTRACTOR_CLASS.key() -> classOf[MultiPartKeysValueExtractor].getName,
HIVE_STYLE_PARTITIONING.key() -> "true",
HIVE_SUPPORT_TIMESTAMP_TYPE.key() -> "true",
HIVE_TABLE_SERDE_PROPERTIES.key() -> ConfigUtils.configToString(sqlTableOptions.asJava)
)
def writerOptions: Map[String, String] = hudiTableOptions ++ hiveTableOptions
}
class HudiUpsertTarget(override val tableName: String,
override val databaseName: String,
override val s3BucketName: String,
override val partitionColumns: List[String],
override val recordKeyColumns: List[String],
override val preCombineKeyColumn: String
) extends HudiWriterOptions {
lazy val spark: SparkSession = SparkSession.active
def writeEntity(dataFrame: DataFrame): Unit = {
if (!spark.catalog.tableExists(databaseName, tableName)) {
dataFrame.write
.format("org.apache.hudi")
.options(writerOptions)
.mode(SaveMode.Append)
.save(s3LocationPath)
} else {
upsertData(dataFrame)
}
}
private def upsertData(dataFrame: DataFrame) = {
val mergeOnStatement = recordKeyColumns
.map(k => s"hudi_target.$k = hudi_input.$k")
.mkString(" AND ")
val mergeIntoStatement =
s"""MERGE INTO $databaseName.$tableName AS hudi_target
| USING (SELECT * FROM source_data_set) hudi_input
| ON $mergeOnStatement
| WHEN MATCHED THEN UPDATE SET *
| WHEN NOT MATCHED THEN INSERT *
|""".stripMargin
dataFrame.createTempView("source_data_set")
spark.sql(mergeIntoStatement)
spark.catalog.dropTempView("source_data_set")
}
}
```
**Expected behavior**
Hive sync should succeed and the job should complete without failure - it should detect that the database and table already exist in the Glue catalog and that there is no schema change.
**Environment Description**
* Running on AWS Glue 2.0
* Hudi version : 0.10.1
* Spark version : 2.4
* Storage (HDFS/S3/GCS..) : S3
**Additional context**
Comparing Scenarios 1 and 3 to Scenario 2, I noticed that in Scenario 2 it does not create the AWS Glue Client to sync the table metadata. Here is the log line that I can find for scenarios 1 and 3 but not for 2:
```
metastore.AWSGlueClientFactory (AWSGlueClientFactory.java:newClient(55)): Setting glue service endpoint to https://glue.eu-west-1.amazonaws.com
```
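If it helps, the scenario 3 workaround can be wrapped in a small helper, so any code path that builds a `HoodieTableMetaClient` does not leave a stale thread-local Hive session behind. This is only a sketch based on the behavior observed above - the assumption (not verified against Hudi's source) is that the cached metastore client created during the timeline query is what later prevents the Glue client from being created, and `withFreshHiveSession` is a made-up helper name:

```scala
import org.apache.hadoop.hive.ql.metadata.Hive

// Sketch of the scenario-3 workaround: Hive.closeCurrent() discards the
// thread-local Hive session and its cached metastore client, forcing the
// next hive-sync to build a fresh client (which, on Glue, is what produces
// the AWSGlueClientFactory log line above).
def withFreshHiveSession[T](body: => T): T = {
  Hive.closeCurrent() // drop any client left behind by earlier metadata reads
  body
}

// usage: withFreshHiveSession { writeDF(sourceDF) }
```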
**Stacktrace**
```
2022-05-26 11:23:08,800 ERROR [main] glue.ProcessLauncher (Logging.scala:logError(91)): Exception in User Class
org.apache.hudi.exception.HoodieException: Got runtime exception when hive syncing test
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:118)
at org.apache.hudi.HoodieSparkSqlWriter$.org$apache$hudi$HoodieSparkSqlWriter$$syncHive(HoodieSparkSqlWriter.scala:539)
at org.apache.hudi.HoodieSparkSqlWriter$$anonfun$metaSync$2.apply(HoodieSparkSqlWriter.scala:595)
at org.apache.hudi.HoodieSparkSqlWriter$$anonfun$metaSync$2.apply(HoodieSparkSqlWriter.scala:591)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
at org.apache.hudi.HoodieSparkSqlWriter$.metaSync(HoodieSparkSqlWriter.scala:591)
at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:664)
at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:284)
at org.apache.spark.sql.hudi.command.MergeIntoHoodieTableCommand.executeUpsert(MergeIntoHoodieTableCommand.scala:285)
at org.apache.spark.sql.hudi.command.MergeIntoHoodieTableCommand.run(MergeIntoHoodieTableCommand.scala:155)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3364)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3363)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:194)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
at HudiUpsertTarget.upsertData(test.scala:185)
at HudiUpsertTarget.writeEntity(test.scala:167)
at TestJob$.writeDF(test.scala:60)
at TestJob$.scenario2WithHoodieTableMetaClient(test.scala:87)
at TestJob$.main(test.scala:48)
at TestJob.main(test.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.amazonaws.services.glue.SparkProcessLauncherPlugin$class.invoke(ProcessLauncher.scala:48)
at com.amazonaws.services.glue.ProcessLauncher$$anon$1.invoke(ProcessLauncher.scala:78)
at com.amazonaws.services.glue.ProcessLauncher.launch(ProcessLauncher.scala:142)
at com.amazonaws.services.glue.ProcessLauncher$.main(ProcessLauncher.scala:30)
at com.amazonaws.services.glue.ProcessLauncher.main(ProcessLauncher.scala)
Caused by: org.apache.hudi.hive.HoodieHiveSyncException: failed to create table test
at org.apache.hudi.hive.ddl.HMSDDLExecutor.createTable(HMSDDLExecutor.java:129)
at org.apache.hudi.hive.HoodieHiveClient.createTable(HoodieHiveClient.java:213)
at org.apache.hudi.hive.HiveSyncTool.syncSchema(HiveSyncTool.java:243)
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:184)
at org.apache.hudi.hive.HiveSyncTool.doSync(HiveSyncTool.java:129)
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:115)
... 37 more
Caused by: InvalidObjectException(message:databaseName)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1453)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at com.sun.proxy.$Proxy63.create_table_with_environment_context(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.create_table_with_environment_context(HiveMetaStoreClient.java:2050)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.create_table_with_environment_context(SessionHiveMetaStoreClient.java:97)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:669)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:657)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy64.createTable(Unknown Source)
at org.apache.hudi.hive.ddl.HMSDDLExecutor.createTable(HMSDDLExecutor.java:126)
... 42 more
```
768,505 | 26,966,726,648 | IssuesEvent | 2023-02-08 23:11:56 | WordPress/openverse-frontend | https://api.github.com/repos/WordPress/openverse-frontend | closed | Audio keeps playing on a single image result | good first issue help wanted priority: low goal: fix aspect: interface | ## Description
On desktop, audio keeps playing when the image result is clicked after playing an audio from the results page.
## Reproduction
1. Search for something on desktop in the All content view.
2. Play a longer audio.
3. Click on an image result.
4. Hear error: the audio keeps playing, and there is no UI control for the user to stop it.
## Additional context
First reported by @dhruvkb.
| 1.0 |
| non_test | audio keeps playing on a single image result description on desktop audio keeps playing when the image result is clicked after playing an audio from the results page reproduction search for something on desktop in the all content view play a longer audio click on an image result hear error the audio keeps playing and there is no ui control for the user to stop it additional context first reported by dhruvkb | 0 |
54,435 | 23,265,585,807 | IssuesEvent | 2022-08-04 17:02:32 | BCDevOps/developer-experience | https://api.github.com/repos/BCDevOps/developer-experience | opened | Certbot STRA | ops and shared services | **Describe the issue**
Create an STRA for Certbot
**What is the Value/Impact?**
Identify risks/mitigations. Provides critical information to product/platform owners on use of tool.
**What is the plan? How will this get completed?**
What are the key components of this task?
- [ ] Document architecture
- [ ] Document usage
- [ ] Identify risks
- [ ] Create SoAR
- [ ] Review and sign-off
**Identify any dependencies**
- [ ] Details collection from Billy
- [ ] Review with Olena
- [ ] Sign-off by MISO, CIO
**Definition of done**
- [ ] Report complete
- [ ] SoAR signed off
| 1.0 |
| non_test | certbot stra describe the issue create an stra for certbot what is the value impact identify risks mitigations provides critical information to product platform owners on use of tool what is the plan how will this get completed what are the key components of this task document architecture document usage identify risks create soar review and sign off identify any dependencies details collection from billy review with olena sign off by miso cio definition of done report complete soar signed off | 0 |
61,681 | 6,748,236,424 | IssuesEvent | 2017-10-22 03:21:22 | VOREStation/VOREStation | https://api.github.com/repos/VOREStation/VOREStation | closed | Airlock wires disassembly | Pri: 3-Moderate Status: No Response Task: Needs Testing Type: Polaris Bug | You can spamclick to cut wires out of airlocks and get infinite wires when the do_after completes. It should probably check AFTER the do_after if it's still in the right state for this, and should have set_click_cooldown | 1.0 | test | airlock wires disassembly you can spamclick to cut wires out of airlocks and get infinite wires when the do after completes it should probably check after the do after if it s still in the right state for this and should have set click cooldown | 1 |
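The fix the report above suggests — re-check the airlock's state after the `do_after` delay and apply a click cooldown — can be sketched outside the game's own language. This `Airlock` type and its fields are hypothetical stand-ins (the real codebase is DM, not Go); the sketch only illustrates the two guards:

```go
package main

import (
	"fmt"
	"time"
)

// Airlock is a hypothetical stand-in for the game object (the real code is DM).
type Airlock struct {
	WiresExposed bool
	onCooldown   bool
}

// CutWire applies both suggested guards: a click cooldown, and a re-check of
// the airlock's state after the delay before handing out a wire.
func (a *Airlock) CutWire(delay time.Duration) bool {
	if a.onCooldown || !a.WiresExposed {
		return false
	}
	a.onCooldown = true
	defer func() { a.onCooldown = false }()

	time.Sleep(delay) // stands in for do_after

	if !a.WiresExposed { // state may have changed during the wait
		return false
	}
	a.WiresExposed = false
	return true
}

func main() {
	a := &Airlock{WiresExposed: true}
	fmt.Println(a.CutWire(time.Millisecond)) // true: first cut yields a wire
	fmt.Println(a.CutWire(time.Millisecond)) // false: wires are already gone
}
```

With the post-delay re-check in place, repeated clicks can no longer mint wires from an already-stripped airlock.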
172,149 | 13,263,346,671 | IssuesEvent | 2020-08-21 00:15:14 | kubernetes/test-infra | https://api.github.com/repos/kubernetes/test-infra | closed | jobs: remove jobs that have been continuously failing for over N days | area/jobs kind/cleanup lifecycle/frozen priority/important-soon sig/testing | I'm going to suggest N=120 for now. I'd like to do this one more time as a human to learn what the right detection methods are for straight-up-failing vs. seriously flaky, how/when to reach people in case there are some brave souls out there who want to save these jobs, etc.
Next step would be to automate at least some portion of this.
ref: https://github.com/kubernetes/test-infra/issues/2528#issuecomment-392936145 | 1.0 | test | jobs remove jobs that have been continuously failing for over n days i m going to suggest n for now i d like to do this one more time as a human to learn what the right detection methods are for straight up failing vs seriously flaky how when to reach people in case there are some brave souls out there who want to save these jobs etc next step would be to automate at least some portion of this ref | 1 |
226,600 | 17,356,884,795 | IssuesEvent | 2021-07-29 15:19:56 | udistrital/financiera_documentacion | https://api.github.com/repos/udistrital/financiera_documentacion | opened | Prepare the presentation for the meeting with OAP | Documentation | One of the presentations that has been worked on is adjusted and shared with the engineer Alex and the manager, Beatriz
https://docs.google.com/presentation/d/1PqcHGzgWeQLDKCsxRjdkCrvCByEG0M6OwAFamGEDKg8/edit?usp=sharing | 1.0 | non_test | elaborar presentaciรณn para la reuniรณ con oap se ajusta una de las presentaciones que se han venido trabajando y se comparte con el ingeniero alex y la jefe beatriz | 0 |
756,652 | 26,479,983,317 | IssuesEvent | 2023-01-17 14:03:11 | WordPress/openverse-frontend | https://api.github.com/repos/WordPress/openverse-frontend | closed | New spacings in Footer component | priority: medium goal: improvement aspect: interface | ## Problem
After revisiting the WordPress brand mention in the new homepage (#1433), we decided to use the WordPress symbol's simplified version (#2068). The new homepage layout simplifies the spacing approach, so the footer needs to follow that rule.
## Description
The spacing change only affects **Footer internal**. The component was updated in the Design Library.

The changes are:
#### `xs` breakpoint
* `24px` padding for all sides.
* `16px` gap between locale component and WordPress mention.
#### `sm` and `md` breakpoints
* `24px` padding for all sides.
#### `lg`, `xl`, and `2xl` breakpoints
* `40px` padding for all sides.
When placing the component on the homepage, it has a transparent background and no border to blend with the layout. See #1433 for more details.
## Additional context
The new footer was recently integrated in #2015 | 1.0 | non_test | new spacings in footer component problem after revisiting the wordpress brand mention in the new homepage we decided to use the wordpress symbol simplified version the new homepage layout simplifies the spacing approach so the footer needs to follow that rule description the spacing change only affects footer internal the component was updated in the design library the changes are xs breakpoint padding for all sides gap between locale component and wordpress mention sm and md breakpoints padding for all sides lg xl and breakpoints padding for all sides when placing the component in homepage it has a transparent background and no border to blend with the layout see for more details additional context the new footer was recently integrated in | 0 |
20,765 | 3,836,085,583 | IssuesEvent | 2016-04-01 16:37:19 | Marcelo-Theodoro/web2commerce | https://api.github.com/repos/Marcelo-Theodoro/web2commerce | closed | info: adjust the fale_conosco form. | Testes | The Company, City, Phone, and State (UF) fields are optional.
When the form is submitted, show a message saying it was sent successfully.
Center the fields in the views. | 1.0 | test | info fale conosco ajustar o form campos empresa cidade telefone uf cidade sรฃo opcionais quando der o submit mostrar mensagem que foi enviado com sucesso centralizar campos na views | 1 |
319,079 | 9,733,963,167 | IssuesEvent | 2019-05-31 11:07:29 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.mercadolibre.com.ar - desktop site instead of mobile site | browser-firefox engine-gecko priority-important | <!-- @browser: Firefox 67.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:67.0) Gecko/20100101 Firefox/67.0 -->
<!-- @reported_with: web -->
**URL**: https://www.mercadolibre.com.ar/
**Browser / Version**: Firefox 67.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Desktop site instead of mobile site
**Description**: images wont load
**Steps to Reproduce**:
images wont load
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | non_test | desktop site instead of mobile site url browser version firefox operating system windows tested another browser yes problem type desktop site instead of mobile site description images wont load steps to reproduce images wont load browser configuration none from with โค๏ธ | 0 |
252,746 | 21,628,890,478 | IssuesEvent | 2022-05-05 07:34:14 | tendermint/spn | https://api.github.com/repos/tendermint/spn | closed | Remove `MsgSendVouchers` operation in simulation | test campaign | `bank` simulation tests simulate operations for sending any tokens. Therefore, this operation is not necessary for simulation entropy
We should remove `SimulateMsgSendVouchers` from `campaign` simulation tests | 1.0 | test | remove msgsendvouchers operation in simulation bank simulation tests simulate operations for sending any tokens therefore this operation is not necessary for simulation entropy we should remove simulatemsgsendvouchers from campaign simulation tests | 1 |
313,883 | 26,960,646,223 | IssuesEvent | 2023-02-08 17:54:56 | celo-org/celo-monorepo | https://api.github.com/repos/celo-org/celo-monorepo | opened | [FLAKEY TEST] odis-test -> phone-number-privacy/combiner -> pnpService -> with n=3, t=2 -> when signers are operating correctly -> /quotaStatus -> Should respond with 200 on repeated valid requests | FLAKEY phone-number-privacy odis-test | Discovered at commit bcd1e8fe517aba1374b3962c5d0741e07698fe22
Attempt No. 1:
Error: expect(received).toBe(expected) // Object.is equality
Expected: 200
Received: 500
at Object.<anonymous> (/home/circleci/app/packages/phone-number-privacy/combiner/test/integration/pnp.test.ts:418:31)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
at _callCircusTest (/home/circleci/app/node_modules/jest-circus/build/run.js:212:5)
at _runTest (/home/circleci/app/node_modules/jest-circus/build/run.js:149:3)
at _runTestsForDescribeBlock (/home/circleci/app/node_modules/jest-circus/build/run.js:63:9)
at _runTestsForDescribeBlock (/home/circleci/app/node_modules/jest-circus/build/run.js:57:9)
at _runTestsForDescribeBlock (/home/circleci/app/node_modules/jest-circus/build/run.js:57:9)
at _runTestsForDescribeBlock (/home/circleci/app/node_modules/jest-circus/build/run.js:57:9)
at _runTestsForDescribeBlock (/home/circleci/app/node_modules/jest-circus/build/run.js:57:9)
at run (/home/circleci/app/node_modules/jest-circus/build/run.js:25:3)
at runAndTransformResultsToJestFormat (/home/circleci/app/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:176:21)
at jestAdapter (/home/circleci/app/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:109:19)
at runTestInternal (/home/circleci/app/node_modules/jest-runner/build/runTest.js:380:16)
at runTest (/home/circleci/app/node_modules/jest-runner/build/runTest.js:472:34)
Attempt No. 2:
Test Passed!
| 1.0 |
| test | odis test phone number privacy combiner pnpservice with n t when signers are operating correctly quotastatus should respond with on repeated valid requests discovered at commit attempt no error expect received tobe expected object is equality expected received at object home circleci app packages phone number privacy combiner test integration pnp test ts at processticksandrejections internal process task queues js at callcircustest home circleci app node modules jest circus build run js at runtest home circleci app node modules jest circus build run js at runtestsfordescribeblock home circleci app node modules jest circus build run js at runtestsfordescribeblock home circleci app node modules jest circus build run js at runtestsfordescribeblock home circleci app node modules jest circus build run js at runtestsfordescribeblock home circleci app node modules jest circus build run js at runtestsfordescribeblock home circleci app node modules jest circus build run js at run home circleci app node modules jest circus build run js at runandtransformresultstojestformat home circleci app node modules jest circus build legacy code todo rewrite jestadapterinit js at jestadapter home circleci app node modules jest circus build legacy code todo rewrite jestadapter js at runtestinternal home circleci app node modules jest runner build runtest js at runtest home circleci app node modules jest runner build runtest js attempt no test passed | 1 |
203,959 | 15,395,922,334 | IssuesEvent | 2021-03-03 19:54:10 | vivitek/deep-thought | https://api.github.com/repos/vivitek/deep-thought | closed | DHCP request test - Assigning an IP | test | ** Feature you want to test or to be tested **
Test if dhcp server does receive and bound ip addresses in all cases
### Containers targeted
Concerns container: **DHCP**
### Method
**Unit testing**
Can be tested in TravisCI: **perhaps**
Language/Framework: mocha
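The planned tests are mocha, but the property to check — each client is bound a unique address inside the pool, a repeat request re-binds the same lease, and an exhausted pool refuses — can be sketched language-neutrally. This `Pool` is a hypothetical model for illustration, not the DHCP container's real implementation:

```go
package main

import "fmt"

// Pool is a hypothetical lease pool used only to sketch the properties the
// planned unit tests should assert about IP assignment.
type Pool struct {
	next, last int
	leases     map[string]int // MAC -> host part
}

func NewPool(first, last int) *Pool {
	return &Pool{next: first, last: last, leases: map[string]int{}}
}

// Assign returns the existing lease for a client or binds the next free one.
func (p *Pool) Assign(mac string) (string, bool) {
	if host, ok := p.leases[mac]; ok {
		return fmt.Sprintf("192.168.0.%d", host), true
	}
	if p.next > p.last {
		return "", false // pool exhausted
	}
	p.leases[mac] = p.next
	p.next++
	return fmt.Sprintf("192.168.0.%d", p.leases[mac]), true
}

func main() {
	p := NewPool(10, 11)
	a, _ := p.Assign("aa:bb")
	b, _ := p.Assign("cc:dd")
	a2, _ := p.Assign("aa:bb")
	_, ok := p.Assign("ee:ff")
	fmt.Println(a, b, a2, ok) // 192.168.0.10 192.168.0.11 192.168.0.10 false
}
```

The equivalent mocha suite would assert the same three properties against the real DHCP container's bound addresses.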
| 1.0 |
| test | dhcp request test assigning an ip feature you want to test or to be tested test if dhcp server does receive and bound ip addresses in all cases containers targeted concerns container dhcp method unit testing can be tested in travisci perhaps language framework mocha | 1 |
193,414 | 14,652,643,452 | IssuesEvent | 2020-12-28 02:41:55 | github-vet/rangeloop-pointer-findings | https://api.github.com/repos/github-vet/rangeloop-pointer-findings | closed | vmware/vic: cmd/vic-machine/common/proxy_test.go; 8 LoC | fresh test tiny |
Found a possible issue in [vmware/vic](https://www.github.com/vmware/vic) at [cmd/vic-machine/common/proxy_test.go](https://github.com/vmware/vic/blob/2be3feb55003a3ee679c4ff90921f966044111d4/cmd/vic-machine/common/proxy_test.go#L71-L78)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to bhttp was used in a composite literal at line 73
[Click here to see the code in its original context.](https://github.com/vmware/vic/blob/2be3feb55003a3ee679c4ff90921f966044111d4/cmd/vic-machine/common/proxy_test.go#L71-L78)
<details>
<summary>Click here to show the 8 line(s) of Go which triggered the analyzer.</summary>
```go
for _, bhttp := range burls {
for _, ghttps := range gurls {
bproxy := Proxies{HTTPProxy: &bhttp, HTTPSProxy: &ghttps}
_, _, _, err := bproxy.ProcessProxies()
assert.Error(t, err, "Expected %s to be rejected", bhttp)
}
}
```
</details>
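For context on why the analyzer flags `&bhttp` in the snippet above: before Go 1.22, the range variable is a single reused location, so every iteration takes the same address. A simplified sketch, using strings in place of the real URL and `Proxies` types (whether the snippet above is actually a bug depends on how `ProcessProxies` uses the pointer):

```go
package main

import "fmt"

func main() {
	burls := []string{"a", "b"} // simplified stand-in for the bad URLs

	// Pre-Go 1.22: bhttp is one reused variable, so both stored pointers
	// alias it and end up observing the last value it held.
	var shared []*string
	for _, bhttp := range burls {
		shared = append(shared, &bhttp)
	}

	// Usual fix: copy into a fresh per-iteration variable before taking
	// its address (Go 1.22 makes the loop variable per-iteration anyway).
	var fixed []*string
	for _, bhttp := range burls {
		bhttp := bhttp
		fixed = append(fixed, &bhttp)
	}

	fmt.Println(*shared[0], *shared[1]) // on Go <1.22: b b
	fmt.Println(*fixed[0], *fixed[1])   // a b
}
```

Here the pointers are only read inside the same iteration by `ProcessProxies`, which is why instances like this are often classified as mitigated rather than as bugs.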
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 2be3feb55003a3ee679c4ff90921f966044111d4
| 1.0 |
| test | vmware vic cmd vic machine common proxy test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message reference to bhttp was used in a composite literal at line click here to show the line s of go which triggered the analyzer go for bhttp range burls for ghttps range gurls bproxy proxies httpproxy bhttp httpsproxy ghttps err bproxy processproxies assert error t err expected s to be rejected bhttp leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 1 |
165,994 | 20,691,584,741 | IssuesEvent | 2022-03-11 01:10:15 | swagger-api/swagger-codegen | https://api.github.com/repos/swagger-api/swagger-codegen | reopened | CVE-2018-1272 (High) detected in spring-core-4.2.7.RELEASE.jar, spring-core-4.3.9.RELEASE.jar | security vulnerability | ## CVE-2018-1272 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spring-core-4.2.7.RELEASE.jar</b>, <b>spring-core-4.3.9.RELEASE.jar</b></summary>
<p>
<details><summary><b>spring-core-4.2.7.RELEASE.jar</b></summary>
<p>Spring Core</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /samples/composed/client/petstore/java/retrofit2-play25/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/4.2.7.RELEASE/3d08f6f68e0654bf4be50559aec4218334189583/spring-core-4.2.7.RELEASE.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/4.2.7.RELEASE/3d08f6f68e0654bf4be50559aec4218334189583/spring-core-4.2.7.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- play-java-ws_2.11-2.5.14.jar (Root Library)
- play-java_2.11-2.5.14.jar
- :x: **spring-core-4.2.7.RELEASE.jar** (Vulnerable Library)
</details>
<details><summary><b>spring-core-4.3.9.RELEASE.jar</b></summary>
<p>Spring Core</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /samples/client/petstore/java/resttemplate-withXml/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/4.3.9.RELEASE/430b7298bfb85d66fb61e19ca8f06231b911e9f5/spring-core-4.3.9.RELEASE.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/4.3.9.RELEASE/430b7298bfb85d66fb61e19ca8f06231b911e9f5/spring-core-4.3.9.RELEASE.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/4.3.9.RELEASE/430b7298bfb85d66fb61e19ca8f06231b911e9f5/spring-core-4.3.9.RELEASE.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/4.3.9.RELEASE/430b7298bfb85d66fb61e19ca8f06231b911e9f5/spring-core-4.3.9.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-web-4.3.9.RELEASE.jar (Root Library)
- :x: **spring-core-4.3.9.RELEASE.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/swagger-api/swagger-codegen/commit/4b7a8d7d7384aa6a27d6309c35ade0916edae7ed">4b7a8d7d7384aa6a27d6309c35ade0916edae7ed</a></p>
<p>Found in base branches: <b>3.0.0, master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Framework, versions 5.0 prior to 5.0.5 and versions 4.3 prior to 4.3.15 and older unsupported versions, provide client-side support for multipart requests. When a Spring MVC or Spring WebFlux server application (server A) receives input from a remote client, and then uses that input to make a multipart request to another server (server B), it can be exposed to an attack, where an extra multipart is inserted in the content of the request from server A, causing server B to use the wrong value for a part it expects. This could lead to privilege escalation, for example, if the part content represents a username or user roles.
<p>Publish Date: 2018-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1272>CVE-2018-1272</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2018-1272">https://tanzu.vmware.com/security/cve-2018-1272</a></p>
<p>Release Date: 2018-04-06</p>
<p>Fix Resolution: org.springframework:spring-core:4.3.15.RELEASE,5.0.5.RELEASE;org.springframework:spring-web:4.3.15.RELEASE,5.0.5.RELEASE</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework","packageName":"spring-core","packageVersion":"4.2.7.RELEASE","packageFilePaths":["/samples/composed/client/petstore/java/retrofit2-play25/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"com.typesafe.play:play-java-ws_2.11:2.5.14;com.typesafe.play:play-java_2.11:2.5.14;org.springframework:spring-core:4.2.7.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-core:4.3.15.RELEASE,5.0.5.RELEASE;org.springframework:spring-web:4.3.15.RELEASE,5.0.5.RELEASE","isBinary":false},{"packageType":"Java","groupId":"org.springframework","packageName":"spring-core","packageVersion":"4.3.9.RELEASE","packageFilePaths":["/samples/client/petstore/java/resttemplate-withXml/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.springframework:spring-web:4.3.9.RELEASE;org.springframework:spring-core:4.3.9.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-core:4.3.15.RELEASE,5.0.5.RELEASE;org.springframework:spring-web:4.3.15.RELEASE,5.0.5.RELEASE","isBinary":false}],"baseBranches":["3.0.0","master"],"vulnerabilityIdentifier":"CVE-2018-1272","vulnerabilityDetails":"Spring Framework, versions 5.0 prior to 5.0.5 and versions 4.3 prior to 4.3.15 and older unsupported versions, provide client-side support for multipart requests. When Spring MVC or Spring WebFlux server application (server A) receives input from a remote client, and then uses that input to make a multipart request to another server (server B), it can be exposed to an attack, where an extra multipart is inserted in the content of the request from server A, causing server B to use the wrong value for a part it expects. 
This could to lead privilege escalation, for example, if the part content represents a username or user roles.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1272","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2018-1272 (High) detected in spring-core-4.2.7.RELEASE.jar, spring-core-4.3.9.RELEASE.jar - ## CVE-2018-1272 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spring-core-4.2.7.RELEASE.jar</b>, <b>spring-core-4.3.9.RELEASE.jar</b></p></summary>
<p>
<details><summary><b>spring-core-4.2.7.RELEASE.jar</b></p></summary>
<p>Spring Core</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /samples/composed/client/petstore/java/retrofit2-play25/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/4.2.7.RELEASE/3d08f6f68e0654bf4be50559aec4218334189583/spring-core-4.2.7.RELEASE.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/4.2.7.RELEASE/3d08f6f68e0654bf4be50559aec4218334189583/spring-core-4.2.7.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- play-java-ws_2.11-2.5.14.jar (Root Library)
- play-java_2.11-2.5.14.jar
- :x: **spring-core-4.2.7.RELEASE.jar** (Vulnerable Library)
</details>
<details><summary><b>spring-core-4.3.9.RELEASE.jar</b></p></summary>
<p>Spring Core</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /samples/client/petstore/java/resttemplate-withXml/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/4.3.9.RELEASE/430b7298bfb85d66fb61e19ca8f06231b911e9f5/spring-core-4.3.9.RELEASE.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/4.3.9.RELEASE/430b7298bfb85d66fb61e19ca8f06231b911e9f5/spring-core-4.3.9.RELEASE.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/4.3.9.RELEASE/430b7298bfb85d66fb61e19ca8f06231b911e9f5/spring-core-4.3.9.RELEASE.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/4.3.9.RELEASE/430b7298bfb85d66fb61e19ca8f06231b911e9f5/spring-core-4.3.9.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-web-4.3.9.RELEASE.jar (Root Library)
- :x: **spring-core-4.3.9.RELEASE.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/swagger-api/swagger-codegen/commit/4b7a8d7d7384aa6a27d6309c35ade0916edae7ed">4b7a8d7d7384aa6a27d6309c35ade0916edae7ed</a></p>
<p>Found in base branches: <b>3.0.0, master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Framework, versions 5.0 prior to 5.0.5 and versions 4.3 prior to 4.3.15 and older unsupported versions, provide client-side support for multipart requests. When a Spring MVC or Spring WebFlux server application (server A) receives input from a remote client, and then uses that input to make a multipart request to another server (server B), it can be exposed to an attack, where an extra multipart is inserted in the content of the request from server A, causing server B to use the wrong value for a part it expects. This could lead to privilege escalation, for example, if the part content represents a username or user roles.
<p>Publish Date: 2018-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1272>CVE-2018-1272</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2018-1272">https://tanzu.vmware.com/security/cve-2018-1272</a></p>
<p>Release Date: 2018-04-06</p>
<p>Fix Resolution: org.springframework:spring-core:4.3.15.RELEASE,5.0.5.RELEASE;org.springframework:spring-web:4.3.15.RELEASE,5.0.5.RELEASE</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework","packageName":"spring-core","packageVersion":"4.2.7.RELEASE","packageFilePaths":["/samples/composed/client/petstore/java/retrofit2-play25/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"com.typesafe.play:play-java-ws_2.11:2.5.14;com.typesafe.play:play-java_2.11:2.5.14;org.springframework:spring-core:4.2.7.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-core:4.3.15.RELEASE,5.0.5.RELEASE;org.springframework:spring-web:4.3.15.RELEASE,5.0.5.RELEASE","isBinary":false},{"packageType":"Java","groupId":"org.springframework","packageName":"spring-core","packageVersion":"4.3.9.RELEASE","packageFilePaths":["/samples/client/petstore/java/resttemplate-withXml/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.springframework:spring-web:4.3.9.RELEASE;org.springframework:spring-core:4.3.9.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-core:4.3.15.RELEASE,5.0.5.RELEASE;org.springframework:spring-web:4.3.15.RELEASE,5.0.5.RELEASE","isBinary":false}],"baseBranches":["3.0.0","master"],"vulnerabilityIdentifier":"CVE-2018-1272","vulnerabilityDetails":"Spring Framework, versions 5.0 prior to 5.0.5 and versions 4.3 prior to 4.3.15 and older unsupported versions, provide client-side support for multipart requests. When Spring MVC or Spring WebFlux server application (server A) receives input from a remote client, and then uses that input to make a multipart request to another server (server B), it can be exposed to an attack, where an extra multipart is inserted in the content of the request from server A, causing server B to use the wrong value for a part it expects. 
This could to lead privilege escalation, for example, if the part content represents a username or user roles.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1272","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_test | cve high detected in spring core release jar spring core release jar cve high severity vulnerability vulnerable libraries spring core release jar spring core release jar spring core release jar spring core library home page a href path to dependency file samples composed client petstore java build gradle path to vulnerable library home wss scanner gradle caches modules files org springframework spring core release spring core release jar home wss scanner gradle caches modules files org springframework spring core release spring core release jar dependency hierarchy play java ws jar root library play java jar x spring core release jar vulnerable library spring core release jar spring core library home page a href path to dependency file samples client petstore java resttemplate withxml build gradle path to vulnerable library home wss scanner gradle caches modules files org springframework spring core release spring core release jar home wss scanner gradle caches modules files org springframework spring core release spring core release jar home wss scanner gradle caches modules files org springframework spring core release spring core release jar home wss scanner gradle caches modules files org springframework spring core release spring core release jar dependency hierarchy spring web release jar root library x spring core release jar vulnerable library found in head commit a href found in base branches master vulnerability details spring framework versions prior to and versions prior to and older unsupported versions provide client side support for multipart requests when 
spring mvc or spring webflux server application server a receives input from a remote client and then uses that input to make a multipart request to another server server b it can be exposed to an attack where an extra multipart is inserted in the content of the request from server a causing server b to use the wrong value for a part it expects this could to lead privilege escalation for example if the part content represents a username or user roles publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring core release release org springframework spring web release release isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com typesafe play play java ws com typesafe play play java org springframework spring core release isminimumfixversionavailable true minimumfixversion org springframework spring core release release org springframework spring web release release isbinary false packagetype java groupid org springframework packagename spring core packageversion release packagefilepaths istransitivedependency true dependencytree org springframework spring web release org springframework spring core release isminimumfixversionavailable true minimumfixversion org springframework spring core release release org springframework spring web release release isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails spring framework versions prior to and versions prior to and older unsupported versions provide client side support for multipart requests when spring mvc or spring webflux server application server a 
receives input from a remote client and then uses that input to make a multipart request to another server server b it can be exposed to an attack where an extra multipart is inserted in the content of the request from server a causing server b to use the wrong value for a part it expects this could to lead privilege escalation for example if the part content represents a username or user roles vulnerabilityurl | 0 |
14,277 | 3,251,477,454 | IssuesEvent | 2015-10-19 10:00:34 | Mobicents/RestComm | https://api.github.com/repos/Mobicents/RestComm | closed | Toggle SSL verification policy using a configuration flag | Configuration in progress Security / Access Control / IDS/IPS Visual App Designer | Making https requests from Restcomm/RVD is often prevented when servers don't use certificates signed by well-known CAs. A configuration flag can be used to toggle between a strict/loose SSL behaviour.
In *loose* mode https connections to untrusted sites will be allowed, while in *strict* mode they will not.
A further improvement could be to define trusted hostnames in the configuration file so that the administrator can easily define which hosts he trusts.
Separate implementation should be done for RVD and Restcomm.
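As an illustration only (a generic Java sketch under assumed names, not RVD's or Restcomm's actual code — the `ssl.strict` property name is hypothetical), a strict/loose toggle typically selects between the JVM's default certificate validation and an accept-all `TrustManager`:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;

public class SslPolicy {
    // "strict" keeps the JVM default validation (untrusted certificates are rejected);
    // "loose" installs a TrustManager that accepts any certificate chain.
    public static SSLContext contextFor(boolean strict) {
        try {
            if (strict) {
                return SSLContext.getDefault();
            }
            TrustManager[] trustAll = { new X509TrustManager() {
                public void checkClientTrusted(X509Certificate[] chain, String authType) {}
                public void checkServerTrusted(X509Certificate[] chain, String authType) {}
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
            }};
            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, trustAll, new SecureRandom());
            return ctx;
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // Hypothetical configuration flag; the real key would live in the RVD/Restcomm config file.
        boolean strict = Boolean.parseBoolean(System.getProperty("ssl.strict", "true"));
        System.out.println(contextFor(strict).getProtocol());
    }
}
```

A per-host trust list, as suggested above, would be enforced separately, e.g. with a custom `HostnameVerifier` that consults the administrator-defined allow-list.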
 | 1.0 | Toggle SSL verification policy using a configuration flag - Making https requests from Restcomm/RVD is often prevented when servers don't use certificates signed by well-known CAs. A configuration flag can be used to toggle between a strict/loose SSL behaviour.
In *loose* mode https connections to untrusted sites will be allowed, while in *strict* mode they will not.
A further improvement could be to define trusted hostnames in the configuration file so that the administrator can easily define which hosts he trusts.
Separate implementation should be done for RVD and Restcomm.
| non_test | toggle ssl verification policy using a configuration flag making https requests from restcomm rvd is often prevented when servers don t use certificates signed from a welknown cas a configuration flag can be used to toggle between a strict loose ssl behaviour in loose mode https connections to untrusted sites will be allowed while in strict will not a further improvement could be to define trusted hostnames in the configuration file so that the administrator can easily define which hosts he trusts separate implementation should be done for rvd and restcomm | 0 |
82,172 | 23,698,468,333 | IssuesEvent | 2022-08-29 16:39:04 | dotnet/arcade | https://api.github.com/repos/dotnet/arcade | closed | Build failed: dotnet-arcade-validation-official/main #20220827.1 | Build Failed | Build [#20220827.1](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=1972227) partiallySucceeded
## :warning: : internal / dotnet-arcade-validation-official partiallySucceeded
### Summary
**Finished** - Sun, 28 Aug 2022 01:44:13 GMT
**Duration** - 100 minutes
**Requested for** - Microsoft.VisualStudio.Services.TFS
**Reason** - schedule
### Details
#### Promote Arcade to '.NET Eng - Latest' channel
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1972227/logs/386) - The latest build on 'main' branch for the 'runtime' repository was not successful.
### Changes
- [640bcc4e](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/017fb734-e4b4-4cc1-a90f-98a09ac25cd5/commit/640bcc4e96854b5165a1e87b508f700f714e9b24) - dotnet-maestro[bot] - [main] Update dependencies from dotnet/arcade (#3323)
- [e9d451e7](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/017fb734-e4b4-4cc1-a90f-98a09ac25cd5/commit/e9d451e7b27f97ef3e1a57b2b7da4e987add34ec) - Matt Galbraith - Small test fix: If the killing processes logic hit an exception it may not enumerate all processes on the machine, leaving leaked dotnet.exes (#3326)
- [1a9356e1](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/017fb734-e4b4-4cc1-a90f-98a09ac25cd5/commit/1a9356e1247a87ba47445bfa66c50222960ba878) - dotnet-maestro[bot] - Update dependencies from https://github.com/dotnet/arcade build 20220826.3 (#3322)
- [01cf2999](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/017fb734-e4b4-4cc1-a90f-98a09ac25cd5/commit/01cf29991c732653560bd13a46b34cae92ee3382) - dotnet-maestro[bot] - Update dependencies from https://github.com/dotnet/arcade build 20220826.2 (#3321)
| 1.0 | Build failed: dotnet-arcade-validation-official/main #20220827.1 - Build [#20220827.1](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=1972227) partiallySucceeded
## :warning: : internal / dotnet-arcade-validation-official partiallySucceeded
### Summary
**Finished** - Sun, 28 Aug 2022 01:44:13 GMT
**Duration** - 100 minutes
**Requested for** - Microsoft.VisualStudio.Services.TFS
**Reason** - schedule
### Details
#### Promote Arcade to '.NET Eng - Latest' channel
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1972227/logs/386) - The latest build on 'main' branch for the 'runtime' repository was not successful.
### Changes
- [640bcc4e](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/017fb734-e4b4-4cc1-a90f-98a09ac25cd5/commit/640bcc4e96854b5165a1e87b508f700f714e9b24) - dotnet-maestro[bot] - [main] Update dependencies from dotnet/arcade (#3323)
- [e9d451e7](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/017fb734-e4b4-4cc1-a90f-98a09ac25cd5/commit/e9d451e7b27f97ef3e1a57b2b7da4e987add34ec) - Matt Galbraith - Small test fix: If the killing processes logic hit an exception it may not enumerate all processes on the machine, leaving leaked dotnet.exes (#3326)
- [1a9356e1](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/017fb734-e4b4-4cc1-a90f-98a09ac25cd5/commit/1a9356e1247a87ba47445bfa66c50222960ba878) - dotnet-maestro[bot] - Update dependencies from https://github.com/dotnet/arcade build 20220826.3 (#3322)
- [01cf2999](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/017fb734-e4b4-4cc1-a90f-98a09ac25cd5/commit/01cf29991c732653560bd13a46b34cae92ee3382) - dotnet-maestro[bot] - Update dependencies from https://github.com/dotnet/arcade build 20220826.2 (#3321)
| non_test | build failed dotnet arcade validation official main build partiallysucceeded warning internal dotnet arcade validation official partiallysucceeded summary finished sun aug gmt duration minutes requested for microsoft visualstudio services tfs reason schedule details promote arcade to net eng latest channel warning the latest build on main branch for the runtime repository was not successful changes dotnet maestro update dependencies from dotnet arcade matt galbraith small test fix if the killing processes logic hit an exception it may not enumerate all processes on the machine leaving leaked dotnet exes dotnet maestro update dependencies from build dotnet maestro update dependencies from build | 0 |
4,900 | 2,755,181,130 | IssuesEvent | 2015-04-26 12:07:28 | red/red | https://api.github.com/repos/red/red | closed | Parse iteration rules do not exit consistently | Red status.built status.tested type.review | ```
red>> count: 0 parse "123" [some [(count: count + 1) skip]] count
== 3
red>> count: 0 parse "aaab" [some [(count: count + 1) "a"] "b"] count
== 4
``` | 1.0 | Parse iteration rules do not exit consistently - ```
red>> count: 0 parse "123" [some [(count: count + 1) skip]] count
== 3
red>> count: 0 parse "aaab" [some [(count: count + 1) "a"] "b"] count
== 4
``` | test | parse iteration rules do not exit consistently red count parse count red count parse aaab b count | 1 |
91,152 | 18,354,245,543 | IssuesEvent | 2021-10-08 15:56:50 | google/web-stories-wp | https://api.github.com/repos/google/web-stories-wp | closed | Story Auto Analytics: Remove feature flag code | Type: Enhancement P2 Type: Code Quality Pod: WP & Infra Group: Analytics | <!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ -->
## Feature Description
<!-- A clear and concise description of what the problem is and what you want to happen. -->
After the +1 release we want to completely remove the feature flag code.
## Alternatives Considered
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
## Additional Context
<!-- Add any other context or screenshots about the feature request. -->
| 1.0 | Story Auto Analytics: Remove feature flag code - <!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ -->
## Feature Description
<!-- A clear and concise description of what the problem is and what you want to happen. -->
After the +1 release we want to completely remove the feature flag code.
## Alternatives Considered
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
## Additional Context
<!-- Add any other context or screenshots about the feature request. -->
| non_test | story auto analytics remove feature flag code feature description after release we want to completely remove the feature flag code alternatives considered additional context | 0 |
15,994 | 2,870,250,144 | IssuesEvent | 2015-06-07 00:33:57 | pdelia/away3d | https://api.github.com/repos/pdelia/away3d | closed | screenZOffset does not work for Collada | auto-migrated Priority-Medium Type-Defect | #46 Issue by __GoogleCodeExporter__, created on: 2015-04-24T07:51:21Z
```
What steps will reproduce the problem?
var colladaObj:Object3DLoader = Collada.load(url);
colladaObj.ownCanvas = true;
colladaObj.screenZOffset = 10000;
view.scene.addChild(colladaObj);
What is the expected output? What do you see instead?
colladaObj should be drawn in the back, but it is not.
screenZOffset is ignored.
What version of the product are you using? On what operating system?
tested with svn trunk version. Windows XP.
Please provide any additional information below.
This bug is caused because screenZOffset is not properly copied.
The fix is to insert the following at line 156 of
away3d.loaders.Object3DLoader (as of rev 1398):
_result.screenZOffset = screenZOffset;
```
Original issue reported on code.google.com by `ichize...@gmail.com` on 11 Jun 2009 at 3:10 | 1.0 | screenZOffset does not work for Collada - #46 Issue by __GoogleCodeExporter__, created on: 2015-04-24T07:51:21Z
```
What steps will reproduce the problem?
var colladaObj:Object3DLoader = Collada.load(url);
colladaObj.ownCanvas = true;
colladaObj.screenZOffset = 10000;
view.scene.addChild(colladaObj);
What is the expected output? What do you see instead?
colladaObj should be drawn in the back, but it is not.
screenZOffset is ignored.
What version of the product are you using? On what operating system?
tested with svn trunk version. Windows XP.
Please provide any additional information below.
This bug is caused because screenZOffset is not properly copied.
The fix is to insert the following at line 156 of
away3d.loaders.Object3DLoader (as of rev 1398):
_result.screenZOffset = screenZOffset;
```
Original issue reported on code.google.com by `ichize...@gmail.com` on 11 Jun 2009 at 3:10 | non_test | screenzoffset does not work for collada issue by googlecodeexporter created on what steps will reproduce the problem var colladaobj collada load url colladaobj owncanvas true colladaobj screenzoffset view scene addchild colladaobj what is the expected output what do you see instead colladaobj should be drawn in the back but it is not screenzoffset is ignored what version of the product are you using on what operating system tested with svn trunk version windows xp please provide any additional information below this bug is caused because screenzoffset is not properly copied the fix is to insert the following at line of loaders as of rev result screenzoffset screenzoffset original issue reported on code google com by ichize gmail com on jun at | 0 |
556,854 | 16,493,054,421 | IssuesEvent | 2021-05-25 07:18:25 | OpenNebula/one | https://api.github.com/repos/OpenNebula/one | closed | Host ZOMBIE VM Template section does not automatically update | Category: Core & System Category: Drivers - Monitor Priority: Normal Sponsored Status: Accepted Type: Bug | /!\ To report a **security issue** please follow this procedure:
[https://github.com/OpenNebula/one/wiki/Vulnerability-Management-Process]
**Description**
The Host Template section containing the list of Zombie VMs doesn't get updated, even though the monitoring probes correctly report them periodically.
**To Reproduce**
- Create a VM
- Make it zombie
- wait (check for monitors.conf probe timers)
Example
```
[root@centos8-kvm-ssh-5-12-8-07bc5-0 ~]# tail -f /var/log/one/monitor.log
FREECPU=98
USEDCPU=2
NETRX=975210134
NETTX=33654758
HUGEPAGE = [ NODE_ID = "0", SIZE = "2048", FREE = "0" ]
HUGEPAGE = [ NODE_ID = "0", SIZE = "1048576", FREE = "0" ]
MEMORY_NODE = [ NODE_ID = "0", FREE = "173580", USED = "1086600" ]
Tue May 11 14:02:02 2021 [Z0][HMM][I]: Successfully monitored host: 0
Tue May 11 14:02:09 2021 [Z0][MDP][D]: [14:2:9] Recieved BEACON_HOST message from host 0:
1620741729
Tue May 11 14:02:24 2021 [Z0][MDP][D]: [14:2:24] Recieved BEACON_HOST message from host 1:
1620741744
Tue May 11 14:02:24 2021 [Z0][MDP][D]: [14:2:24] Recieved MONITOR_VM message from host 0:
VM = [ ID="2", DEPLOY_ID="499a9a65-c0ca-44ce-b407-0fa0375cf08a", MONITOR="Q1BVPSIxLjAxIgpNRU1PUlk9IjM0MTI4IgpORVRSWD0iMCIKTkVUVFg9IjAiCkRJU0tSREJZVEVTPSIwIgpESVNLV1JCWVRFUz0iMCIKRElTS1JESU9QUz0iMCIKRElTS1dSSU9QUz0iMCIK"]
VM = [ ID="3", DEPLOY_ID="49f512d9-6732-49b0-a90f-f024af189113", MONITOR="Q1BVPSIwLjAiCk1FTU9SWT0iMzM5MTIiCk5FVFJYPSIwIgpORVRUWD0iMCIKRElTS1JEQllURVM9IjAiCkRJU0tXUkJZVEVTPSIwIgpESVNLUkRJT1BTPSIwIgpESVNLV1JJT1BTPSIwIgo="]
VM = [ ID="4", DEPLOY_ID="61e04873-5e44-440c-ac56-5bb67bf19508", MONITOR="Q1BVPSIwLjAiCk1FTU9SWT0iMzQzNDAiCk5FVFJYPSIwIgpORVRUWD0iMCIKRElTS1JEQllURVM9IjAiCkRJU0tXUkJZVEVTPSIwIgpESVNLUkRJT1BTPSIwIgpESVNLV1JJT1BTPSIwIgo="]
Tue May 11 14:02:24 2021 [Z0][HMM][I]: Successfully monitored VM: 2
Tue May 11 14:02:24 2021 [Z0][HMM][I]: Successfully monitored VM: 3
Tue May 11 14:02:24 2021 [Z0][HMM][I]: Successfully monitored VM: 4
^C
[root@centos8-kvm-ssh-5-12-8-07bc5-0 ~]# onehost show 0 -x | grep -i zombie
<TOTAL_ZOMBIES><![CDATA[2]]></TOTAL_ZOMBIES>
<ZOMBIES><![CDATA[2, 3]]></ZOMBIES>
[root@centos8-kvm-ssh-5-12-8-07bc5-0 ~]# onehost show 0 -x | grep -i zombie
<TOTAL_ZOMBIES><![CDATA[2]]></TOTAL_ZOMBIES>
<ZOMBIES><![CDATA[2, 3]]></ZOMBIES>
```
In the log it can be seen that `poll.rb` sends the information to monitord. In this example the zombie VM one-4 is being detected by the probes but not being shown on the host, even after an OpenNebula daemon (oned) restart.
**Expected behavior**
Every time poll.rb sends information about the VMs running on the host the Host zombie template should be updated.
**Details**
- Version: 5.12
**Additional context**
Might be related to #5245
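As a hypothetical way to spot the stale template from the outside (an illustrative helper, not part of OpenNebula — class and method names are made up), one could extract the `ZOMBIES` list from the `onehost show <id> -x` XML shown above and compare it against what the probes report:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class ZombieList {
    // Pulls the comma-separated VM IDs out of <ZOMBIES><![CDATA[...]]></ZOMBIES>
    // in the host XML, e.g. "2, 3" -> ["2", "3"].
    public static List<String> zombies(String hostXml) {
        Matcher m = Pattern
                .compile("<ZOMBIES><!\\[CDATA\\[(.*?)\\]\\]></ZOMBIES>")
                .matcher(hostXml);
        if (!m.find() || m.group(1).trim().isEmpty()) {
            return Collections.emptyList();
        }
        return Arrays.stream(m.group(1).split(","))
                .map(String::trim)
                .collect(Collectors.toList());
    }
}
```

Run against the output captured in this report, it would return `["2", "3"]` while the probes are in fact reporting VMs 2, 3 and 4, making the stale entry visible.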
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [x] Branch created
- [x] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
| 1.0 | Host ZOMBIE VM Template section does not automatically update - /!\ To report a **security issue** please follow this procedure:
[https://github.com/OpenNebula/one/wiki/Vulnerability-Management-Process]
**Description**
The Host Template section containing the list of Zombie VMs doesn't get updated, even though the monitoring probes correctly report them periodically.
**To Reproduce**
- Create a VM
- Make it zombie
- wait (check for monitors.conf probe timers)
Example
```
[root@centos8-kvm-ssh-5-12-8-07bc5-0 ~]# tail -f /var/log/one/monitor.log
FREECPU=98
USEDCPU=2
NETRX=975210134
NETTX=33654758
HUGEPAGE = [ NODE_ID = "0", SIZE = "2048", FREE = "0" ]
HUGEPAGE = [ NODE_ID = "0", SIZE = "1048576", FREE = "0" ]
MEMORY_NODE = [ NODE_ID = "0", FREE = "173580", USED = "1086600" ]
Tue May 11 14:02:02 2021 [Z0][HMM][I]: Successfully monitored host: 0
Tue May 11 14:02:09 2021 [Z0][MDP][D]: [14:2:9] Recieved BEACON_HOST message from host 0:
1620741729
Tue May 11 14:02:24 2021 [Z0][MDP][D]: [14:2:24] Recieved BEACON_HOST message from host 1:
1620741744
Tue May 11 14:02:24 2021 [Z0][MDP][D]: [14:2:24] Recieved MONITOR_VM message from host 0:
VM = [ ID="2", DEPLOY_ID="499a9a65-c0ca-44ce-b407-0fa0375cf08a", MONITOR="Q1BVPSIxLjAxIgpNRU1PUlk9IjM0MTI4IgpORVRSWD0iMCIKTkVUVFg9IjAiCkRJU0tSREJZVEVTPSIwIgpESVNLV1JCWVRFUz0iMCIKRElTS1JESU9QUz0iMCIKRElTS1dSSU9QUz0iMCIK"]
VM = [ ID="3", DEPLOY_ID="49f512d9-6732-49b0-a90f-f024af189113", MONITOR="Q1BVPSIwLjAiCk1FTU9SWT0iMzM5MTIiCk5FVFJYPSIwIgpORVRUWD0iMCIKRElTS1JEQllURVM9IjAiCkRJU0tXUkJZVEVTPSIwIgpESVNLUkRJT1BTPSIwIgpESVNLV1JJT1BTPSIwIgo="]
VM = [ ID="4", DEPLOY_ID="61e04873-5e44-440c-ac56-5bb67bf19508", MONITOR="Q1BVPSIwLjAiCk1FTU9SWT0iMzQzNDAiCk5FVFJYPSIwIgpORVRUWD0iMCIKRElTS1JEQllURVM9IjAiCkRJU0tXUkJZVEVTPSIwIgpESVNLUkRJT1BTPSIwIgpESVNLV1JJT1BTPSIwIgo="]
Tue May 11 14:02:24 2021 [Z0][HMM][I]: Successfully monitored VM: 2
Tue May 11 14:02:24 2021 [Z0][HMM][I]: Successfully monitored VM: 3
Tue May 11 14:02:24 2021 [Z0][HMM][I]: Successfully monitored VM: 4
^C
[root@centos8-kvm-ssh-5-12-8-07bc5-0 ~]# onehost show 0 -x | grep -i zombie
<TOTAL_ZOMBIES><![CDATA[2]]></TOTAL_ZOMBIES>
<ZOMBIES><![CDATA[2, 3]]></ZOMBIES>
[root@centos8-kvm-ssh-5-12-8-07bc5-0 ~]# onehost show 0 -x | grep -i zombie
<TOTAL_ZOMBIES><![CDATA[2]]></TOTAL_ZOMBIES>
<ZOMBIES><![CDATA[2, 3]]></ZOMBIES>
```
In the log it can be seen that `poll.rb` sends the information to monitord. In this example the zombie VM one-4 is being detected by the probes but not being shown on the host, even after an OpenNebula daemon (oned) restart.
**Expected behavior**
Every time poll.rb sends information about the VMs running on the host the Host zombie template should be updated.
**Details**
- Version: 5.12
**Additional context**
Might be related to #5245
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [x] Branch created
- [x] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
| non_test | host zombie vm template section does not automatically update to report a security issue please follow this procedure description the host template section containing the list of zombie vms doesn t get even though the monitoring probes correctly report them periodically to reproduce create a vm make it zombie wait check for monitors conf probe timers example tail f var log one monitor log freecpu usedcpu netrx nettx hugepage hugepage memory node tue may successfully monitored host tue may recieved beacon host message from host tue may recieved beacon host message from host tue may recieved monitor vm message from host vm vm vm tue may successfully monitored vm tue may successfully monitored vm tue may successfully monitored vm c onehost show x grep i zombie onehost show x grep i zombie in the log it can be seen that poll rb sends the information to monitord in this example the zombie vm one is being detected by the probes but not being shown on the host even after an opennebula daemon restart oned expected behavior every time poll rb sends information about the vms running on the host the host zombie template should be updated details version additional context might be related to progress status branch created code committed to development branch testing qa documentation release notes resolved issues compatibility known issues code committed to upstream release hotfix branches documentation committed to upstream release hotfix branches | 0 |
20,579 | 4,568,574,986 | IssuesEvent | 2016-09-15 14:49:52 | stcorp/harp | https://api.github.com/repos/stcorp/harp | closed | Move documentation of operation list expressions to HARP manual | documentation | At the moment, operation list expressions are documented as part of the HARP command line tools that use them (harpconvert, harpfilter). This documentation should be moved to a central place (the HARP manual). The usage information of the command line tools should then simply refer to the HARP manual for details. Perhaps it could still include a couple of examples for common tasks.
| 1.0 | Move documentation of operation list expressions to HARP manual - At the moment, operation list expressions are documented as part of the HARP command line tools that use them (harpconvert, harpfilter). This documentation should be moved to a central place (the HARP manual). The usage information of the command line tools should then simply refer to the HARP manual for details. Perhaps it could still include a couple of examples for common tasks.
| non_test | move documentation of operation list expressions to harp manual at the moment operation list expressions are documented as part of the harp command line tools that use them harpconvert harpfilter this documentation should be moved to a central place the harp manual the usage information of the command line tools should then simply refer to the harp manual for details perhaps it could still include a couple of examples for common tasks | 0 |
53,966 | 6,354,102,263 | IssuesEvent | 2017-07-29 05:53:30 | angular/angular-cli | https://api.github.com/repos/angular/angular-cli | closed | ng test wont finish & hangs | command: test need: investigation priority: 2 (required) type: bug | ### Bug Report or Feature Request (mark with an `x`)
```
- [X] bug report -> please search issues before submitting
- [ ] feature request
```
### Versions.
```
@angular/cli: 1.2.3
node: 6.9.0
os: darwin x64
@angular/animations: 4.3.1
@angular/common: 4.3.1
@angular/compiler: 4.3.1
@angular/core: 4.3.1
@angular/forms: 4.3.1
@angular/http: 4.3.1
@angular/platform-browser: 4.3.1
@angular/platform-browser-dynamic: 4.3.1
@angular/router: 4.3.1
@angular/cli: 1.2.3
@angular/compiler-cli: 4.3.1
### my Karma.conf timeouts
captureTimeout: 180000,
browserDisconnectTimeout: 180000,
browserDisconnectTolerance: 3,
browserNoActivityTimeout: 180000,
reportSlowerThan: 2000
```
### Repro steps.
I have like ~300 unit test which used to execute without problem on previous version 1.1.2 and now when I turn them all on it just hangs and after couple minutes HTML reporter shows all success, but command line does not exists.
When I turn like 50 - 100, then it takes like 1.5 min. untill it start printing something to the console and then exists fine. What I see as difference even with 50 test compared what was before is it prints only final result like it start with ZERO
```
Chrome 59.0.3071 (Mac OS X 10.12.5): Executed 0 of 50 SUCCESS (0 secs / 0 secs)
```
and then
```
Chrome 59.0.3071 (Mac OS X 10.12.5): Executed 50 of 50 SUCCESS (0 secs / 0 secs)
```
But before it was more incremental I could see something like:
```
Chrome 59.0.3071 (Mac OS X 10.12.5): Executed 0 of 50 SUCCESS (0 secs / 0 secs)
Chrome 59.0.3071 (Mac OS X 10.12.5): Executed 10 of 50 SUCCESS (0 secs / 0 secs)
Chrome 59.0.3071 (Mac OS X 10.12.5): Executed 30 of 50 SUCCESS (0 secs / 0 secs)
Chrome 59.0.3071 (Mac OS X 10.12.5): Executed 45 of 50 SUCCESS (0 secs / 0 secs)
..
```
It was more progress.
When I execute each individually all works.
### Desired functionality.
I wish to be able to execute again all my tests
Any pointers I can execute locally to identify the cause?
| 1.0 | ng test wont finish & hangs - ### Bug Report or Feature Request (mark with an `x`)
```
- [X] bug report -> please search issues before submitting
- [ ] feature request
```
### Versions.
```
@angular/cli: 1.2.3
node: 6.9.0
os: darwin x64
@angular/animations: 4.3.1
@angular/common: 4.3.1
@angular/compiler: 4.3.1
@angular/core: 4.3.1
@angular/forms: 4.3.1
@angular/http: 4.3.1
@angular/platform-browser: 4.3.1
@angular/platform-browser-dynamic: 4.3.1
@angular/router: 4.3.1
@angular/cli: 1.2.3
@angular/compiler-cli: 4.3.1
### my Karma.conf timeouts
captureTimeout: 180000,
browserDisconnectTimeout: 180000,
browserDisconnectTolerance: 3,
browserNoActivityTimeout: 180000,
reportSlowerThan: 2000
```
### Repro steps.
I have like ~300 unit test which used to execute without problem on previous version 1.1.2 and now when I turn them all on it just hangs and after couple minutes HTML reporter shows all success, but command line does not exists.
When I turn like 50 - 100, then it takes like 1.5 min. untill it start printing something to the console and then exists fine. What I see as difference even with 50 test compared what was before is it prints only final result like it start with ZERO
```
Chrome 59.0.3071 (Mac OS X 10.12.5): Executed 0 of 50 SUCCESS (0 secs / 0 secs)
```
and then
```
Chrome 59.0.3071 (Mac OS X 10.12.5): Executed 50 of 50 SUCCESS (0 secs / 0 secs)
```
But before it was more incremental I could see something like:
```
Chrome 59.0.3071 (Mac OS X 10.12.5): Executed 0 of 50 SUCCESS (0 secs / 0 secs)
Chrome 59.0.3071 (Mac OS X 10.12.5): Executed 10 of 50 SUCCESS (0 secs / 0 secs)
Chrome 59.0.3071 (Mac OS X 10.12.5): Executed 30 of 50 SUCCESS (0 secs / 0 secs)
Chrome 59.0.3071 (Mac OS X 10.12.5): Executed 45 of 50 SUCCESS (0 secs / 0 secs)
..
```
It was more progress.
When I execute each individually all works.
### Desired functionality.
I wish to be able to execute again all my tests
Any pointers I can execute locally to identify the cause?
| test | ng test wont finish hangs bug report or feature request mark with an x bug report please search issues before submitting feature request versions angular cli node os darwin angular animations angular common angular compiler angular core angular forms angular http angular platform browser angular platform browser dynamic angular router angular cli angular compiler cli my karma conf timeouts capturetimeout browserdisconnecttimeout browserdisconnecttolerance browsernoactivitytimeout reportslowerthan repro steps i have like unit test which used to execute without problem on previous version and now when i turn them all on it just hangs and after couple minutes html reporter shows all success but command line does not exists when i turn like then it takes like min untill it start printing something to the console and then exists fine what i see as difference even with test compared what was before is it prints only final result like it start with zero chrome mac os x executed of success secs secs and then chrome mac os x executed of success secs secs but before it was more incremental i could see something like chrome mac os x executed of success secs secs chrome mac os x executed of success secs secs chrome mac os x executed of success secs secs chrome mac os x executed of success secs secs it was more progress when i execute each individually all works desired functionality i wish to be able to execute again all my tests any pointers i can execute locally to identify the cause | 1 |
831,270 | 32,043,609,296 | IssuesEvent | 2023-09-22 21:55:09 | Laravel-Backpack/community-forum | https://api.github.com/repos/Laravel-Backpack/community-forum | closed | [Refactor] Avoid var_exports in favor of using ternary operators | Priority: COULD refactoring | Avoid var_exports in favor of using ternary operators
Related with https://github.com/Laravel-Backpack/CRUD/pull/3334 | 1.0 | [Refactor] Avoid var_exports in favor of using ternary operators - Avoid var_exports in favor of using ternary operators
Related with https://github.com/Laravel-Backpack/CRUD/pull/3334 | non_test | avoid var exports in favor of using ternary operators avoid var exports in favor of using ternary operators related with | 0 |
667,099 | 22,408,991,501 | IssuesEvent | 2022-06-18 12:28:31 | azerothcore/azerothcore-wotlk | https://api.github.com/repos/azerothcore/azerothcore-wotlk | closed | The Barrens - Brontus Missing Patrol Pattern | 1-19 Priority-Trivial | <!--
BEFORE you fill out an issue, make these checks:
- make sure you use an enUS client. We will just close your issue if you use enGB.
- close wow, clear your cache by deleting the CACHE folder in you wow directory, try to reproduce the bug
- if you have a suggestion, like increasing despawn timers, this is not an issue. Tell us in #suggestions on discord instead.
<!-- IF YOU DO NOT FILL OUT THIS TEMPLATE, WE WILL CLOSE YOUR ISSUE! -->
<!-- TYPE WHERE IT SAYS "TYPE HERE" -->
<!-- WRITE A RELEVANT TITLE -->
<!-- YOU CAN DRAG AND DROP IMAGES AND CONTROL-V SCREENSHOTS/VIDEOS INTO THIS REPORT -->
<!-- ATTENTION: WRITE ACCURATE REPORTS THAT INCLUDE VALID SOURCES -->
<!-- Example of an INVALID report: "this value is too low, we should increase it" -->
<!-- Example of a VALID report: "this value is X but it should be Y instead as explained in this link/video" -->
#### WHAT CLIENT DO YOU PLAY ON?
- [ ] enGB
- [x] enUS
- [ ] other (specify)
##### FACTION
<!-- AFTER YOU OPEN THE BUG REPORT, SELECT YOUR FACTION -->
- [x] Alliance
- [x] Horde
##### CONTENT PHASE:
<!-- AFTER YOU OPEN THE BUG REPORT, SELECT THE CONTENT PHASE (OR SELECT GENERIC) -->
- [ ] Generic
- [x] 1-19
- [ ] 20-29
- [ ] 30-39
- [ ] 40-49
- [ ] 50-59
##### CURRENT BEHAVIOUR:
<!-- Describe the bug in detail. Database to link spells, NPCs, quests etc: https://wowgaming.altervista.org/aowow/ -->
Original report: https://github.com/chromiecraft/chromiecraft/issues/968
Brontus is stationary in his spawn point.

Tester: confirmed.

##### EXPECTED BLIZZLIKE BEHAVIOUR:
<!-- Describe how it should be working without the bug. Link to evidence if possible such as YouTube videos or WoWHead comments from the time. -->
He should be moving around.
Going by wowhead comments, he should be patrolling around with a group of 3 other kodos.
https://classic.wowhead.com/npc=5827/brontus#comments
https://www.wowhead.com/npc=5827/brontus#comments
presumably one of these groups that patrols in barrens:
northern one

GUID 15144 for Wooly Kodo
southern one:

GUID 15139 (one of the kodos)
##### SOURCE:
<!-- HEADS UP: include sources in your bug report which are relevant to the 3.3.5a game version,
we will close any bug like "X should be changed to Y" reported without sources -->
##### STEPS TO REPRODUCE THE PROBLEM:
<!-- Describe precisely how to reproduce the bug so we can fix it or confirm its existence:
- Which commands to use? Which NPC to teleport to?
- Do we need to have debug flags on Cmake?
- Do we need to look at the console while the bug happens?
- Other steps
- Use the ingame commands to identify the unique GUID of an ore/herb/npc: .npc info / .gobject near
-->
1. Teleport to GUID 51815
2. Observe his stationary nature
##### EXTRA NOTES:
<!--
Any information that can help the developers to identify and fix the issue should be put here.
Examples:
Links to items/NPCs/quests from https://wowgaming.altervista.org/aowow/
-->
<!--Thank you for your report. Please click submit new issue below.-->
<!-----------Remember to tick all relevant boxes when done!---------->
<!------------------------------------------------------------------->
<!------------------------------------------------------------------->
<!------------------ DO NOT MODIFY THE TEXT BELOW ------------------->
<!------------------------------------------------------------------->
<!------------------------------------------------------------------->
##### AC HASH/COMMIT:
https://github.com/chromiecraft/azerothcore-wotlk/commit/34da0cda5195dede48cb0406b23248330d6249cb
##### OPERATING SYSTEM:
Ubuntu 20.04
##### MODULES:
- [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot)
- [mod-cfbg](https://github.com/azerothcore/mod-cfbg)
- [mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings)
- [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset)
- [mod-eluna-lua-engine](https://github.com/azerothcore/mod-eluna-lua-engine)
- [mod-ip-tracker](https://github.com/azerothcore/mod-ip-tracker),
- [mod-low-level-arena](https://github.com/azerothcore/mod-low-level-arena)
- [mod-multi-client-check](https://github.com/azerothcore/mod-multi-client-check)
- [mod-pvp-titles](https://github.com/azerothcore/mod-pvp-titles)
- [mod-queue-list-cache](https://github.com/azerothcore/mod-queue-list-cache)
- [mod-server-auto-shutdown](https://github.com/azerothcore/mod-server-auto-shutdown)
- [lua-CarbonCopy](https://github.com/55Honey/Acore_CarbonCopy)
- [lua-LevelUpReward](https://github.com/55Honey/Acore_LevelUpReward)
- [lua-send-and-bind](https://github.com/55Honey/Acore_SendAndBind)
- [lua-Zonecheck](https://github.com/55Honey/acore_Zonecheck)
##### OTHER CUSTOMIZATIONS:
None.
##### SERVER:
ChromieCraft
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/99451439-the-barrens-brontus-missing-patrol-pattern?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github).
</bountysource-plugin> | 1.0 | The Barrens - Brontus Missing Patrol Pattern - <!--
BEFORE you fill out an issue, make these checks:
- make sure you use an enUS client. We will just close your issue if you use enGB.
- close wow, clear your cache by deleting the CACHE folder in you wow directory, try to reproduce the bug
- if you have a suggestion, like increasing despawn timers, this is not an issue. Tell us in #suggestions on discord instead.
<!-- IF YOU DO NOT FILL OUT THIS TEMPLATE, WE WILL CLOSE YOUR ISSUE! -->
<!-- TYPE WHERE IT SAYS "TYPE HERE" -->
<!-- WRITE A RELEVANT TITLE -->
<!-- YOU CAN DRAG AND DROP IMAGES AND CONTROL-V SCREENSHOTS/VIDEOS INTO THIS REPORT -->
<!-- ATTENTION: WRITE ACCURATE REPORTS THAT INCLUDE VALID SOURCES -->
<!-- Example of an INVALID report: "this value is too low, we should increase it" -->
<!-- Example of a VALID report: "this value is X but it should be Y instead as explained in this link/video" -->
#### WHAT CLIENT DO YOU PLAY ON?
- [ ] enGB
- [x] enUS
- [ ] other (specify)
##### FACTION
<!-- AFTER YOU OPEN THE BUG REPORT, SELECT YOUR FACTION -->
- [x] Alliance
- [x] Horde
##### CONTENT PHASE:
<!-- AFTER YOU OPEN THE BUG REPORT, SELECT THE CONTENT PHASE (OR SELECT GENERIC) -->
- [ ] Generic
- [x] 1-19
- [ ] 20-29
- [ ] 30-39
- [ ] 40-49
- [ ] 50-59
##### CURRENT BEHAVIOUR:
<!-- Describe the bug in detail. Database to link spells, NPCs, quests etc: https://wowgaming.altervista.org/aowow/ -->
Original report: https://github.com/chromiecraft/chromiecraft/issues/968
Brontus is stationary in his spawn point.

Tester: confirmed.

##### EXPECTED BLIZZLIKE BEHAVIOUR:
<!-- Describe how it should be working without the bug. Link to evidence if possible such as YouTube videos or WoWHead comments from the time. -->
He should be moving around.
Going by wowhead comments, he should be patrolling around with a group of 3 other kodos.
https://classic.wowhead.com/npc=5827/brontus#comments
https://www.wowhead.com/npc=5827/brontus#comments
presumably one of these groups that patrols in barrens:
northern one

GUID 15144 for Wooly Kodo
southern one:

GUID 15139 (one of the kodos)
##### SOURCE:
<!-- HEADS UP: include sources in your bug report which are relevant to the 3.3.5a game version,
we will close any bug like "X should be changed to Y" reported without sources -->
##### STEPS TO REPRODUCE THE PROBLEM:
<!-- Describe precisely how to reproduce the bug so we can fix it or confirm its existence:
- Which commands to use? Which NPC to teleport to?
- Do we need to have debug flags on Cmake?
- Do we need to look at the console while the bug happens?
- Other steps
- Use the ingame commands to identify the unique GUID of an ore/herb/npc: .npc info / .gobject near
-->
1. Teleport to GUID 51815
2. Observe his stationary nature
##### EXTRA NOTES:
<!--
Any information that can help the developers to identify and fix the issue should be put here.
Examples:
Links to items/NPCs/quests from https://wowgaming.altervista.org/aowow/
-->
<!--Thank you for your report. Please click submit new issue below.-->
<!-----------Remember to tick all relevant boxes when done!---------->
<!------------------------------------------------------------------->
<!------------------------------------------------------------------->
<!------------------ DO NOT MODIFY THE TEXT BELOW ------------------->
<!------------------------------------------------------------------->
<!------------------------------------------------------------------->
##### AC HASH/COMMIT:
https://github.com/chromiecraft/azerothcore-wotlk/commit/34da0cda5195dede48cb0406b23248330d6249cb
##### OPERATING SYSTEM:
Ubuntu 20.04
##### MODULES:
- [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot)
- [mod-cfbg](https://github.com/azerothcore/mod-cfbg)
- [mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings)
- [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset)
- [mod-eluna-lua-engine](https://github.com/azerothcore/mod-eluna-lua-engine)
- [mod-ip-tracker](https://github.com/azerothcore/mod-ip-tracker),
- [mod-low-level-arena](https://github.com/azerothcore/mod-low-level-arena)
- [mod-multi-client-check](https://github.com/azerothcore/mod-multi-client-check)
- [mod-pvp-titles](https://github.com/azerothcore/mod-pvp-titles)
- [mod-queue-list-cache](https://github.com/azerothcore/mod-queue-list-cache)
- [mod-server-auto-shutdown](https://github.com/azerothcore/mod-server-auto-shutdown)
- [lua-CarbonCopy](https://github.com/55Honey/Acore_CarbonCopy)
- [lua-LevelUpReward](https://github.com/55Honey/Acore_LevelUpReward)
- [lua-send-and-bind](https://github.com/55Honey/Acore_SendAndBind)
- [lua-Zonecheck](https://github.com/55Honey/acore_Zonecheck)
##### OTHER CUSTOMIZATIONS:
None.
##### SERVER:
ChromieCraft
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/99451439-the-barrens-brontus-missing-patrol-pattern?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github).
</bountysource-plugin> | non_test | the barrens brontus missing patrol pattern before you fill out an issue make these checks make sure you use an enus client we will just close your issue if you use engb close wow clear your cache by deleting the cache folder in you wow directory try to reproduce the bug if you have a suggestion like increasing despawn timers this is not an issue tell us in suggestions on discord instead what client do you play on engb enus other specify faction alliance horde content phase generic current behaviour original report brontus is stationary in his spawn point tester confirmed expected blizzlike behaviour he should be moving around going by wowhead comments he should be patrolling around with a group of other kodos presumably one of these groups that patrols in barrens northern one guid for wooly kodo southern one guid one of the kodos source heads up include sources in your bug report which are relevant to the game version we will close any bug like x should be changed to y reported without sources steps to reproduce the problem describe precisely how to reproduce the bug so we can fix it or confirm its existence which commands to use which npc to teleport to do we need to have debug flags on cmake do we need to look at the console while the bug happens other steps use the ingame commands to identify the unique guid of an ore herb npc npc info gobject near teleport to guid observe his stationary nature extra notes any information that can help the developers to identify and fix the issue should be put here examples links to items npcs quests from ac hash commit operating system ubuntu modules other customizations none server chromiecraft want to back this issue we accept bounties via | 0 |
711,423 | 24,463,265,531 | IssuesEvent | 2022-10-07 13:03:34 | cloudflare/cloudflared | https://api.github.com/repos/cloudflare/cloudflared | opened | ๐ 502 HTTP errors on large number of files | Type: Bug Priority: Normal | **Describe the bug**
We are using cloudflared tunnels to reach some internal services secured using Zero trust and Access applications.
One of them is Kibana running with ECK.
Randomly we fail to load kibana UI which is secured using a kuberntes deployed cloudflare tunnel on the same cluster then Kibana.
Some JS files are impossible to load leading to 502.
If I use a port-forward eveyrthing is loading fine !
on the other side if I use a cloudflare tunnel directly deployed on EC2 everything is fine. So my theory is that a cloudflared deployed on Kubernetes can have some internal issues. But even using the tracing loglevel impossible to have a clear reason :
```
2022-10-07T12:05:06Z DBG CF-RAY: 756680c7b851d2f5-CDG Request content length 0
2022-10-07T12:05:06Z DBG CF-RAY: 756680c7b851d2f5-CDG Status: 200 OK served by ingress 8
2022-10-07T12:05:06Z DBG CF-RAY: 756680c7b851d2f5-CDG Response Headers map[Cache-Control:[max-age=31536000] Connection:[keep-alive] Content-Encoding:[gzip] Content-Type:[application/javascript; charset=utf-8] Date:[Fri, 07 Oct 2022 12:05:06 GMT] Kbn-License-Sig:[0bbc5af556468fe9fe09e1957c8f92785ed89b0239b47bc58124941e84cf5a83] Kbn-Name:[kube-logging] Keep-Alive:[timeout=120] Referrer-Policy:[no-referrer-when-downgrade] Vary:[accept-encoding] X-Content-Type-Options:[nosniff]]
2022-10-07T12:05:06Z DBG CF-RAY: 756680c7b851d2f5-CDG Response content length unknown
```
But we got some 502 on web browser side.
**To Reproduce**
Steps to reproduce the behavior:
1. Configure a ECK stack and try to reach the kibana UI
2. Deploy a cloudflared tunnel to on Kubernetes configured to access Kibana UI
If it's an issue with Cloudflare Tunnel:
4. Tunnel ID : `9e216a1f-1828-4e91-8bdd-0c3a50ddb772`
5. cloudflared config:
```
- hostname: logs.internal.mydomain.co
service: https://kube-logging-kb-http.kube-logging:5601
originRequest:
noTLSVerify: true
```
**Expected behavior**
We should be able to load the Kibana UI without any issue
**Environment and versions**
- OS: Debian GNU/Linux 11
- Architecture: AMD
- Version: 2022.10.0
**Logs and errors**
502 errors when using cloudflared kubernetes pod tunnel.
**Additional context**
`CF-RAY: 756680c7b851d2f5-CDG`
I've tried to set those setting on the clouflared kubernetes pod but nothing change.
```
sysctl -w net.core.rmem_max=4000000
sysctl -w net.core.somaxconn=4096
``` | 1.0 | ๐ 502 HTTP errors on large number of files - **Describe the bug**
We are using cloudflared tunnels to reach some internal services secured using Zero trust and Access applications.
One of them is Kibana running with ECK.
Randomly we fail to load kibana UI which is secured using a kuberntes deployed cloudflare tunnel on the same cluster then Kibana.
Some JS files are impossible to load leading to 502.
If I use a port-forward eveyrthing is loading fine !
on the other side if I use a cloudflare tunnel directly deployed on EC2 everything is fine. So my theory is that a cloudflared deployed on Kubernetes can have some internal issues. But even using the tracing loglevel impossible to have a clear reason :
```
2022-10-07T12:05:06Z DBG CF-RAY: 756680c7b851d2f5-CDG Request content length 0
2022-10-07T12:05:06Z DBG CF-RAY: 756680c7b851d2f5-CDG Status: 200 OK served by ingress 8
2022-10-07T12:05:06Z DBG CF-RAY: 756680c7b851d2f5-CDG Response Headers map[Cache-Control:[max-age=31536000] Connection:[keep-alive] Content-Encoding:[gzip] Content-Type:[application/javascript; charset=utf-8] Date:[Fri, 07 Oct 2022 12:05:06 GMT] Kbn-License-Sig:[0bbc5af556468fe9fe09e1957c8f92785ed89b0239b47bc58124941e84cf5a83] Kbn-Name:[kube-logging] Keep-Alive:[timeout=120] Referrer-Policy:[no-referrer-when-downgrade] Vary:[accept-encoding] X-Content-Type-Options:[nosniff]]
2022-10-07T12:05:06Z DBG CF-RAY: 756680c7b851d2f5-CDG Response content length unknown
```
But we got some 502 on web browser side.
**To Reproduce**
Steps to reproduce the behavior:
1. Configure a ECK stack and try to reach the kibana UI
2. Deploy a cloudflared tunnel to on Kubernetes configured to access Kibana UI
If it's an issue with Cloudflare Tunnel:
4. Tunnel ID : `9e216a1f-1828-4e91-8bdd-0c3a50ddb772`
5. cloudflared config:
```
- hostname: logs.internal.mydomain.co
service: https://kube-logging-kb-http.kube-logging:5601
originRequest:
noTLSVerify: true
```
**Expected behavior**
We should be able to load the Kibana UI without any issue
**Environment and versions**
- OS: Debian GNU/Linux 11
- Architecture: AMD
- Version: 2022.10.0
**Logs and errors**
502 errors when using cloudflared kubernetes pod tunnel.
**Additional context**
`CF-RAY: 756680c7b851d2f5-CDG`
I've tried to set those setting on the clouflared kubernetes pod but nothing change.
```
sysctl -w net.core.rmem_max=4000000
sysctl -w net.core.somaxconn=4096
``` | non_test | ๐ http errors on large number of files describe the bug we are using cloudflared tunnels to reach some internal services secured using zero trust and access applications one of them is kibana running with eck randomly we fail to load kibana ui which is secured using a kuberntes deployed cloudflare tunnel on the same cluster then kibana some js files are impossible to load leading to if i use a port forward eveyrthing is loading fine on the other side if i use a cloudflare tunnel directly deployed on everything is fine so my theory is that a cloudflared deployed on kubernetes can have some internal issues but even using the tracing loglevel impossible to have a clear reason dbg cf ray cdg request content length dbg cf ray cdg status ok served by ingress dbg cf ray cdg response headers map connection content encoding content type date kbn license sig kbn name keep alive referrer policy vary x content type options dbg cf ray cdg response content length unknown but we got some on web browser side to reproduce steps to reproduce the behavior configure a eck stack and try to reach the kibana ui deploy a cloudflared tunnel to on kubernetes configured to access kibana ui if it s an issue with cloudflare tunnel tunnel id cloudflared config hostname logs internal mydomain co service originrequest notlsverify true expected behavior we should be able to load the kibana ui without any issue environment and versions os debian gnu linux architecture amd version logs and errors errors when using cloudflared kubernetes pod tunnel additional context cf ray cdg i ve tried to set those setting on the clouflared kubernetes pod but nothing change sysctl w net core rmem max sysctl w net core somaxconn | 0 |