## Dataset columns

| column | dtype | observed range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 4 – 112 |
| repo_url | string | length 33 – 141 |
| action | string | 3 classes |
| title | string | length 1 – 1.02k |
| labels | string | length 4 – 1.54k |
| body | string | length 1 – 262k |
| index | string | 17 classes |
| text_combine | string | length 95 – 262k |
| label | string | 2 classes |
| text | string | length 96 – 252k |
| binary_label | int64 | 0 – 1 |
### Row 193,431

- id: 14,652,988,708
- type: IssuesEvent
- created_at: 2020-12-28 04:11:55
- repo: github-vet/rangeloop-pointer-findings
- repo_url: https://api.github.com/repos/github-vet/rangeloop-pointer-findings
- action: closed
- title: aliliin/Learn-golang: src/learn/ch15/remote_packge/remote_package_test.go; 4 LoC
- labels: fresh test tiny
- body:
Found a possible issue in [aliliin/Learn-golang](https://www.github.com/aliliin/Learn-golang) at [src/learn/ch15/remote_packge/remote_package_test.go](https://github.com/aliliin/Learn-golang/blob/43a913573ae4e276e14bbf43a1cf17a6dd990868/src/learn/ch15/remote_packge/remote_package_test.go#L36-L39)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to stu is reassigned at line 37
[Click here to see the code in its original context.](https://github.com/aliliin/Learn-golang/blob/43a913573ae4e276e14bbf43a1cf17a6dd990868/src/learn/ch15/remote_packge/remote_package_test.go#L36-L39)
<details>
<summary>Click here to show the 4 line(s) of Go which triggered the analyzer.</summary>
```go
for _, stu := range stus {
m[stu.Name] = &stu
fmt.Println(stu)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 43a913573ae4e276e14bbf43a1cf17a6dd990868
- index: 1.0
- text_combine: title + body above, repeated verbatim (omitted)
- label: test
- text: lowercased, punctuation-stripped copy of text_combine (omitted)
- binary_label: 1
|
### Row 145,556

- id: 11,698,676,206
- type: IssuesEvent
- created_at: 2020-03-06 14:19:24
- repo: MachoThemes/modula-importer
- repo_url: https://api.github.com/repos/MachoThemes/modula-importer
- action: closed
- title: Prepare for launch
- labels: enhancement needs testing
- body:
After you finish the 2 remaining issues:
- Verify that everything is sanitized OK
- Change the text domain to modula-best-grid-gallery
- Add to Modula Lite 2.2.7
- index: 1.0
- text_combine: title + body above, repeated verbatim (omitted)
- label: test
- text: lowercased, punctuation-stripped copy of text_combine (omitted)
- binary_label: 1
|
### Row 30,675

- id: 4,643,418,035
- type: IssuesEvent
- created_at: 2016-09-30 13:26:37
- repo: brave/browser-laptop
- repo_url: https://api.github.com/repos/brave/browser-laptop
- action: closed
- title: Manual tests for Windows ia-32 0.12.3 RC2
- labels: OS/windows tests
- body:
## Installer
1. [x] Check that installer is close to the size of last release.
2. [x] Check signature: on OS X, run `spctl --assess --verbose /Applications/Brave.app/` and make sure it returns `accepted`. On Windows, right-click the installer exe, go to Properties, open the Digital Signatures tab, and double-click the signature. Make sure the popup window says "The digital signature is OK".
3. [x] Check Brave, electron, and libchromiumcontent version in About and make sure it is EXACTLY as expected.
## Data
1. [x] Make sure that data from the last version appears in the new version OK.
2. [x] Test that the previous version's cookies are preserved in the next version.
## About pages
1. [x] Test that about:adblock loads
2. [x] Test that about:autofill loads
3. [x] Test that about:bookmarks loads bookmarks
4. [x] Test that about:downloads loads downloads
5. [x] Test that about:extensions loads
6. [x] Test that about:history loads history
7. [x] Test that about:passwords loads
8. [x] Test that about:preferences changing a preference takes effect right away
9. [x] Test that about:preferences language change takes effect on re-start
## Bookmarks
1. [ ] Test that creating a bookmark on the bookmarks toolbar works
2. [ ] Test that creating a bookmark folder on the bookmarks toolbar works
3. [ ] Test that moving a bookmark into a folder by drag and drop on the bookmarks folder works
4. [ ] Test that clicking a bookmark in the toolbar loads the bookmark.
5. [ ] Test that clicking a bookmark in a bookmark toolbar folder loads the bookmark.
## Context menus
1. [ ] Make sure context menu items in the URL bar work
2. [ ] Make sure context menu items on content work with no selected text.
3. [ ] Make sure context menu items on content work with selected text.
4. [ ] Make sure context menu items on content work inside an editable control (input, textarea, or contenteditable).
## Find on page
1. [ ] Ensure search box is shown with shortcut
2. [ ] Test successful find
3. [ ] Test forward and backward find navigation
4. [ ] Test failed find shows 0 results
5. [ ] Test match case find
## Geolocation
1. [ ] Check that https://developer.mozilla.org/en-US/docs/Web/API/Geolocation/Using_geolocation works
## Site hacks
1. [ ] Test https://www.twitch.tv/adobe sub-page loads a video and you can play it
## Downloads
1. [ ] Test that downloading a file works and that all actions on the download item work.
## Fullscreen
1. [ ] Test that entering a full-screen window via View -> Toggle Full Screen works, and that exiting works the same way (not Esc).
2. [ ] Test that entering HTML5 full screen works. And Esc to go back. (youtube.com)
## Tabs and Pinning
1. [ ] Test that tabs are pinnable
2. [ ] Test that tabs are unpinnable
3. [ ] Test that tabs are draggable to same tabset
4. [ ] Test that tabs are draggable to alternate tabset
## Zoom
1. [ ] Test zoom in / out shortcut works
2. [ ] Test hamburger menu zooms.
3. [ ] Test zoom saved when you close the browser and restore on a single site.
4. [ ] Test zoom saved when you navigate within a single origin site.
5. [ ] Test that navigating to a different origin resets the zoom
## Bravery settings
1. [ ] Check that HTTPS Everywhere works by loading http://www.apple.com
2. [ ] Turning HTTPS Everywhere off and shields off both disable the redirect to https://www.apple.com
3. [ ] Check that ad replacement works on http://slashdot.org
4. [ ] Check that toggling to blocking and allow ads works as expected.
5. [ ] Test that clicking through a cert error in https://badssl.com/ works.
6. [ ] Test that Safe Browsing works (http://excellentmovies.net/)
7. [ ] Turning Safe Browsing off and shields off both disable safe browsing for http://excellentmovies.net/.
8. [ ] Visit https://brianbondy.com/ and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
9. [ ] Test that about:preferences default Bravery settings take effect on pages with no site settings.
10. [ ] Test that turning on fingerprinting protection in about:preferences shows 3 fingerprints blocked at https://jsfiddle.net/bkf50r8v/13/. Test that turning it off in the Bravery menu shows 0 fingerprints blocked.
11. [ ] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/9/ when 3rd party cookies are blocked and not blank when 3rd party cookies are unblocked.
12. [ ] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ when fingerprinting protection is on.
## Content tests
1. [ ] Go to https://brianbondy.com/ and click on the twitter icon on the top right. Test that context menus work in the new twitter tab.
2. [ ] Load twitter and click on a tweet so the popup div shows. Click to dismiss and repeat with another div. Make sure it shows.
3. [ ] Go to http://www.bennish.net/web-notifications.html and test that clicking on 'Show' pops up a notification asking for permission. Make sure that clicking 'Deny' leads to no notifications being shown.
4. [ ] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password shows up in `about:passwords`. Then reload https://trac.torproject.org/projects/tor/login and make sure the password is autofilled.
5. [ ] Open a github issue and type some misspellings, make sure they are underlined.
6. [ ] Make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text.
7. [ ] Make sure that Command + Click (Control + Click on Windows, Control + Click on Ubuntu) on a link opens a new tab but does NOT switch to it. Click on it and make sure it is already loaded.
8. [ ] Open an email on http://mail.google.com/ or inbox.google.com and click on a link. Make sure it works.
9. [ ] Test that PDF is loaded at http://www.orimi.com/pdf-test.pdf
10. [ ] Test that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run).
## Flash tests
1. [ ] Turn on Flash in about:preferences#security. Test that clicking on 'Install Flash' banner on myspace.com shows a notification to allow Flash and that the banner disappears when 'Allow' is clicked.
2. [ ] Test that flash placeholder appears on http://www.y8.com/games/superfighters
## Autofill tests
1. [ ] Test that autofill works on http://www.roboform.com/filling-test-all-fields
## Ledger
1. [ ] Create a wallet with a value other than $5 selected in the monthly budget dropdown. Click on the 'Add Funds' button and check that Coinbase transactions are blocked.
2. [ ] Remove all `ledger-*.json` files from `~/Library/Application\ Support/Brave/`. Go to the Payments tab in about:preferences, enable payments, click on `create wallet`. Check that the `add funds` button appears after a wallet is created.
3. [ ] Click on `add funds` and verify that adding funds through Coinbase increases the account balance.
4. [ ] Repeat the step above but add funds by scanning the QR code in a mobile bitcoin app instead of through Coinbase.
5. [ ] Visit nytimes.com for a few seconds and make sure it shows up in the Payments table.
6. [ ] Go to https://jsfiddle.net/LnwtLckc/5/ and click the register button. In the Payments tab, click `add funds`. Verify that the `transfer funds` button is visible and that clicking on `transfer funds` opens a jsfiddle URL in a new tab.
7. [ ] Go to https://jsfiddle.net/LnwtLckc/5/ and click `unregister`. Verify that the `transfer funds` button no longer appears in the `add funds` modal.
8. [ ] Check that disabling payments and enabling them again does not lose state.
## Per release specialty tests
- [x] Address-bar shifts at times ([#4309](https://github.com/brave/browser-laptop/issues/4309))
- [x] URL Bar resizes when refresh/stop button is loaded ([#4303](https://github.com/brave/browser-laptop/issues/4303))
- [ ] Change to brave server for our own hosted extensions ([#4123](https://github.com/brave/browser-laptop/issues/4123))
- [ ] Allow for auto updating extensions without releases extensions ([#4080](https://github.com/brave/browser-laptop/issues/4080))
- [ ] Install extensions remotely using default component installer instead of prepackaging them extensions ([#4081](https://github.com/brave/browser-laptop/issues/4081))
- [ ] Install / update extensions w/ component updater ([#4105](https://github.com/brave/browser-laptop/issues/4105))
- [x] pulldown on menubar cannot be opened until second click ([#4175](https://github.com/brave/browser-laptop/issues/4175))
- [x] Windows 7 - resize handle on top right corner is hard to click ([#4144](https://github.com/brave/browser-laptop/issues/4144))
- [x] Fixed draggable areas of window. ([#4296](https://github.com/brave/browser-laptop/issues/4296))
- [x] lots of regions on the titlebar can't be dragged but should be able to be dragged ([#4235](https://github.com/brave/browser-laptop/issues/4235))
- [x] Brave payments wrong contribution date ([#4058](https://github.com/brave/browser-laptop/issues/4058))
- [x] Reset old reconcileStamp when re-enabling Payments ([#4314](https://github.com/brave/browser-laptop/issues/4314))
- [x] Hide the load time from URL bar on all platforms if below the narrow breakpoint ([#4315](https://github.com/brave/browser-laptop/issues/4315))
- [x] hide page load time info in Address Bar below 600px window width enhancement ([#1046](https://github.com/brave/browser-laptop/issues/1046))
- [x] QR Code Popup from Add funds dialog scrolls independently from its parent ([#4043](https://github.com/brave/browser-laptop/issues/4043))
- [x] QR popup scrolling ([#4316](https://github.com/brave/browser-laptop/issues/4316))
- [ ] Ledger does not track windows created before the wallet ([#4274](https://github.com/brave/browser-laptop/issues/4274))
- [ ] ledger transaction history should have receipt links ([#3477](https://github.com/brave/browser-laptop/issues/3477))
- [ ] Payment history CSV receipt links #3477 (WIP) ([#4312](https://github.com/brave/browser-laptop/issues/4312))
- [ ] Enabling Block Scripts redirects twitter to Mobile site. ([#2884](https://github.com/brave/browser-laptop/issues/2884))
- [x] HTTPS count shown even when its disabled ([#4300](https://github.com/brave/browser-laptop/issues/4300))
- [x] use main frame url for webrequest first party url ([#4145](https://github.com/brave/browser-laptop/issues/4145))
- [x] unsafe use of firstPartyUrl ([#4137](https://github.com/brave/browser-laptop/issues/4137))
- [x] Bookmarks toolbar render is off when '>>' expander is used. ([#4272](https://github.com/brave/browser-laptop/issues/4272))
- [ ] Line should always appear when user scroll down ([#4085](https://github.com/brave/browser-laptop/issues/4085))
- [ ] While in fullscreen the navbar is not visible ([#4234](https://github.com/brave/browser-laptop/issues/4234))
- [x] Show top panel in fullscreen on mouse over ([#4193](https://github.com/brave/browser-laptop/issues/4193))
- [x] Update translation ([#4308](https://github.com/brave/browser-laptop/issues/4308))
- [x] No confirmation window of importing data from IE importer ([#4275](https://github.com/brave/browser-laptop/issues/4275))
- [x] Handle nested bookmarks folder ([#4293](https://github.com/brave/browser-laptop/issues/4293))
- [x] Importer should be able to handle nested folder from bookmarks properly ([#4291](https://github.com/brave/browser-laptop/issues/4291))
- [x] Make toolbar messages non-selectable ([#4306](https://github.com/brave/browser-laptop/issues/4306))
- [x] Make toolbar messages non-selectable ([#4126](https://github.com/brave/browser-laptop/issues/4126))
- [x] Update favicon of imported items ([#4270](https://github.com/brave/browser-laptop/issues/4270))
- [x] Update favicon from importer ([#4271](https://github.com/brave/browser-laptop/issues/4271))
- [x] Fix suggestions list overflow (unintentionally introduced when fixing resizing) ([#4299](https://github.com/brave/browser-laptop/issues/4299))
- [x] Suggestions Clipped ([#4298](https://github.com/brave/browser-laptop/issues/4298))
- [x] Add search shortcuts for MDN, GitHub, and Stack Overflow ([#4213](https://github.com/brave/browser-laptop/issues/4213))
- [x] Media queries for elements inside #urlInput need improving ([#4170](https://github.com/brave/browser-laptop/issues/4170))
- [x] Improvements to Title Bar ([#4188](https://github.com/brave/browser-laptop/issues/4284))
- [x] Shields and Windows button hidden in title mode when browser size is reduced titlebar ([#4241](https://github.com/brave/browser-laptop/issues/4241))
- [ ] Manually entered URL as bookmark doesn't show the bookmarked(orange) icon ([#4273](https://github.com/brave/browser-laptop/issues/4273))
- [ ] tracking issue: regressions caused by commit 35de6cf ([#4267](https://github.com/brave/browser-laptop/issues/4267))
## Session storage
Do not forget to make a backup of your entire `~/Library/Application\ Support/Brave` folder.
1. [ ] Temporarily move away your `~/Library/Application\ Support/Brave/session-store-1` and test that clean session storage works. (`%appdata%\Brave in Windows`, `./config/brave` in Ubuntu)
2. [ ] Test that windows and tabs restore when closed, including active tab.
3. [ ] Move away your entire `~/Library/Application\ Support/Brave` folder (`%appdata%\Brave in Windows`, `./config/brave` in Ubuntu)
## Cookie and Cache
1. [ ] Make a backup of your profile, turn on all clearing in preferences and shut down. Make sure when you bring the browser back up everything is gone that is specified.
2. [ ] Go to http://samy.pl/evercookie/ and set an evercookie. Check that going to prefs, clearing site data and cache, and going back to the Evercookie site does not remember the old evercookie value.
## Update tests
1. [ ] Test that updating using `BRAVE_UPDATE_VERSION=0.8.3` env variable works correctly.
|
1.0
|
Manual tests for Windows ia-32 0.12.3 RC2 - ## Installer
1. [x] Check that installer is close to the size of last release.
2. [x] Check signature: If OS Run `spctl --assess --verbose /Applications/Brave.app/` and make sure it returns `accepted`. If Windows right click on the installer exe and go to Properties, go to the Digital Signatures tab and double click on the signature. Make sure it says "The digital signature is OK" in the popup window.
3. [x] Check Brave, electron, and libchromiumcontent version in About and make sure it is EXACTLY as expected.
## Data
1. [x] Make sure that data from the last version appears in the new version OK.
2. [x] Test that the previous version's cookies are preserved in the next version.
## About pages
1. [x] Test that about:adblock loads
2. [x] Test that about:autofill loads
3. [x] Test that about:bookmarks loads bookmarks
4. [x] Test that about:downloads loads downloads
5. [x] Test that about:extensions loads
6. [x] Test that about:history loads history
7. [x] Test that about:passwords loads
8. [x] Test that about:preferences changing a preference takes effect right away
9. [x] Test that about:preferences language change takes effect on re-start
## Bookmarks
1. [ ] Test that creating a bookmark on the bookmarks toolbar works
2. [ ] Test that creating a bookmark folder on the bookmarks toolbar works
3. [ ] Test that moving a bookmark into a folder by drag and drop on the bookmarks folder works
4. [ ] Test that clicking a bookmark in the toolbar loads the bookmark.
5. [ ] Test that clicking a bookmark in a bookmark toolbar folder loads the bookmark.
## Context menus
1. [ ] Make sure context menu items in the URL bar work
2. [ ] Make sure context menu items on content work with no selected text.
3. [ ] Make sure context menu items on content work with selected text.
4. [ ] Make sure context menu items on content work inside an editable control (input, textarea, or contenteditable).
## Find on page
1. [ ] Ensure search box is shown with shortcut
2. [ ] Test successful find
3. [ ] Test forward and backward find navigation
4. [ ] Test failed find shows 0 results
5. [ ] Test match case find
## Geolocation
1. [ ] Check that https://developer.mozilla.org/en-US/docs/Web/API/Geolocation/Using_geolocation works
## Site hacks
1. [ ] Test https://www.twitch.tv/adobe sub-page loads a video and you can play it
## Downloads
1. [ ] Test downloading a file works and that all actions on the download item works.
## Fullscreen
1. [ ] Test that entering full screen window works View -> Toggle Full Screen. And exit back (Not Esc).
2. [ ] Test that entering HTML5 full screen works. And Esc to go back. (youtube.com)
## Tabs and Pinning
1. [ ] Test that tabs are pinnable
2. [ ] Test that tabs are unpinnable
3. [ ] Test that tabs are draggable to same tabset
4. [ ] Test that tabs are draggable to alternate tabset
## Zoom
1. [ ] Test zoom in / out shortcut works
2. [ ] Test hamburger menu zooms.
3. [ ] Test zoom saved when you close the browser and restore on a single site.
4. [ ] Test zoom saved when you navigate within a single origin site.
5. [ ] Test that navigating to a different origin resets the zoom
## Bravery settings
1. [ ] Check that HTTPS Everywhere works by loading http://www.apple.com
2. [ ] Turning HTTPS Everywhere off and shields off both disable the redirect to https://www.apple.com
3. [ ] Check that ad replacement works on http://slashdot.org
4. [ ] Check that toggling to blocking and allow ads works as expected.
5. [ ] Test that clicking through a cert error in https://badssl.com/ works.
6. [ ] Test that Safe Browsing works (http://excellentmovies.net/)
7. [ ] Turning Safe Browsing off and shields off both disable safe browsing for http://excellentmovies.net/.
8. [ ] Visit https://brianbondy.com/ and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
9. [ ] Test that about:preferences default Bravery settings take effect on pages with no site settings.
10. [ ] Test that turning on fingerprinting protection in about:preferences shows 3 fingerprints blocked at https://jsfiddle.net/bkf50r8v/13/. Test that turning it off in the Bravery menu shows 0 fingerprints blocked.
11. [ ] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/9/ when 3rd party cookies are blocked and not blank when 3rd party cookies are unblocked.
12. [ ] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ when fingerprinting protection is on.
## Content tests
1. [ ] Go to https://brianbondy.com/ and click on the twitter icon on the top right. Test that context menus work in the new twitter tab.
2. [ ] Load twitter and click on a tweet so the popup div shows. Click to dismiss and repeat with another div. Make sure it shows.
3. [ ] Go to http://www.bennish.net/web-notifications.html and test that clicking on 'Show' pops up a notification asking for permission. Make sure that clicking 'Deny' leads to no notifications being shown.
4. [ ] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password shows up in `about:passwords`. Then reload https://trac.torproject.org/projects/tor/login and make sure the password is autofilled.
5. [ ] Open a github issue and type some misspellings, make sure they are underlined.
6. [ ] Make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text.
7. [ ] Make sure that Command + Click (Control + Click on Windows, Control + Click on Ubuntu) on a link opens a new tab but does NOT switch to it. Click on it and make sure it is already loaded.
8. [ ] Open an email on http://mail.google.com/ or inbox.google.com and click on a link. Make sure it works.
9. [ ] Test that PDF is loaded at http://www.orimi.com/pdf-test.pdf
10. [ ] Test that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run).
## Flash tests
1. [ ] Turn on Flash in about:preferences#security. Test that clicking on 'Install Flash' banner on myspace.com shows a notification to allow Flash and that the banner disappears when 'Allow' is clicked.
2. [ ] Test that flash placeholder appears on http://www.y8.com/games/superfighters
## Autofill tests
1. [ ] Test that autofill works on http://www.roboform.com/filling-test-all-fields
## Ledger
1. [ ] Create a wallet with a value other than $5 selected in the monthly budget dropdown. Click on the 'Add Funds' button and check that Coinbase transactions are blocked.
2. [ ] Remove all `ledger-*.json` files from `~/Library/Application\ Support/Brave/`. Go to the Payments tab in about:preferences, enable payments, click on `create wallet`. Check that the `add funds` button appears after a wallet is created.
3. [ ] Click on `add funds` and verify that adding funds through Coinbase increases the account balance.
4. [ ] Repeat the step above but add funds by scanning the QR code in a mobile bitcoin app instead of through Coinbase.
5. [ ] Visit nytimes.com for a few seconds and make sure it shows up in the Payments table.
6. [ ] Go to https://jsfiddle.net/LnwtLckc/5/ and click the register button. In the Payments tab, click `add funds`. Verify that the `transfer funds` button is visible and that clicking on `transfer funds` opens a jsfiddle URL in a new tab.
7. [ ] Go to https://jsfiddle.net/LnwtLckc/5/ and click `unregister`. Verify that the `transfer funds` button no longer appears in the `add funds` modal.
8. [ ] Check that disabling payments and enabling them again does not lose state.
## Per release specialty tests
- [x] Address-bar shifts at times ([#4309](https://github.com/brave/browser-laptop/issues/4309))
- [x] URL Bar resizes when refresh/stop button is loaded ([#4303](https://github.com/brave/browser-laptop/issues/4303))
- [ ] Change to brave server for our own hosted extensions ([#4123](https://github.com/brave/browser-laptop/issues/4123))
- [ ] Allow for auto updating extensions without releases extensions ([#4080](https://github.com/brave/browser-laptop/issues/4080))
- [ ] Install extensions remotely using default component installer instead of prepackaging them extensions ([#4081](https://github.com/brave/browser-laptop/issues/4081))
- [ ] Install / update extensions w/ component updater ([#4105](https://github.com/brave/browser-laptop/issues/4105))
- [x] pulldown on menubar cannot be opened until second click ([#4175](https://github.com/brave/browser-laptop/issues/4175))
- [x] Windows 7 - resize handle on top right corner is hard to click ([#4144](https://github.com/brave/browser-laptop/issues/4144))
- [x] Fixed draggable areas of window. ([#4296](https://github.com/brave/browser-laptop/issues/4296))
- [x] lots of regions on the titlebar can't be dragged but should be able to be dragged ([#4235](https://github.com/brave/browser-laptop/issues/4235))
- [x] Brave payments wrong contribution date ([#4058](https://github.com/brave/browser-laptop/issues/4058))
- [x] Reset old reconcileStamp when re-enabling Payments ([#4314](https://github.com/brave/browser-laptop/issues/4314))
- [x] Hide the load time from URL bar on all platforms if below the narrow breakpoint ([#4315](https://github.com/brave/browser-laptop/issues/4315))
- [x] hide page load time info in Address Bar below 600px window width enhancement ([#1046](https://github.com/brave/browser-laptop/issues/1046))
- [x] QR Code Popup from Add funds dialog scrolls independently from its parent ([#4043](https://github.com/brave/browser-laptop/issues/4043))
- [x] QR popup scrolling ([#4316](https://github.com/brave/browser-laptop/issues/4316))
- [ ] Ledger does not track windows created before the wallet ([#4274](https://github.com/brave/browser-laptop/issues/4274))
- [ ] ledger transaction history should have receipt links ([#3477](https://github.com/brave/browser-laptop/issues/3477))
- [ ] Payment history CSV receipt links #3477 (WIP) ([#4312](https://github.com/brave/browser-laptop/issues/4312))
- [ ] Enabling Block Scripts redirects twitter to Mobile site. ([#2884](https://github.com/brave/browser-laptop/issues/2884))
- [x] HTTPS count shown even when its disabled ([#4300](https://github.com/brave/browser-laptop/issues/4300))
- [x] use main frame url for webrequest first party url ([#4145](https://github.com/brave/browser-laptop/issues/4145))
- [x] unsafe use of firstPartyUrl ([#4137](https://github.com/brave/browser-laptop/issues/4137))
- [x] Bookmarks toolbar render is off when '>>' expander is used. ([#4272](https://github.com/brave/browser-laptop/issues/4272))
- [ ] Line should always appear when user scroll down ([#4085](https://github.com/brave/browser-laptop/issues/4085))
- [ ] While in fullscreen the navbar is not visible ([#4234](https://github.com/brave/browser-laptop/issues/4234))
- [x] Show top panel in fullscreen on mouse over ([#4193](https://github.com/brave/browser-laptop/issues/4193))
- [x] Update translation ([#4308](https://github.com/brave/browser-laptop/issues/4308))
- [x] No confirmation window of importing data from IE importer ([#4275](https://github.com/brave/browser-laptop/issues/4275))
- [x] Handle nested bookmarks folder ([#4293](https://github.com/brave/browser-laptop/issues/4293))
- [x] Importer should be able to handle nested folder from bookmarks properly ([#4291](https://github.com/brave/browser-laptop/issues/4291))
- [x] Make toolbar messages non-selectable ([#4306](https://github.com/brave/browser-laptop/issues/4306))
- [x] Make toolbar messages non-selectable ([#4126](https://github.com/brave/browser-laptop/issues/4126))
- [x] Update favicon of imported items ([#4270](https://github.com/brave/browser-laptop/issues/4270))
- [x] Update favicon from importer ([#4271](https://github.com/brave/browser-laptop/issues/4271))
- [x] Fix suggestions list overflow (unintentionally introduced when fixing resizing) ([#4299](https://github.com/brave/browser-laptop/issues/4299))
- [x] Suggestions Clipped ([#4298](https://github.com/brave/browser-laptop/issues/4298))
- [x] Add search shortcuts for MDN, GitHub, and Stack Overflow ([#4213](https://github.com/brave/browser-laptop/issues/4213))
- [x] Media queries for elements inside #urlInput need improving ([#4170](https://github.com/brave/browser-laptop/issues/4170))
- [x] Improvements to Title Bar ([#4188](https://github.com/brave/browser-laptop/issues/4284))
- [x] Shields and Windows button hidden in title mode when browser size is reduced titlebar ([#4241](https://github.com/brave/browser-laptop/issues/4241))
- [ ] Manually entered URL as bookmark doesn't show the bookmarked(orange) icon ([#4273](https://github.com/brave/browser-laptop/issues/4273))
- [ ] tracking issue: regressions caused by commit 35de6cf ([#4267](https://github.com/brave/browser-laptop/issues/4267))
## Session storage
Do not forget to make a backup of your entire `~/Library/Application\ Support/Brave` folder.
1. [ ] Temporarily move away your `~/Library/Application\ Support/Brave/session-store-1` and test that clean session storage works. (`%appdata%\Brave in Windows`, `./config/brave` in Ubuntu)
2. [ ] Test that windows and tabs restore when closed, including active tab.
3. [ ] Move away your entire `~/Library/Application\ Support/Brave` folder (`%appdata%\Brave` in Windows, `./config/brave` in Ubuntu)
## Cookie and Cache
1. [ ] Make a backup of your profile, turn on all clearing in preferences and shut down. Make sure when you bring the browser back up everything is gone that is specified.
2. [ ] Go to http://samy.pl/evercookie/ and set an evercookie. Check that going to prefs, clearing site data and cache, and going back to the Evercookie site does not remember the old evercookie value.
## Update tests
1. [ ] Test that updating using `BRAVE_UPDATE_VERSION=0.8.3` env variable works correctly.
|
test
|
manual tests for windows ia installer check that installer is close to the size of last release check signature if os run spctl assess verbose applications brave app and make sure it returns accepted if windows right click on the installer exe and go to properties go to the digital signatures tab and double click on the signature make sure it says the digital signature is ok in the popup window check brave electron and libchromiumcontent version in about and make sure it is exactly as expected data make sure that data from the last version appears in the new version ok test that the previous version s cookies are preserved in the next version about pages test that about adblock loads test that about autofill loads test that about bookmarks loads bookmarks test that about downloads loads downloads test that about extensions loads test that about history loads history test that about passwords loads test that about preferences changing a preference takes effect right away test that about preferences language change takes effect on re start bookmarks test that creating a bookmark on the bookmarks toolbar works test that creating a bookmark folder on the bookmarks toolbar works test that moving a bookmark into a folder by drag and drop on the bookmarks folder works test that clicking a bookmark in the toolbar loads the bookmark test that clicking a bookmark in a bookmark toolbar folder loads the bookmark context menus make sure context menu items in the url bar work make sure context menu items on content work with no selected text make sure context menu items on content work with selected text make sure context menu items on content work inside an editable control input textarea or contenteditable find on page ensure search box is shown with shortcut test successful find test forward and backward find navigation test failed find shows results test match case find geolocation check that works site hacks test sub page loads a video and you can play it downloads test 
downloading a file works and that all actions on the download item works fullscreen test that entering full screen window works view toggle full screen and exit back not esc test that entering full screen works and esc to go back youtube com tabs and pinning test that tabs are pinnable test that tabs are unpinnable test that tabs are draggable to same tabset test that tabs are draggable to alternate tabset zoom test zoom in out shortcut works test hamburger menu zooms test zoom saved when you close the browser and restore on a single site test zoom saved when you navigate within a single origin site test that navigating to a different origin resets the zoom bravery settings check that https everywhere works by loading turning https everywhere off and shields off both disable the redirect to check that ad replacement works on check that toggling to blocking and allow ads works as expected test that clicking through a cert error in works test that safe browsing works turning safe browsing off and shields off both disable safe browsing for visit and then turn on script blocking nothing should load allow it from the script blocking ui in the url bar and it should work test that about preferences default bravery settings take effect on pages with no site settings test that turning on fingerprinting protection in about preferences shows fingerprints blocked at test that turning it off in the bravery menu shows fingerprints blocked test that party storage results are blank at when party cookies are blocked and not blank when party cookies are unblocked test that audio fingerprint is blocked at when fingerprinting protection is on content tests go to and click on the twitter icon on the top right test that context menus work in the new twitter tab load twitter and click on a tweet so the popup div shows click to dismiss and repeat with another div make sure it shows go to and test that clicking on show pops up a notification asking for permission make sure that clicking 
deny leads to no notifications being shown go to and make sure that the password can be saved make sure the saved password shows up in about passwords then reload and make sure the password is autofilled open a github issue and type some misspellings make sure they are underlined make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text make sure that command click control click on windows control click on ubuntu on a link opens a new tab but does not switch to it click on it and make sure it is already loaded open an email on or inbox google com and click on a link make sure it works test that pdf is loaded at test that shows up as grey not red no mixed content scripts are run flash tests turn on flash in about preferences security test that clicking on install flash banner on myspace com shows a notification to allow flash and that the banner disappears when allow is clicked test that flash placeholder appears on autofill tests test that autofill works on ledger create a wallet with a value other than selected in the monthly budget dropdown click on the add funds button and check that coinbase transactions are blocked remove all ledger json files from library application support brave go to the payments tab in about preferences enable payments click on create wallet check that the add funds button appears after a wallet is created click on add funds and verify that adding funds through coinbase increases the account balance repeat the step above but add funds by scanning the qr code in a mobile bitcoin app instead of through coinbase visit nytimes com for a few seconds and make sure it shows up in the payments table go to and click the register button in the payments tab click add funds verify that the transfer funds button is visible and that clicking on transfer funds opens a jsfiddle url in a new tab go to and click unregister verify that the transfer funds button no longer appears in the 
add funds modal check that disabling payments and enabling them again does not lose state per release specialty tests address bar shifts at times url bar resizes when refresh stop button is loaded change to brave server for our own hosted extensions allow for auto updating extensions without releases extensions install extensions remotely using default component installer instead of prepackaging them extensions install update extensions w component updater pulldown on menubar cannot be opened until second click windows resize handle on top right corner is hard to click fixed draggable areas of window lots of regions on the titlebar can t be dragged but should be able to be dragged brave payments wrong contribution date reset old reconcilestamp when re enabling payments hide the load time from url bar on all platforms if below the narrow breakpoint hide page load time info in address bar below window width enhancement qr code popup from add funds dialog scrolls independently from its parent qr popup scrolling ledger does not track windows created before the wallet ledger transaction history should have receipt links payment history csv receipt links wip enabling block scripts redirects twitter to mobile site https count shown even when its disabled use main frame url for webrequest first party url unsafe use of firstpartyurl bookmarks toolbar render is off when expander is used line should always appear when user scroll down while in fullscreen the navbar is not visible show top panel in fullscreen on mouse over update translation no confirmation window of importing data from ie importer handle nested bookmarks folder importer should be able to handle nested folder from bookmarks properly make toolbar messages non selectable make toolbar messages non selectable update favicon of imported items update favicon from importer fix suggestions list overflow unintentionally introduced when fixing resizing suggestions clipped add search shortcuts for mdn github and stack 
overflow media queries for elements inside urlinput need improving improvements to title bar shields and windows button hidden in title mode when browser size is reduced titlebar manually entered url as bookmark doesn t show the bookmarked orange icon tracking issue regressions caused by commit session storage do not forget to make a backup of your entire library application support brave folder temporarily move away your library application support brave session store and test that clean session storage works appdata brave in windows config brave in ubuntu test that windows and tabs restore when closed including active tab move away your entire library application support brave folder appdata brave in windows config brave in ubuntu cookie and cache make a backup of your profile turn on all clearing in preferences and shut down make sure when you bring the browser back up everything is gone that is specified go to and set an evercookie check that going to prefs clearing site data and cache and going back to the evercookie site does not remember the old evercookie value update tests test that updating using brave update version env variable works correctly
| 1
|
77,809
| 7,604,256,717
|
IssuesEvent
|
2018-04-29 23:03:48
|
ehmurray8/KytonUI
|
https://api.github.com/repos/ehmurray8/KytonUI
|
closed
|
Reset Config Folder
|
bug needtesting
|
Need to add a script that writes empty config files if they do not exist.
|
1.0
|
Reset Config Folder - Need to add a script that writes empty config files if they do not exist.
|
test
|
reset config folder need to add a script that writes empty config files if they do not exists
| 1
|
239,755
| 18,283,094,342
|
IssuesEvent
|
2021-10-05 07:12:28
|
PyTorchLightning/pytorch-lightning
|
https://api.github.com/repos/PyTorchLightning/pytorch-lightning
|
closed
|
Multi GPU training docs should show batch-size behaviour
|
documentation
|
I am fairly new to multi-GPU training, and it took me quite a while to understand that when using DDP the effective batch size scales with `num_gpus * batch_size`.
In fact, I thought that the batch-size that I provide to the lightning data module would be divided by the number of GPUs so that I often ran into memory errors.
Maybe this could be more clarified in the docs. https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html
|
1.0
|
Multi GPU training docs should show batch-size behaviour - I am fairly new to multi-GPU training, and it took me quite a while to understand that when using DDP the effective batch size scales with `num_gpus * batch_size`.
In fact, I thought that the batch-size that I provide to the lightning data module would be divided by the number of GPUs so that I often ran into memory errors.
Maybe this could be more clarified in the docs. https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html
|
non_test
|
multi gpu training docs should show batch size behaviour i am fairly new to multi gpu training and it took me quite a while to understand that when using ddp the effective batch size scales with num gpus batch size in fact i thought that the batch size that i provide to the lightning data module would be divided by the number of gpus so that i often ran into memory errors maybe this could be more clarified in the docs
| 0
|
70,529
| 7,190,450,056
|
IssuesEvent
|
2018-02-02 17:16:58
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Test: System.Tests.ArrayTests/CreateInstance_NotSupportedType_ThrowsNotSupportedException failed with "System.ArgumentException"
|
test-run-uwp-ilc
|
Opened on behalf of @Jiayili1
The test `System.Tests.ArrayTests/CreateInstance_NotSupportedType_ThrowsNotSupportedException` has failed.
System.ArgumentException : The type 'System.Tests.ArrayTests+GenericClass`1[T]' may not be used as a type argument.
Stack Trace:
at System.Reflection.Runtime.TypeInfos.RuntimeConstructedGenericTypeInfo.ConstructedGenericTypeTable.Factory($UnificationKey key) in f:\dd\ndp\fxcore\CoreRT\src\System.Private.Reflection.Core\src\System\Reflection\Runtime\General\TypeUnifier.cs:line 430
at System.Collections.Concurrent.ConcurrentUnifierWKeyed$2<System.Reflection.Runtime.TypeInfos.RuntimeConstructedGenericTypeInfo.UnificationKey,System.__Canon>.GetOrAdd($UnificationKey key) in f:\dd\ndp\fxcore\CoreRT\src\Common\src\System\Collections\Concurrent\ConcurrentUnifierWKeyed.cs:line 136
at System.Reflection.Runtime.TypeInfos.RuntimeConstructedGenericTypeInfo.GetRuntimeConstructedGenericTypeInfo($RuntimeTypeInfo genericTypeDefinition, $RuntimeTypeInfo[] genericTypeArguments, RuntimeTypeHandle precomputedTypeHandle) in f:\dd\ndp\fxcore\CoreRT\src\System.Private.Reflection.Core\src\System\Reflection\Runtime\General\TypeUnifier.cs:line 381
at System.Reflection.Runtime.TypeInfos.RuntimeTypeInfo.MakeGenericType(Type[] typeArguments) in f:\dd\ndp\fxcore\CoreRT\src\System.Private.Reflection.Core\src\System\Reflection\Runtime\TypeInfos\RuntimeTypeInfo.cs:line 16707566
at System.Tests.ArrayTests.<CreateInstance_NotSupportedType_TestData>d__66.MoveNext() in E:\A\_work\357\s\corefx\src\System.Runtime\tests\System\ArrayTests.cs:line 1618
at System.Linq.Enumerable.SelectEnumerableIterator$2<System.__Canon,System.__Canon>.MoveNext() in E:\A\_work\357\s\corefx\src\System.Linq\src\System\Linq\Select.cs:line 129
Build : Master - 20170901.01 (UWP ILC Tests)
Failing configurations:
- Windows.10.Amd64.ClientRS3-x64
- Debug
- Release
- Windows.10.Amd64.ClientRS3-x86
- Release
- Debug
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Filc~2F/build/20170901.01/workItem/System.Runtime.Tests/analysis/xunit/System.Tests.ArrayTests~2FCreateInstance_NotSupportedType_ThrowsNotSupportedException
|
1.0
|
Test: System.Tests.ArrayTests/CreateInstance_NotSupportedType_ThrowsNotSupportedException failed with "System.ArgumentException" - Opened on behalf of @Jiayili1
The test `System.Tests.ArrayTests/CreateInstance_NotSupportedType_ThrowsNotSupportedException` has failed.
System.ArgumentException : The type 'System.Tests.ArrayTests+GenericClass`1[T]' may not be used as a type argument.
Stack Trace:
at System.Reflection.Runtime.TypeInfos.RuntimeConstructedGenericTypeInfo.ConstructedGenericTypeTable.Factory($UnificationKey key) in f:\dd\ndp\fxcore\CoreRT\src\System.Private.Reflection.Core\src\System\Reflection\Runtime\General\TypeUnifier.cs:line 430
at System.Collections.Concurrent.ConcurrentUnifierWKeyed$2<System.Reflection.Runtime.TypeInfos.RuntimeConstructedGenericTypeInfo.UnificationKey,System.__Canon>.GetOrAdd($UnificationKey key) in f:\dd\ndp\fxcore\CoreRT\src\Common\src\System\Collections\Concurrent\ConcurrentUnifierWKeyed.cs:line 136
at System.Reflection.Runtime.TypeInfos.RuntimeConstructedGenericTypeInfo.GetRuntimeConstructedGenericTypeInfo($RuntimeTypeInfo genericTypeDefinition, $RuntimeTypeInfo[] genericTypeArguments, RuntimeTypeHandle precomputedTypeHandle) in f:\dd\ndp\fxcore\CoreRT\src\System.Private.Reflection.Core\src\System\Reflection\Runtime\General\TypeUnifier.cs:line 381
at System.Reflection.Runtime.TypeInfos.RuntimeTypeInfo.MakeGenericType(Type[] typeArguments) in f:\dd\ndp\fxcore\CoreRT\src\System.Private.Reflection.Core\src\System\Reflection\Runtime\TypeInfos\RuntimeTypeInfo.cs:line 16707566
at System.Tests.ArrayTests.<CreateInstance_NotSupportedType_TestData>d__66.MoveNext() in E:\A\_work\357\s\corefx\src\System.Runtime\tests\System\ArrayTests.cs:line 1618
at System.Linq.Enumerable.SelectEnumerableIterator$2<System.__Canon,System.__Canon>.MoveNext() in E:\A\_work\357\s\corefx\src\System.Linq\src\System\Linq\Select.cs:line 129
Build : Master - 20170901.01 (UWP ILC Tests)
Failing configurations:
- Windows.10.Amd64.ClientRS3-x64
- Debug
- Release
- Windows.10.Amd64.ClientRS3-x86
- Release
- Debug
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Filc~2F/build/20170901.01/workItem/System.Runtime.Tests/analysis/xunit/System.Tests.ArrayTests~2FCreateInstance_NotSupportedType_ThrowsNotSupportedException
|
test
|
test system tests arraytests createinstance notsupportedtype throwsnotsupportedexception failed with system argumentexception opened on behalf of the test system tests arraytests createinstance notsupportedtype throwsnotsupportedexception has failed system argumentexception the type system tests arraytests genericclass may not be used as a type argument stack trace at system reflection runtime typeinfos runtimeconstructedgenerictypeinfo constructedgenerictypetable factory unificationkey key in f dd ndp fxcore corert src system private reflection core src system reflection runtime general typeunifier cs line at system collections concurrent concurrentunifierwkeyed getoradd unificationkey key in f dd ndp fxcore corert src common src system collections concurrent concurrentunifierwkeyed cs line at system reflection runtime typeinfos runtimeconstructedgenerictypeinfo getruntimeconstructedgenerictypeinfo runtimetypeinfo generictypedefinition runtimetypeinfo generictypearguments runtimetypehandle precomputedtypehandle in f dd ndp fxcore corert src system private reflection core src system reflection runtime general typeunifier cs line at system reflection runtime typeinfos runtimetypeinfo makegenerictype type typearguments in f dd ndp fxcore corert src system private reflection core src system reflection runtime typeinfos runtimetypeinfo cs line at system tests arraytests d movenext in e a work s corefx src system runtime tests system arraytests cs line at system linq enumerable selectenumerableiterator movenext in e a work s corefx src system linq src system linq select cs line build master uwp ilc tests failing configurations windows debug release windows release debug detail
| 1
|
450,941
| 32,000,192,147
|
IssuesEvent
|
2023-09-21 11:47:56
|
ilia-brykin/aloha
|
https://api.github.com/repos/ilia-brykin/aloha
|
closed
|
Documentation: Language choice deselectable
|
documentation
|
**Steps to reproduce:**
Open Aloha. In the upper right corner, there is a language selection. Try to deselect the current language.
**Expected Result:**
Deselect not possible
**Current behavior:**
Language can be deselected, resulting in placeholders being replaced with an 'undefined' string.
|
1.0
|
Documentation: Language choice deselectable - **Steps to reproduce:**
Open Aloha. In the upper right corner, there is a language selection. Try to deselect the current language.
**Expected Result:**
Deselect not possible
**Current behavior:**
Language can be deselected, resulting in placeholders being replaced with an 'undefined' string.
|
non_test
|
documentation language choice deselectable steps to reproduce open aloha in the upper right corner there is a langage selection try to deselect current language expected result deselect not possible current behavior language can be deselected resulting in placeholders being replaced with an undefined string
| 0
|
303,464
| 26,209,361,961
|
IssuesEvent
|
2023-01-04 04:01:52
|
datafuselabs/openraft
|
https://api.github.com/repos/datafuselabs/openraft
|
opened
|
Build deterministic tests based on turmoil
|
test
|
https://tokio.rs/blog/2023-01-03-announcing-turmoil
> Today, we are happy to announce the initial release of [`[turmoil](https://crates.io/crates/turmoil)`](https://crates.io/crates/turmoil), a framework for developing and testing distributed systems.
>
> Testing distributed systems is hard. Non-determinism is everywhere (network, time, threads, etc.), making reproducible results difficult to achieve. Development cycles are lengthy due to deployments. All these factors slow down development and make it difficult to ensure system correctness.
>
> `turmoil` strives to solve these problems by simulating hosts, time and the network. This allows for an entire distributed system to run within a single process on a single thread, achieving deterministic execution. We also provide fine grain control over the network, with support for dropping, holding and delaying messages between hosts.
|
1.0
|
Build deterministic tests based on turmoil - https://tokio.rs/blog/2023-01-03-announcing-turmoil
> Today, we are happy to announce the initial release of [`[turmoil](https://crates.io/crates/turmoil)`](https://crates.io/crates/turmoil), a framework for developing and testing distributed systems.
>
> Testing distributed systems is hard. Non-determinism is everywhere (network, time, threads, etc.), making reproducible results difficult to achieve. Development cycles are lengthy due to deployments. All these factors slow down development and make it difficult to ensure system correctness.
>
> `turmoil` strives to solve these problems by simulating hosts, time and the network. This allows for an entire distributed system to run within a single process on a single thread, achieving deterministic execution. We also provide fine grain control over the network, with support for dropping, holding and delaying messages between hosts.
|
test
|
build deterministic tests based on turmoil today we are happy to announce the initial release of a framework for developing and testing distributed systems testing distributed systems is hard non determinism is everywhere network time threads etc making reproducible results difficult to achieve development cycles are lengthy due to deployments all these factors slow down development and make it difficult to ensure system correctness turmoil strives to solve these problems by simulating hosts time and the network this allows for an entire distributed system to run within a single process on a single thread achieving deterministic execution we also provide fine grain control over the network with support for dropping holding and delaying messages between hosts
| 1
|
21,433
| 4,709,906,798
|
IssuesEvent
|
2016-10-14 08:13:44
|
Sylius/Sylius
|
https://api.github.com/repos/Sylius/Sylius
|
closed
|
Gateway variants
|
Documentation
|
Good afternoon!
I do not know when it happened, but in my local copy and on your demo site, all options except "Offline" have disappeared from the gateway list. I think Sylius should have at least one Omnipay gateway option preconfigured.
|
1.0
|
Gateway variants - Good afternoon!
I do not know when it happened, but in my local copy and on your demo site, all options except "Offline" have disappeared from the gateway list. I think Sylius should have at least one Omnipay gateway option preconfigured.
|
non_test
|
gateway variants good afternoon i do not know when it happened but in my local copy and in your demo site disappeared from the gateway list all options except offline i think sylius must have at least one option preconfigured omnipay gateway
| 0
|
421,607
| 28,349,119,099
|
IssuesEvent
|
2023-04-12 00:26:19
|
HFOSSedu/GitKit-FarmData2
|
https://api.github.com/repos/HFOSSedu/GitKit-FarmData2
|
opened
|
Link for Pull Requests
|
documentation enhancement Round1 Links
|
Make the phrase "Make a Pull Request" into a link to documentation about how to make a pull request. This phrase is in the final bullet point of the "Workflow" section of the CONTRIBUTING.md file.
Use the link: https://docs.github.com/en/github/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request
Information about making a link in Markdown can be found here: https://docs.github.com/en/github/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax#links
|
1.0
|
Link for Pull Requests - Make the phrase "Make a Pull Request" into a link to documentation about how to make a pull request. This phrase is in the final bullet point of the "Workflow" section of the CONTRIBUTING.md file.
Use the link: https://docs.github.com/en/github/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request
Information about making a link in Markdown can be found here: https://docs.github.com/en/github/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax#links
|
non_test
|
link for pull requests make the phrase make a pull request into a link to documentation about how to make a pull request this phrase is in the final bullet point of the workflow section of the contributing md file use the link information about making a link in markdown can be found here
| 0
|
18,262
| 4,241,904,402
|
IssuesEvent
|
2016-07-06 17:46:33
|
Comcast/traffic_control
|
https://api.github.com/repos/Comcast/traffic_control
|
opened
|
Invalid regex causes Traffic Router to refuse DNS requests
|
bug documentation duplicate enhancement Traffic Router
|
The following exception occurred when an invalid regex was specified on a delivery service. It was a HOST regex at index 0. The exception occurred in such a way that Traffic Router was refusing all queries, which in practice, means that it has no zone data.
Fix the issue and ensure that an invalid regex only affects the delivery service on which it was configured, and log a big nasty fatal message.
This issue is related to and should be fixed along with #514.
```
INFO 2016-07-06T17:21:35.219 [New I/O worker #19] com.comcast.cdn.traffic_control.traffic_router.core.dns.ZoneManager - Generating zone data
FATAL 2016-07-06T17:21:35.220 [New I/O worker #19] com.comcast.cdn.traffic_control.traffic_router.core.dns.ZoneManager - Caught fatal exception while generating zone data!
org.xbill.DNS.TextParseException: '.\.some-ds-regex\..': invalid empty label
at org.xbill.DNS.Name.parseException(Name.java:172)
at org.xbill.DNS.Name.<init>(Name.java:251)
at org.xbill.DNS.Name.<init>(Name.java:288)
at com.comcast.cdn.traffic_control.traffic_router.core.dns.ZoneManager.newName(ZoneManager.java:572)
at com.comcast.cdn.traffic_control.traffic_router.core.dns.ZoneManager.populateZoneMap(ZoneManager.java:622)
at com.comcast.cdn.traffic_control.traffic_router.core.dns.ZoneManager.generateZones(ZoneManager.java:349)
at com.comcast.cdn.traffic_control.traffic_router.core.dns.ZoneManager.initZoneCache(ZoneManager.java:172)
at com.comcast.cdn.traffic_control.traffic_router.core.dns.ZoneManager.<init>(ZoneManager.java:110)
at com.comcast.cdn.traffic_control.traffic_router.core.router.TrafficRouter.<init>(TrafficRouter.java:104)
at com.comcast.cdn.traffic_control.traffic_router.core.router.TrafficRouterManager.setCacheRegister(TrafficRouterManager.java:101)
```
|
1.0
|
Invalid regex causes Traffic Router to refuse DNS requests - The following exception occurred when an invalid regex was specified on a delivery service. It was a HOST regex at index 0. The exception occurred in such a way that Traffic Router was refusing all queries, which in practice, means that it has no zone data.
Fix the issue and ensure that an invalid regex only affects the delivery service on which it was configured, and log a big nasty fatal message.
This issue is related to and should be fixed along with #514.
```
INFO 2016-07-06T17:21:35.219 [New I/O worker #19] com.comcast.cdn.traffic_control.traffic_router.core.dns.ZoneManager - Generating zone data
FATAL 2016-07-06T17:21:35.220 [New I/O worker #19] com.comcast.cdn.traffic_control.traffic_router.core.dns.ZoneManager - Caught fatal exception while generating zone data!
org.xbill.DNS.TextParseException: '.\.some-ds-regex\..': invalid empty label
at org.xbill.DNS.Name.parseException(Name.java:172)
at org.xbill.DNS.Name.<init>(Name.java:251)
at org.xbill.DNS.Name.<init>(Name.java:288)
at com.comcast.cdn.traffic_control.traffic_router.core.dns.ZoneManager.newName(ZoneManager.java:572)
at com.comcast.cdn.traffic_control.traffic_router.core.dns.ZoneManager.populateZoneMap(ZoneManager.java:622)
at com.comcast.cdn.traffic_control.traffic_router.core.dns.ZoneManager.generateZones(ZoneManager.java:349)
at com.comcast.cdn.traffic_control.traffic_router.core.dns.ZoneManager.initZoneCache(ZoneManager.java:172)
at com.comcast.cdn.traffic_control.traffic_router.core.dns.ZoneManager.<init>(ZoneManager.java:110)
at com.comcast.cdn.traffic_control.traffic_router.core.router.TrafficRouter.<init>(TrafficRouter.java:104)
at com.comcast.cdn.traffic_control.traffic_router.core.router.TrafficRouterManager.setCacheRegister(TrafficRouterManager.java:101)
```
|
non_test
|
invalid regex causes traffic router to refuse dns requests the following exception occurred when an invalid regex was specified on a delivery service it was a host regex at index the exception occurred in such a way that traffic router was refusing all queries which in practice means that it has no zone data fix the issue and ensure that an invalid regex only affects the delivery service on which it was configured and log a big nasty fatal message this issue is related to and should be fixed along with info com comcast cdn traffic control traffic router core dns zonemanager generating zone data fatal com comcast cdn traffic control traffic router core dns zonemanager caught fatal exception while generating zone data org xbill dns textparseexception some ds regex invalid empty label at org xbill dns name parseexception name java at org xbill dns name name java at org xbill dns name name java at com comcast cdn traffic control traffic router core dns zonemanager newname zonemanager java at com comcast cdn traffic control traffic router core dns zonemanager populatezonemap zonemanager java at com comcast cdn traffic control traffic router core dns zonemanager generatezones zonemanager java at com comcast cdn traffic control traffic router core dns zonemanager initzonecache zonemanager java at com comcast cdn traffic control traffic router core dns zonemanager zonemanager java at com comcast cdn traffic control traffic router core router trafficrouter trafficrouter java at com comcast cdn traffic control traffic router core router trafficroutermanager setcacheregister trafficroutermanager java
| 0
|
11,204
| 2,641,743,199
|
IssuesEvent
|
2015-03-11 19:29:19
|
chrsmith/html5rocks
|
https://api.github.com/repos/chrsmith/html5rocks
|
closed
|
chrome dev tool part 2 tutorial
|
Milestone-2 Priority-Medium Tutorial Type-Defect
|
Original [issue 110](https://code.google.com/p/html5rocks/issues/detail?id=110) created by chrsmith on 2010-07-29T04:16:56.000Z:
<b>What steps will reproduce the problem?</b>
<b>1.</b>
<b>2.</b>
<b>3.</b>
<b>What is the expected output? What do you see instead?</b>
<b>Please use labels and text to provide additional information.</b>
|
1.0
|
chrome dev tool part 2 tutorial - Original [issue 110](https://code.google.com/p/html5rocks/issues/detail?id=110) created by chrsmith on 2010-07-29T04:16:56.000Z:
<b>What steps will reproduce the problem?</b>
<b>1.</b>
<b>2.</b>
<b>3.</b>
<b>What is the expected output? What do you see instead?</b>
<b>Please use labels and text to provide additional information.</b>
|
non_test
|
chrome dev tool part tutorial original created by chrsmith on what steps will reproduce the problem what is the expected output what do you see instead please use labels and text to provide additional information
| 0
|
110,534
| 4,428,349,162
|
IssuesEvent
|
2016-08-17 01:44:01
|
DanielArndt/vim
|
https://api.github.com/repos/DanielArndt/vim
|
opened
|
Vim line numbers / distance in lines from cursor toggle
|
priority-low size-small
|
* Allow toggling from line numbers to the number of lines from cursor
|
1.0
|
Vim line numbers / distance in lines from cursor toggle - * Allow toggling from line numbers to the number of lines from cursor
|
non_test
|
vim line numbers distance in lines from cursor toggle allow toggling from line numbers to the number of lines from cursor
| 0
|
194,447
| 22,261,990,202
|
IssuesEvent
|
2022-06-10 01:57:07
|
ShaikUsaf/linux-4.19.72_CVE-2020-10757
|
https://api.github.com/repos/ShaikUsaf/linux-4.19.72_CVE-2020-10757
|
reopened
|
CVE-2019-19767 (Medium) detected in linuxlinux-4.19.236, linuxlinux-4.19.236
|
security vulnerability
|
## CVE-2019-19767 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.19.236</b>, <b>linuxlinux-4.19.236</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel before 5.4.2 mishandles ext4_expand_extra_isize, as demonstrated by use-after-free errors in __ext4_expand_extra_isize and ext4_xattr_set_entry, related to fs/ext4/inode.c and fs/ext4/super.c, aka CID-4ea99936a163.
<p>Publish Date: 2019-12-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19767>CVE-2019-19767</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2019-19767">https://www.linuxkernelcves.com/cves/CVE-2019-19767</a></p>
<p>Release Date: 2020-01-03</p>
<p>Fix Resolution: v5.5-rc1,v3.16.81,v4.14.158,v4.19.88,v4.4.211,v4.9.211,v5.3.15,v5.4.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-19767 (Medium) detected in linuxlinux-4.19.236, linuxlinux-4.19.236 - ## CVE-2019-19767 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.19.236</b>, <b>linuxlinux-4.19.236</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel before 5.4.2 mishandles ext4_expand_extra_isize, as demonstrated by use-after-free errors in __ext4_expand_extra_isize and ext4_xattr_set_entry, related to fs/ext4/inode.c and fs/ext4/super.c, aka CID-4ea99936a163.
<p>Publish Date: 2019-12-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19767>CVE-2019-19767</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2019-19767">https://www.linuxkernelcves.com/cves/CVE-2019-19767</a></p>
<p>Release Date: 2020-01-03</p>
<p>Fix Resolution: v5.5-rc1,v3.16.81,v4.14.158,v4.19.88,v4.4.211,v4.9.211,v5.3.15,v5.4.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in linuxlinux linuxlinux cve medium severity vulnerability vulnerable libraries linuxlinux linuxlinux vulnerability details the linux kernel before mishandles expand extra isize as demonstrated by use after free errors in expand extra isize and xattr set entry related to fs inode c and fs super c aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
26,347
| 4,217,140,002
|
IssuesEvent
|
2016-06-30 12:00:06
|
owncloud/client
|
https://api.github.com/repos/owncloud/client
|
closed
|
Use tokens on client
|
gold-ticket ReadyToTest
|
We want to support tokens in the client.
Users logs in and the client gets a token from the server. And uses that.
This should be in the 9.1 server. We will know more details on monday.
|
1.0
|
Use tokens on client - We want to support tokens in the client.
Users logs in and the client gets a token from the server. And uses that.
This should be in the 9.1 server. We will know more details on monday.
|
test
|
use tokens on client we want to support tokens in the client users logs in and the client gets a token from the server and uses that this should be in the server we will know more details on monday
| 1
|
82,925
| 15,681,759,450
|
IssuesEvent
|
2021-03-25 06:03:39
|
LalithK90/processManagement
|
https://api.github.com/repos/LalithK90/processManagement
|
opened
|
CVE-2021-25329 (High) detected in tomcat-embed-core-9.0.30.jar
|
security vulnerability
|
## CVE-2021-25329 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.30.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: processManagement/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.30/ad32909314fe2ba02cec036434c0addd19bcc580/tomcat-embed-core-9.0.30.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.4.RELEASE.jar
- :x: **tomcat-embed-core-9.0.30.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/LalithK90/processManagement/commit/427d2a81b9587dfae6aa004538406d073778baf4">427d2a81b9587dfae6aa004538406d073778baf4</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The fix for CVE-2020-9484 was incomplete. When using Apache Tomcat 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41, 8.5.0 to 8.5.61 or 7.0.0. to 7.0.107 with a configuration edge case that was highly unlikely to be used, the Tomcat instance was still vulnerable to CVE-2020-9494. Note that both the previously published prerequisites for CVE-2020-9484 and the previously published mitigations for CVE-2020-9484 also apply to this issue.
<p>Publish Date: 2021-03-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25329>CVE-2021-25329</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2021-03-01</p>
<p>Fix Resolution: org.apache.tomcat:tomcat:7.0.108, org.apache.tomcat:tomcat:8.5.63, org.apache.tomcat:tomcat:9.0.43,org.apache.tomcat:tomcat:10.0.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-25329 (High) detected in tomcat-embed-core-9.0.30.jar - ## CVE-2021-25329 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.30.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: processManagement/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.30/ad32909314fe2ba02cec036434c0addd19bcc580/tomcat-embed-core-9.0.30.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.4.RELEASE.jar
- :x: **tomcat-embed-core-9.0.30.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/LalithK90/processManagement/commit/427d2a81b9587dfae6aa004538406d073778baf4">427d2a81b9587dfae6aa004538406d073778baf4</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The fix for CVE-2020-9484 was incomplete. When using Apache Tomcat 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41, 8.5.0 to 8.5.61 or 7.0.0. to 7.0.107 with a configuration edge case that was highly unlikely to be used, the Tomcat instance was still vulnerable to CVE-2020-9494. Note that both the previously published prerequisites for CVE-2020-9484 and the previously published mitigations for CVE-2020-9484 also apply to this issue.
<p>Publish Date: 2021-03-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25329>CVE-2021-25329</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2021-03-01</p>
<p>Fix Resolution: org.apache.tomcat:tomcat:7.0.108, org.apache.tomcat:tomcat:8.5.63, org.apache.tomcat:tomcat:9.0.43,org.apache.tomcat:tomcat:10.0.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in tomcat embed core jar cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file processmanagement build gradle path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch master vulnerability details the fix for cve was incomplete when using apache tomcat to to to or to with a configuration edge case that was highly unlikely to be used the tomcat instance was still vulnerable to cve note that both the previously published prerequisites for cve and the previously published mitigations for cve also apply to this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat tomcat org apache tomcat tomcat org apache tomcat tomcat org apache tomcat tomcat step up your open source security game with whitesource
| 0
|
134,407
| 10,914,475,412
|
IssuesEvent
|
2019-11-21 09:16:21
|
ckeditor/ckeditor4-angular
|
https://api.github.com/repos/ckeditor/ckeditor4-angular
|
opened
|
IE11/Edge unstable "DemoForm Component should create" unit test
|
confirmed failing test
|
## Are you reporting a feature request or a bug?
Failing test
## Provide detailed reproduction steps (if any)
Unstable `DemoForm Component` `should create` test which sometimes fails on IE11/Edge. I have added longer Jasmine timeouts but looking at the log, two timeouts are mentioned in the error (default and the longer one) so I'm not sure if it works correctly and should be investigated:

|
1.0
|
IE11/Edge unstable "DemoForm Component should create" unit test - ## Are you reporting a feature request or a bug?
Failing test
## Provide detailed reproduction steps (if any)
Unstable `DemoForm Component` `should create` test which sometimes fails on IE11/Edge. I have added longer Jasmine timeouts but looking at the log, two timeouts are mentioned in the error (default and the longer one) so I'm not sure if it works correctly and should be investigated:

|
test
|
edge unstable demoform component should create unit test are you reporting a feature request or a bug failing test provide detailed reproduction steps if any unstable demoform component should create test which sometimes fails on edge i have added longer jasmine timeouts but looking at the log two timeouts are mentioned in the error default and the longer one so i m not sure if it works correctly and should be investigated
| 1
|
440,026
| 30,726,865,711
|
IssuesEvent
|
2023-07-27 20:27:14
|
WordPress/Documentation-Issue-Tracker
|
https://api.github.com/repos/WordPress/Documentation-Issue-Tracker
|
opened
|
[HelpHub]: Replace "More rich text editing" content with detailed page link in existing articles
|
user documentation (HelpHub) [Status] To do
|
Review existing articles and replace the More rich text editing content/screenshots and details with a link to the page - https://wordpress.org/documentation/article/more-text-editing-overview/. This helps to maintain the changes easily.
Please add a link to the pages that are updated, in the comments
## General
- [ ] Update the changelog at the end of the article
|
1.0
|
[HelpHub]: Replace "More rich text editing" content with detailed page link in existing articles - Review existing articles and replace the More rich text editing content/screenshots and details with a link to the page - https://wordpress.org/documentation/article/more-text-editing-overview/. This helps to maintain the changes easily.
Please add a link to the pages that are updated, in the comments
## General
- [ ] Update the changelog at the end of the article
|
non_test
|
replace more rich text editing content with detailed page link in existing articles review existing articles and replace the more rich text editing content screenshots and details with a link to the page this helps to maintain the changes easily please add a link to the pages that are updated in the comments general update the changelog at the end of the article
| 0
|
173,427
| 14,410,493,219
|
IssuesEvent
|
2020-12-04 05:01:29
|
openshift/odo
|
https://api.github.com/repos/openshift/odo
|
closed
|
Spring Boot example reports "running on IBM Cloud"
|
kind/documentation priority/Medium
|
/kind bug
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the Google group if you have a question rather than a bug or feature request.
The group is at: https://groups.google.com/forum/#!forum/odo-users
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
**Operating System:**
MacOS
**Output of `odo version`:**
odo v2.0.0 (6fbb9d9bf)
From this example/doc
https://odo.dev/docs/deploying-a-devfile-using-odo/
and specifically in
git clone https://github.com/odo-devfiles/springboot-ex
## Actual behavior
Seen in this screenshot
https://www.screencast.com/t/3ZPzjw1e1c
## Expected behavior
should be something related to "odo", not IBM cloud
## Any logs, error output, etc?
|
1.0
|
Spring Boot example reports "running on IBM Cloud" - /kind bug
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the Google group if you have a question rather than a bug or feature request.
The group is at: https://groups.google.com/forum/#!forum/odo-users
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
**Operating System:**
MacOS
**Output of `odo version`:**
odo v2.0.0 (6fbb9d9bf)
From this example/doc
https://odo.dev/docs/deploying-a-devfile-using-odo/
and specifically in
git clone https://github.com/odo-devfiles/springboot-ex
## Actual behavior
Seen in this screenshot
https://www.screencast.com/t/3ZPzjw1e1c
## Expected behavior
should be something related to "odo", not IBM cloud
## Any logs, error output, etc?
|
non_test
|
spring boot example reports running on ibm cloud kind bug welcome we kindly ask you to fill out the issue template below use the google group if you have a question rather than a bug or feature request the group is at thanks for understanding and for contributing to the project what versions of software are you using operating system macos output of odo version odo from this example doc and specifically in git clone actual behavior seen in this screenshot expected behavior should be something related to odo not ibm cloud any logs error output etc
| 0
|
158,299
| 24,820,417,801
|
IssuesEvent
|
2022-10-25 16:00:10
|
Swift-Coding-Club/Team_Nav
|
https://api.github.com/repos/Swift-Coding-Club/Team_Nav
|
opened
|
[Design] Add main map UI
|
design
|
### Description
- Adds the UI for the main map.
## Expected design
<img width="300" alt="Screenshot 2022-10-26 00 58 38" src="https://user-images.githubusercontent.com/81027256/197823426-ccb0ab94-9ac2-486f-b91c-aff34f71e6e2.png">
### Expected work period
~ 10/26 (Wed)
|
1.0
|
[Design] Add main map UI - ### Description
- Adds the UI for the main map.
## Expected design
<img width="300" alt="Screenshot 2022-10-26 00 58 38" src="https://user-images.githubusercontent.com/81027256/197823426-ccb0ab94-9ac2-486f-b91c-aff34f71e6e2.png">
### Expected work period
~ 10/26 (Wed)
|
non_test
|
add main map ui description adds the ui for the main map expected design img width alt screenshot src expected work period wed
| 0
|
42,530
| 6,988,135,740
|
IssuesEvent
|
2017-12-14 11:45:23
|
pkrog/biodb
|
https://api.github.com/repos/pkrog/biodb
|
closed
|
msmsSearch better example
|
documentation msms
|
In the vignette named ms-search, the output of the msmsSearch example, using massbank, is full of 0 values in the score column. Need to find a more suitable example.
|
1.0
|
msmsSearch better example - In the vignette named ms-search, the output of the msmsSearch example, using massbank, is full of 0 values in the score column. Need to find a more suitable example.
|
non_test
|
msmssearch better example in the vignette named ms search the output of the msmssearch example using massbank is full of values in the score column need to find a more suitable example
| 0
|
22,008
| 18,273,611,207
|
IssuesEvent
|
2021-10-04 16:10:07
|
DataBiosphere/toil
|
https://api.github.com/repos/DataBiosphere/toil
|
closed
|
Issues launching AWS clusters
|
usability
|
Good Afternoon Team,
Hope all is well!
Wanted to reach out as I am experiencing issues with launching toil on AWS clusters as of late:
```
toil launch-cluster xxx --leaderNodeType t2.2xlarge -z us-east-2a --keyPairName xxx --leaderStorage 100
[2021-09-14T11:39:41-0400] [MainThread] [I] [toil] Using default docker registry of quay.io/ucsc_cgl as TOIL_DOCKER_REGISTRY is not set.
[2021-09-14T11:39:41-0400] [MainThread] [I] [toil] Using default docker name of toil as TOIL_DOCKER_NAME is not set.
[2021-09-14T11:39:41-0400] [MainThread] [I] [toil] Using default docker appliance of quay.io/ucsc_cgl/toil:5.4.0-87293d63fa6c76f03bed3adf93414ffee67bf9a7-py3.6 as TOIL_APPLIANCE_SELF is not set.
[2021-09-14T11:39:41-0400] [MainThread] [I] [toil.utils.toilLaunchCluster] Creating cluster xxx...
[2021-09-14T11:39:44-0400] [MainThread] [I] [toil] Using default user-defined custom docker init command of as TOIL_CUSTOM_DOCKER_INIT_COMMAND is not set.
[2021-09-14T11:39:44-0400] [MainThread] [I] [toil] Using default user-defined custom init command of as TOIL_CUSTOM_INIT_COMMAND is not set.
[2021-09-14T11:39:44-0400] [MainThread] [I] [toil] Using default docker registry of quay.io/ucsc_cgl as TOIL_DOCKER_REGISTRY is not set.
[2021-09-14T11:39:44-0400] [MainThread] [I] [toil] Using default docker name of toil as TOIL_DOCKER_NAME is not set.
[2021-09-14T11:39:44-0400] [MainThread] [I] [toil] Using default docker appliance of quay.io/ucsc_cgl/toil:5.4.0-87293d63fa6c76f03bed3adf93414ffee67bf9a7-py3.6 as TOIL_APPLIANCE_SELF is not set.
[2021-09-14T11:39:45-0400] [MainThread] [I] [toil.lib.ec2] Selected Flatcar AMI: ami-07e82385de8861b75
[2021-09-14T11:39:45-0400] [MainThread] [I] [toil.lib.ec2] Creating t2.2xlarge instance(s) ...
[2021-09-14T11:39:51-0400] [MainThread] [I] [toil.lib.ec2] Creating t2.2xlarge instance(s) ...
[2021-09-14T11:40:26-0400] [MainThread] [I] [toil.provisioners.node] Attempting to establish SSH connection...
[2021-09-14T11:40:27-0400] [MainThread] [I] [toil.provisioners.node] ...SSH connection established.
[2021-09-14T11:40:27-0400] [MainThread] [I] [toil.provisioners.node] Waiting for docker on xxx to start...
[2021-09-14T11:40:28-0400] [MainThread] [I] [toil.provisioners.node] Docker daemon running
[2021-09-14T11:40:28-0400] [MainThread] [I] [toil.provisioners.node] Waiting for toil_leader Toil appliance to start...
[2021-09-14T11:40:28-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:40:48-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:41:09-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:41:30-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:41:50-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:42:11-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:42:31-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:42:52-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:43:12-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:43:33-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:43:53-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:44:13-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:44:34-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:44:54-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:45:15-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:45:35-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:45:56-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:46:16-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:46:37-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:46:57-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
[2021-09-14T11:47:18-0400] [MainThread] [I] [toil.provisioners.node] ...Still waiting for appliance, trying again in 20 sec...
Traceback (most recent call last):
File "/Users/jpuerto/toil-test/venv/bin/toil", line 8, in <module>
sys.exit(main())
File "/Users/jpuerto/toil-test/venv/lib/python3.7/site-packages/toil/utils/toilMain.py", line 31, in main
get_or_die(module, 'main')()
File "/Users/jpuerto/toil-test/venv/lib/python3.7/site-packages/toil/utils/toilLaunchCluster.py", line 168, in main
awsEc2ExtraSecurityGroupIds=options.awsEc2ExtraSecurityGroupIds)
File "/Users/jpuerto/toil-test/venv/lib/python3.7/site-packages/toil/provisioners/aws/awsProvisioner.py", line 298, in launchCluster
leaderNode.waitForNode('toil_leader')
File "/Users/jpuerto/toil-test/venv/lib/python3.7/site-packages/toil/provisioners/node.py", line 75, in waitForNode
self._waitForAppliance(role=role, keyName=keyName)
File "/Users/jpuerto/toil-test/venv/lib/python3.7/site-packages/toil/provisioners/node.py", line 171, in _waitForAppliance
"\nCheck if TOIL_APPLIANCE_SELF is set correctly and the container exists.")
RuntimeError: Appliance failed to start on machine with IP: xxxx
Check if TOIL_APPLIANCE_SELF is set correctly and the container exists.
```
Any ideas on what might be going on here? I just updated to latest toil this morning. Please let me know if there is any additional information that might help with debugging.
Best regards,
Juan
┆Issue is synchronized with this [Jira Task](https://ucsc-cgl.atlassian.net/browse/TOIL-1013)
┆Issue Number: TOIL-1013
|
True
|
"\nCheck if TOIL_APPLIANCE_SELF is set correctly and the container exists.")
RuntimeError: Appliance failed to start on machine with IP: xxxx
Check if TOIL_APPLIANCE_SELF is set correctly and the container exists.
```
Any ideas on what might be going on here? I just updated to latest toil this morning. Please let me know if there is any additional information that might help with debugging.
Best regards,
Juan
┆Issue is synchronized with this [Jira Task](https://ucsc-cgl.atlassian.net/browse/TOIL-1013)
┆Issue Number: TOIL-1013
|
non_test
|
issues launching aws clusters good afternoon team hope all is well wanted to reach out as i am experiencing issues with launching toil on aws clusters as of late toil launch cluster xxx leadernodetype z us east keypairname xxx leaderstorage using default docker registry of quay io ucsc cgl as toil docker registry is not set using default docker name of toil as toil docker name is not set using default docker appliance of quay io ucsc cgl toil as toil appliance self is not set creating cluster xxx using default user defined custom docker init command of as toil custom docker init command is not set using default user defined custom init command of as toil custom init command is not set using default docker registry of quay io ucsc cgl as toil docker registry is not set using default docker name of toil as toil docker name is not set using default docker appliance of quay io ucsc cgl toil as toil appliance self is not set selected flatcar ami ami creating instance s creating instance s attempting to establish ssh connection ssh connection established waiting for docker on xxx to start docker daemon running waiting for toil leader toil appliance to start still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting 
for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec still waiting for appliance trying again in sec traceback most recent call last file users jpuerto toil test venv bin toil line in sys exit main file users jpuerto toil test venv lib site packages toil utils toilmain py line in main get or die module main file users jpuerto toil test venv lib site packages toil utils toillaunchcluster py line in main options file users jpuerto toil test venv lib site packages toil provisioners aws awsprovisioner py line in launchcluster leadernode waitfornode toil leader file users jpuerto toil test venv lib site packages toil provisioners node py line in waitfornode self waitforappliance role role keyname keyname file users jpuerto toil test venv lib site packages toil provisioners node py line in waitforappliance ncheck if toil appliance self is set correctly and the container exists runtimeerror appliance failed to start on machine with ip xxxx check if toil appliance self is set correctly and the container exists any ideas on what might be going on here i just updated to latest toil this morning please let me know if there is any additional information that might help with debugging best regards juan ┆issue is synchronized with this ┆issue number toil
| 0
|
131,506
| 10,697,528,670
|
IssuesEvent
|
2019-10-23 16:42:49
|
microsoft/MixedRealityToolkit-Unity
|
https://api.github.com/repos/microsoft/MixedRealityToolkit-Unity
|
closed
|
Unreliable test PointerBehaviorTests.TestGrab
|
Bug Current Iteration Tests
|
## Overview
PointerBehaviorTests.TestGrab failed from unrelated changes in PR
#6168. The failure was: https://github.com/microsoft/MixedRealityToolkit-Unity/pull/6186/checks?check_run_id=245737592
Error was:
```
Incorrect state for Left Line Pointer.IsInteractionEnabled
Expected: False
But was: True
```
|
1.0
|
Unreliable test PointerBehaviorTests.TestGrab - ## Overview
PointerBehaviorTests.TestGrab failed from unrelated changes in PR
#6168. The failure was: https://github.com/microsoft/MixedRealityToolkit-Unity/pull/6186/checks?check_run_id=245737592
Error was:
```
Incorrect state for Left Line Pointer.IsInteractionEnabled
Expected: False
But was: True
```
|
test
|
unreliable test pointerbehaviortests testgrab overview pointerbehaviortests testgrab failed from unrelated changes in pr the failure was error was incorrect state for left line pointer isinteractionenabled expected false but was true
| 1
|
41,774
| 10,602,193,986
|
IssuesEvent
|
2019-10-10 13:48:28
|
google/guava
|
https://api.github.com/repos/google/guava
|
closed
|
MediaType::toString can produce unparsable results
|
P2 package=net type=defect
|
I've found cases where `MediaType::toString` generates erroneous results.
One case is when the `MediaType` instance has a parameter of an empty value. This can happen when calling `withParameter("foo", "")`, or parsing `text/plain; foo=""`. When the parameters are later joined, the value is erroneously matched as a token instead of being quoted (as tokens cannot be empty). This will lead the following to throw `IllegalArgumentException`:
```java
MediaType type1 = MediaType.parse("text/plain; foo=\"\"");
MediaType type2 = MediaType.parse(type1.toString()); // Trying to parse 'text/plain; foo=' will fail
```
I think [this line](https://github.com/google/guava/blob/5a8f19bd3556012ed9e65cd4268a85ddde95733f/guava/src/com/google/common/net/MediaType.java#L1091) should first check if the string is empty before matching against `TOKEN_MATCHER`, or maybe just return the literal `"\"\""` on that case to avoid the `StringBuilder` allocation.
Another case is when parameters are set explicitly using `withParameter[s](...)`, where parameter value[s] contain non-ASCII characters. For example this will also throw `IllegalArgumentException`:
```java
MediaType type1 = MediaType.create("text", "plain").withParameter("let_me_in!", "\"جوافة\"");
MediaType type2 = MediaType.parse(type1.toString()); // The tokenizer will fail to consume QUOTED_TEXT_MATCHER
```
I don't know whether this is the intended behaviour or not but `MediaType::toString` can be used in HTTP header values which shouldn't contain non-ASCII octets. IMHO `normalizeParameterValue(...)` should validate the value against the `ascii()` matcher.
|
1.0
|
MediaType::toString can produce unparsable results - I've found cases where `MediaType::toString` generates erroneous results.
One case is when the `MediaType` instance has a parameter of an empty value. This can happen when calling `withParameter("foo", "")`, or parsing `text/plain; foo=""`. When the parameters are later joined, the value is erroneously matched as a token instead of being quoted (as tokens cannot be empty). This will lead the following to throw `IllegalArgumentException`:
```java
MediaType type1 = MediaType.parse("text/plain; foo=\"\"");
MediaType type2 = MediaType.parse(type1.toString()); // Trying to parse 'text/plain; foo=' will fail
```
I think [this line](https://github.com/google/guava/blob/5a8f19bd3556012ed9e65cd4268a85ddde95733f/guava/src/com/google/common/net/MediaType.java#L1091) should first check if the string is empty before matching against `TOKEN_MATCHER`, or maybe just return the literal `"\"\""` on that case to avoid the `StringBuilder` allocation.
Another case is when parameters are set explicitly using `withParameter[s](...)`, where parameter value[s] contain non-ASCII characters. For example this will also throw `IllegalArgumentException`:
```java
MediaType type1 = MediaType.create("text", "plain").withParameter("let_me_in!", "\"جوافة\"");
MediaType type2 = MediaType.parse(type1.toString()); // The tokenizer will fail to consume QUOTED_TEXT_MATCHER
```
I don't know whether this is the intended behaviour or not but `MediaType::toString` can be used in HTTP header values which shouldn't contain non-ASCII octets. IMHO `normalizeParameterValue(...)` should validate the value against the `ascii()` matcher.
|
non_test
|
mediatype tostring can produce unparsable results i ve found cases where mediatype tostring generates erroneous results one case is when the mediatype instance has a parameter of an empty value this can happen when calling withparameter foo or parsing text plain foo when the parameters are later joined the value is erroneously matched as a token instead of being quoted as tokens cannot be empty this will lead the following to throw illegalargumentexception java mediatype mediatype parse text plain foo mediatype mediatype parse tostring trying to parse text plain foo will fail i think should first check if the string is empty before matching against token matcher or maybe just return the literal on that case to avoid the stringbuilder allocation another case is when parameters are set explicitly using withparameter where parameter value contain non ascii characters for example this will also throw illegalargumentexception java mediatype mediatype create text plain withparameter let me in جوافة mediatype mediatype parse tostring the tokenizer will fail to consume quoted text matcher i don t know whether this is the intended behaviour or not but mediatype tostring can be used in http header values which shouldn t contain non ascii octets imho normalizeparametervalue should validate the value against the ascii matcher
| 0
|
23,997
| 4,055,469,185
|
IssuesEvent
|
2016-05-24 15:33:53
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
stress: failed test in cockroach/cli/cli.test: TestQuit
|
Robot test-failure
|
Binary: cockroach/static-tests.tar.gz sha: https://github.com/cockroachdb/cockroach/commits/08d664015764b5b04fc09946d0588e16f8651cca
Stress build found a failed test:
```
=== RUN TestQuit
ERROR: exit status 255
```
Run Details:
```
0 runs so far, 0 failures, over 5s
8 runs so far, 0 failures, over 10s
12 runs completed, 1 failures, over 11s
FAIL
```
Please assign, take a look and update the issue accordingly.
|
1.0
|
stress: failed test in cockroach/cli/cli.test: TestQuit - Binary: cockroach/static-tests.tar.gz sha: https://github.com/cockroachdb/cockroach/commits/08d664015764b5b04fc09946d0588e16f8651cca
Stress build found a failed test:
```
=== RUN TestQuit
ERROR: exit status 255
```
Run Details:
```
0 runs so far, 0 failures, over 5s
8 runs so far, 0 failures, over 10s
12 runs completed, 1 failures, over 11s
FAIL
```
Please assign, take a look and update the issue accordingly.
|
test
|
stress failed test in cockroach cli cli test testquit binary cockroach static tests tar gz sha stress build found a failed test run testquit error exit status run details runs so far failures over runs so far failures over runs completed failures over fail please assign take a look and update the issue accordingly
| 1
|
322,655
| 27,623,164,327
|
IssuesEvent
|
2023-03-10 03:02:48
|
shridhar-tl/jira-assistant
|
https://api.github.com/repos/shridhar-tl/jira-assistant
|
closed
|
Request to add Summary Tab in new Worklog report
|
enhancement in live testing
|
### Checklist before you being
- [X] I am sure that I am already using latest version of Jira Assistant
- [X] I had verified that there are no existing requests with similar suggestion in [issue tracker](https://github.com/shridhar-tl/jira-assistant/issues)
- [X] I had verified that, my query is not answered in [FAQ section of website](https://www.jiraassistant.com/faq)
### How do you use Jira Assistant?
Browser extension
### Are you using cloud version of Jira or self hosted (data center / server) of Jira.
Cloud Jira
### Version of Jira Assistant
v2.44
### What browser are you using?
Chrome
### Feature Suggestion
This is with reference issue#260
Shridhar,
Thanks for your quick response. I see the grouping options are given already. But Summary Tab is unique and was very helpful. It gives us the insights about the total worklog of user as well as the Project in one single view in horizontal and vertical ways.
In grouping option, for example if I group by Project, I will get a worklog Project wise only. If I need to check for one user, I need to again switch the grouping. In Summary tab, As everything was in single wise, it helps us to create a report and submit it to our Management.
Grouped Option :

Worklog Option (Here it more readable as it is horizontal and vertical manner) :

Also, the old worklog will get removed and it will be there always?
### Checklist before you submit
- [X] I have ensured not to paste any confidential information like Jira url, Mail id, etc.
- [X] I have added required screenshots (as necessary)
|
1.0
|
Request to add Summary Tab in new Worklog report - ### Checklist before you being
- [X] I am sure that I am already using latest version of Jira Assistant
- [X] I had verified that there are no existing requests with similar suggestion in [issue tracker](https://github.com/shridhar-tl/jira-assistant/issues)
- [X] I had verified that, my query is not answered in [FAQ section of website](https://www.jiraassistant.com/faq)
### How do you use Jira Assistant?
Browser extension
### Are you using cloud version of Jira or self hosted (data center / server) of Jira.
Cloud Jira
### Version of Jira Assistant
v2.44
### What browser are you using?
Chrome
### Feature Suggestion
This is with reference issue#260
Shridhar,
Thanks for your quick response. I see the grouping options are given already. But Summary Tab is unique and was very helpful. It gives us the insights about the total worklog of user as well as the Project in one single view in horizontal and vertical ways.
In grouping option, for example if I group by Project, I will get a worklog Project wise only. If I need to check for one user, I need to again switch the grouping. In Summary tab, As everything was in single wise, it helps us to create a report and submit it to our Management.
Grouped Option :

Worklog Option (Here it more readable as it is horizontal and vertical manner) :

Also, the old worklog will get removed and it will be there always?
### Checklist before you submit
- [X] I have ensured not to paste any confidential information like Jira url, Mail id, etc.
- [X] I have added required screenshots (as necessary)
|
test
|
request to add summary tab in new worklog report checklist before you being i am sure that i am already using latest version of jira assistant i had verified that there are no existing requests with similar suggestion in i had verified that my query is not answered in how do you use jira assistant browser extension are you using cloud version of jira or self hosted data center server of jira cloud jira version of jira assistant what browser are you using chrome feature suggestion this is with reference issue shridhar thanks for your quick response i see the grouping options are given already but summary tab is unique and was very helpful it gives us the insights about the total worklog of user as well as the project in one single view in horizontal and vertical ways in grouping option for example if i group by project i will get a worklog project wise only if i need to check for one user i need to again switch the grouping in summary tab as everything was in single wise it helps us to create a report and submit it to our management grouped option worklog option here it more readable as it is horizontal and vertical manner also the old worklog will get removed and it will be there always checklist before you submit i have ensured not to paste any confidential information like jira url mail id etc i have added required screenshots as necessary
| 1
|
351,969
| 32,039,646,715
|
IssuesEvent
|
2023-09-22 18:10:24
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix sorting.test_numpy_argsort
|
NumPy Frontend Sub Task Failing Test
|
| | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6276803759/job/17047271131"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/6276803759/job/17047271131"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6276803759/job/17047271131"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/6276803759/job/17047271131"><img src=https://img.shields.io/badge/-failure-red></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6276803759/job/17047271131"><img src=https://img.shields.io/badge/-success-success></a>
|
1.0
|
Fix sorting.test_numpy_argsort - | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6276803759/job/17047271131"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/6276803759/job/17047271131"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6276803759/job/17047271131"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/6276803759/job/17047271131"><img src=https://img.shields.io/badge/-failure-red></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6276803759/job/17047271131"><img src=https://img.shields.io/badge/-success-success></a>
|
test
|
fix sorting test numpy argsort numpy a href src jax a href src tensorflow a href src torch a href src paddle a href src
| 1
|
262,272
| 22,828,098,413
|
IssuesEvent
|
2022-07-12 10:23:20
|
foundry-rs/foundry
|
https://api.github.com/repos/foundry-rs/foundry
|
closed
|
Allow forking from different RPCs/block numbers when testing
|
A-evm T-feature Cmd-forge-test C-forge Cmd-forge-debug
|
### Component
Forge
### Describe the feature you would like
Currently if you want to test against mainnet state, you can pass in an RPC url and block number to fork from. However it isn't possible to run tests on multiple networks or fork from different block numbers.
For example, you might want 1 test to run against the state at block number 10000, and another at block number 20000.
Hardhat has a method called `hardhat_reset` which allows you to change the RPC url/block number mainnet state is forked from: https://hardhat.org/hardhat-network/guides/mainnet-forking.html#resetting-the-fork
A cheatcode similar to that would be useful. It could look like this:
```solidity
hevm.reset("https://infura.io/....", 12345);
```
### Additional context
_No response_
|
1.0
|
Allow forking from different RPCs/block numbers when testing - ### Component
Forge
### Describe the feature you would like
Currently if you want to test against mainnet state, you can pass in an RPC url and block number to fork from. However it isn't possible to run tests on multiple networks or fork from different block numbers.
For example, you might want 1 test to run against the state at block number 10000, and another at block number 20000.
Hardhat has a method called `hardhat_reset` which allows you to change the RPC url/block number mainnet state is forked from: https://hardhat.org/hardhat-network/guides/mainnet-forking.html#resetting-the-fork
A cheatcode similar to that would be useful. It could look like this:
```solidity
hevm.reset("https://infura.io/....", 12345);
```
### Additional context
_No response_
|
test
|
allow forking from different rpcs block numbers when testing component forge describe the feature you would like currently if you want to test against mainnet state you can pass in an rpc url and block number to fork from however it isn t possible to run tests on multiple networks or fork from different block numbers for example you might want test to run against the state at block number and another at block number hardhat has a method called hardhat reset which allows you to change the rpc url block number mainnet state is forked from a cheatcode similar to that would be useful it could look like this solidity hevm reset additional context no response
| 1
|
88,102
| 11,032,197,876
|
IssuesEvent
|
2019-12-06 19:35:34
|
OfficeDev/office-ui-fabric-react
|
https://api.github.com/repos/OfficeDev/office-ui-fabric-react
|
closed
|
[Dropdown] Narrator is not announcing "Collapsed"/"Expanded" when Dropdown is collapsed/expanded
|
Area: Accessibility Area: Narrator Component: Dropdown Resolution: By Design Resolution: External
|
<!--
Before submitting an accessibility issue please ensure the following are true:
1. Search for dupes! Please make sure the issue is not already present in our issue tracker.
2. This issue is caused by a Fabric control.
3. You can reproduce this bug in a CodePen.
4. There is documentation or best practice that supports your expected behavior (review https://www.w3.org/TR/wai-aria-1.1/ for accessibility guidance.)
PLEASE NOTE:
Do not link to, screenshot or reference a Microsoft product in this description.
Our screen reader support is limited to Edge + Narrator. Please check ARIA component examples to ensure it is not a screen reader or browser issue. Issues that do not reproduce in Edge + Narrator, and aren't caused by obvious invalid aria values, should be filed with the respective screen reading software, not the Fabric repo.
Issues that do not meet these guidelines will be closed.
-->
### Environment Information
- **Package version(s)**: office-ui-fabric-react@7.67.0
- **Browser and OS versions**: Narrator + Edge
### Describe the issue:
Narrator would not announce 'collapsed' or 'expanded' when the dropdown is collapsed or expanded.
<!-- fill this out -->
### Please provide a reproduction of the issue in a codepen:
https://codepen.io/lijinglin29/pen/gObbvpg
<!--
Providing an isolated reproduction of the issue in a codepen makes it much easier for us to help you. Here are some ways to get started:
* Go to https://aka.ms/fabricpen for a starter codepen
* You can also use the "Export to Codepen" feature for the various components in our documentation site.
* See http://codepen.io/dzearing/pens/public/?grid_type=list for a variety of examples
Alternatively, you can also use https://aka.ms/fabricdemo to get permanent repro links if the repro occurs with an example.
(A permanent link is preferable to "use the website" as the website can change.)
-->
#### Actual behavior:
Narrator does not read the State(Expand/Collapse) of the dropdown
#### Expected behavior:
Narrator should read the State(Expand/Collapse) of the dropdown
|
1.0
|
[Dropdown] Narrator is not announcing "Collapsed"/"Expanded" when Dropdown is collapsed/expanded - <!--
Before submitting an accessibility issue please ensure the following are true:
1. Search for dupes! Please make sure the issue is not already present in our issue tracker.
2. This issue is caused by a Fabric control.
3. You can reproduce this bug in a CodePen.
4. There is documentation or best practice that supports your expected behavior (review https://www.w3.org/TR/wai-aria-1.1/ for accessibility guidance.)
PLEASE NOTE:
Do not link to, screenshot or reference a Microsoft product in this description.
Our screen reader support is limited to Edge + Narrator. Please check ARIA component examples to ensure it is not a screen reader or browser issue. Issues that do not reproduce in Edge + Narrator, and aren't caused by obvious invalid aria values, should be filed with the respective screen reading software, not the Fabric repo.
Issues that do not meet these guidelines will be closed.
-->
### Environment Information
- **Package version(s)**: office-ui-fabric-react@7.67.0
- **Browser and OS versions**: Narrator + Edge
### Describe the issue:
Narrator would not announce 'collapsed' or 'expanded' when the dropdown is collapsed or expanded.
<!-- fill this out -->
### Please provide a reproduction of the issue in a codepen:
https://codepen.io/lijinglin29/pen/gObbvpg
<!--
Providing an isolated reproduction of the issue in a codepen makes it much easier for us to help you. Here are some ways to get started:
* Go to https://aka.ms/fabricpen for a starter codepen
* You can also use the "Export to Codepen" feature for the various components in our documentation site.
* See http://codepen.io/dzearing/pens/public/?grid_type=list for a variety of examples
Alternatively, you can also use https://aka.ms/fabricdemo to get permanent repro links if the repro occurs with an example.
(A permanent link is preferable to "use the website" as the website can change.)
-->
#### Actual behavior:
Narrator does not read the State(Expand/Collapse) of the dropdown
#### Expected behavior:
Narrator should read the State(Expand/Collapse) of the dropdown
|
non_test
|
narrator is not announcing collapsed expanded when dropdown is collapsed expanded before submitting an accessibility issue please ensure the following are true search for dupes please make sure the issue is not already present in our issue tracker this issue is caused by a fabric control you can reproduce this bug in a codepen there is documentation or best practice that supports your expected behavior review for accessibility guidance please note do not link to screenshot or reference a microsoft product in this description our screen reader support is limited to edge narrator please check aria component examples to ensure it is not a screen reader or browser issue issues that do not reproduce in edge narrator and aren t caused by obvious invalid aria values should be filed with the respective screen reading software not the fabric repo issues that do not meet these guidelines will be closed environment information package version s office ui fabric react browser and os versions narrator edge describe the issue narrator would not announce collapsed or expanded when the dropdown is collapsed or expanded please provide a reproduction of the issue in a codepen providing an isolated reproduction of the issue in a codepen makes it much easier for us to help you here are some ways to get started go to for a starter codepen you can also use the export to codepen feature for the various components in our documentation site see for a variety of examples alternatively you can also use to get permanent repro links if the repro occurs with an example a permanent link is preferable to use the website as the website can change actual behavior narrator does not read the state expand collapse of the dropdown expected behavior narrator should read the state expand collapse of the dropdown
| 0
|
257,739
| 22,205,755,075
|
IssuesEvent
|
2022-06-07 14:45:17
|
invisibleXML/ixml
|
https://api.github.com/repos/invisibleXML/ixml
|
closed
|
Use S12 as a catch-all for "failed to parse grammar"?
|
testsuite
|
Some tests use "none" to indicate that there's no error code for the condition they raise. Over time, we may decide that there is a code (or perhaps we'll decide that we should add a code). In the short-term, I added test "rule11.ixml" to check for failure to separate rules with at least one space or comment.
That test should raise S01. However, an implementation that simply attempts to load the grammar and discovers that it doesn't match the Invisible XML grammar may not know precisely why it failed to parse. My Earley parser doesn't, for example.
I don't want to change "S01" to "none" because the test exists to check S01 and it would be misleading to obscure that fact. (I added it because I couldn't find a test that was supposed to raise S01!). But having error codes "S01 none" is pretty clearly an abuse of "none".
I settled on "S01 S12". I thought I was going to argue that we need a general error code for "failed to parse" but I think I've come to the conclusion that that is what S12 means. Am I overlooking anything?
If not, should we change the "none" results in the test suite to S12?
|
1.0
|
Use S12 as a catch-all for "failed to parse grammar"? - Some tests use "none" to indicate that there's no error code for the condition they raise. Over time, we may decide that there is a code (or perhaps we'll decide that we should add a code). In the short-term, I added test "rule11.ixml" to check for failure to separate rules with at least one space or comment.
That test should raise S01. However, an implementation that simply attempts to load the grammar and discovers that it doesn't match the Invisible XML grammar may not know precisely why it failed to parse. My Earley parser doesn't, for example.
I don't want to change "S01" to "none" because the test exists to check S01 and it would be misleading to obscure that fact. (I added it because I couldn't find a test that was supposed to raise S01!). But having error codes "S01 none" is pretty clearly an abuse of "none".
I settled on "S01 S12". I thought I was going to argue that we need a general error code for "failed to parse" but I think I've come to the conclusion that that is what S12 means. Am I overlooking anything?
If not, should we change the "none" results in the test suite to S12?
|
test
|
use as a catch all for failed to parse grammar some tests use none to indicate that there s no error code for the condition they raise over time we may decide that there is a code or perhaps we ll decide that we should add a code in the short term i added test ixml to check for failure to separate rules with at least one space or comment that test should raise however an implementation that simply attempts to load the grammar and discovers that it doesn t match the invisible xml grammar may not know precisely why it failed to parse my earley parser doesn t for example i don t want to change to none because the test exists to check and it would be misleading to obscure that fact i added it because i couldn t find a test that was supposed to raise but having error codes none is pretty clearly an abuse of none i settled on i thought i was going to argue that we need a general error code for failed to parse but i think i ve come to the conclusion that that is what means am i overlooking anything if not should we change the none results in the test suite to
| 1
|
67,519
| 7,049,794,764
|
IssuesEvent
|
2018-01-03 00:31:35
|
vmware/vsphere-storage-for-docker
|
https://api.github.com/repos/vmware/vsphere-storage-for-docker
|
closed
|
[E2E] Automate swarm test - start service without restart_policy
|
kind/test P1 wontfix
|
[To be discussed]
Have 1 primary and 1 secondary node:
1. Run a service with 1 instance. Swarm will run it on the primary node (say).
2. Kill that VM; the swarm manager role should be taken over by the secondary and the service should be restarted.
3. In the meantime, the volumes mounted on the killed VM should have been detached from the primary host.
4. Verify that the services can be successfully restarted.
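The first two steps above can be sketched with plain `docker` CLI calls; the service name `demo-svc` and image `nginx:alpine` are illustrative assumptions, not part of the original test plan:

```shell
# Hedged sketch of steps 1-2: create a one-replica swarm service with no
# restart policy, then check which node swarm scheduled it on.
# Falls back to a message when no docker swarm is available.
if command -v docker >/dev/null 2>&1 \
   && docker service create --name demo-svc --replicas 1 \
        --restart-condition none nginx:alpine 2>/dev/null; then
  docker service ps demo-svc        # shows the node the task landed on
  result="service created"
else
  result="sketch only (no docker swarm here)"
fi
echo "$result"
```

`--restart-condition none` is what "without restart_policy" maps to on the CLI; the follow-up steps (killing the VM, verifying volume detach) depend on the vSphere setup and are not sketched here.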
|
1.0
|
[E2E] Automate swarm test - start service without restart_policy - [To be discussed]
Have 1 primary and 1 secondary node:
1. Run a service with 1 instance. Swarm will run it on the primary node (say).
2. Kill that VM; the swarm manager role should be taken over by the secondary and the service should be restarted.
3. In the meantime, the volumes mounted on the killed VM should have been detached from the primary host.
4. Verify that the services can be successfully restarted.
|
test
|
automate swarm test start service without restart policy have primary and secondary node run a service with instance for it swarm will run it on the primary say kill that vm swarm manager should be taken over on the secondary and the service should be restarted in the mean time the volumes mounted on the killed vm should have been detached on the primary host verify that the services can be successfully restarted
| 1
|
186,180
| 15,049,986,315
|
IssuesEvent
|
2021-02-03 12:15:42
|
apache/buildstream
|
https://api.github.com/repos/apache/buildstream
|
opened
|
Document how to cleanup the bst cache: locally and in the server side
|
cache server documentation important to do
|
[See original issue on GitLab](https://gitlab.com/BuildStream/buildstream/-/issues/700)
In GitLab by [[Gitlab user @jjardon]](https://gitlab.com/jjardon) on Oct 9, 2018, 10:45
## Background
I cannot find any documents that explain how to do this
If the only way for now is to `rm -rf` the folder, I think It still is valuable information to actually make this process clear
## Acceptance Criteria
- [ ] How to cleanup the cache locally is documented
- [ ] How to cleanup the cache in the server is documented
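For the local side, a minimal sketch of the blunt `rm -rf` cleanup the issue mentions, assuming the default cache location `~/.cache/buildstream` (an assumption; the `cachedir` setting in your buildstream.conf may point elsewhere):

```shell
# Hedged sketch: clear the local BuildStream cache by deleting its directory.
# XDG_CACHE_HOME defaults to ~/.cache, so this is ~/.cache/buildstream
# unless configured otherwise.
CACHE_DIR="${XDG_CACHE_HOME:-$HOME/.cache}/buildstream"
du -sh "$CACHE_DIR" 2>/dev/null || echo "nothing cached at $CACHE_DIR"
rm -rf "$CACHE_DIR"               # the blunt approach from the issue text
echo "cache directory removed: $CACHE_DIR"
```

This is exactly the process the issue asks to have documented properly; server-side cleanup depends on the cache server deployment and is not sketched here.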
|
1.0
|
Document how to cleanup the bst cache: locally and in the server side - [See original issue on GitLab](https://gitlab.com/BuildStream/buildstream/-/issues/700)
In GitLab by [[Gitlab user @jjardon]](https://gitlab.com/jjardon) on Oct 9, 2018, 10:45
## Background
I cannot find any documents that explain how to do this
If the only way for now is to `rm -rf` the folder, I think It still is valuable information to actually make this process clear
## Acceptance Criteria
- [ ] How to cleanup the cache locally is documented
- [ ] How to cleanup the cache in the server is documented
|
non_test
|
document how to cleanup the bst cache locally and in the server side in gitlab by on oct background i can not find any documents that explain how to do this if the only way for now is to rm rf the folder i think it still is valuable information to actually make this process clear acceptance criteria how to cleanup the cache locally is documented how to cleanup the cache in the server is documented
| 0
|
42,188
| 17,081,901,707
|
IssuesEvent
|
2021-07-08 06:50:26
|
ctripcorp/apollo
|
https://api.github.com/repos/ctripcorp/apollo
|
closed
|
apollo configure environment copy: how to batch publish after bulk database modification
|
area/configservice area/operations area/portal kind/question stale
|
Background: We want to clone a new environment, involving multiple applications and multiple configurations.
Steps: We copied the configure database, then modified the mysql, redis, etc. configurations in the item table. We then found that each change has to be published by clicking in the portal UI. Is there any way to publish in batch, or a better way to accomplish the copy?
Notes:
1. The open API can publish via its interface, but tokens have to be authorized one by one, which is a bit cumbersome.
2. Directly modifying the contents of release would work, but there are too many releases (100k+ records) and many places to replace, so this is also cumbersome.
Please suggest a good approach.
|
1.0
|
apollo configure environment copy: how to batch publish after bulk database modification - Background: We want to clone a new environment, involving multiple applications and multiple configurations.
Steps: We copied the configure database, then modified the mysql, redis, etc. configurations in the item table. We then found that each change has to be published by clicking in the portal UI. Is there any way to publish in batch, or a better way to accomplish the copy?
Notes:
1. The open API can publish via its interface, but tokens have to be authorized one by one, which is a bit cumbersome.
2. Directly modifying the contents of release would work, but there are too many releases (100k+ records) and many places to replace, so this is also cumbersome.
Please suggest a good approach.
|
non_test
|
apollo configure environment copy how to batch publish after bulk database modification background we want to clone a new environment involving multiple applications and multiple configurations steps we copied the configure database then modified the mysql redis etc configurations in the item table then found that each change has to be published by clicking in the portal ui is there any way to publish in batch or a better way to accomplish the copy notes the open api can publish via its interface but tokens have to be authorized one by one which is a bit cumbersome directly modifying the contents of release would work but there are records and many places to replace so this is also cumbersome please suggest a good approach
| 0
|
247,105
| 20,957,109,367
|
IssuesEvent
|
2022-03-27 08:40:17
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: hibernate-spatial failed
|
C-test-failure O-robot O-roachtest release-blocker branch-release-22.1
|
roachtest.hibernate-spatial [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4690856&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4690856&tab=artifacts#/hibernate-spatial) on release-22.1 @ [1226d6f5d25b6bd41264da64fe7e7ec0db1848f2](https://github.com/cockroachdb/cockroach/commits/1226d6f5d25b6bd41264da64fe7e7ec0db1848f2):
```
|
| > Configure project :hibernate-micrometer
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :hibernate-osgi
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
| [WARN] Skipping all tests for hibernate-osgi due to Karaf/Pax-Exam issues with latest JDK 11
|
| > Configure project :hibernate-proxool
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :hibernate-spatial
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :hibernate-testing
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :hibernate-vibur
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :documentation
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :hibernate-ehcache
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :hibernate-entitymanager
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :hibernate-infinispan
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
|
| Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
| Use '--warning-mode all' to show the individual deprecation warnings.
| See https://docs.gradle.org/6.7/userguide/command_line_interface.html#sec:command_line_warnings
Wraps: (6) COMMAND_PROBLEM
Wraps: (7) Node 1. Command with error:
| ``````
| cd /mnt/data1/hibernate/hibernate-spatial/ && ./../gradlew test -Pdb=cockroachdb_spatial --tests org.hibernate.spatial.dialect.postgis.*
| ``````
Wraps: (8) exit status 1
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *cluster.WithCommandDetails (6) errors.Cmd (7) *hintdetail.withDetail (8) *exec.ExitError
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-experience
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*hibernate-spatial.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
roachtest: hibernate-spatial failed - roachtest.hibernate-spatial [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4690856&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4690856&tab=artifacts#/hibernate-spatial) on release-22.1 @ [1226d6f5d25b6bd41264da64fe7e7ec0db1848f2](https://github.com/cockroachdb/cockroach/commits/1226d6f5d25b6bd41264da64fe7e7ec0db1848f2):
```
|
| > Configure project :hibernate-micrometer
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :hibernate-osgi
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
| [WARN] Skipping all tests for hibernate-osgi due to Karaf/Pax-Exam issues with latest JDK 11
|
| > Configure project :hibernate-proxool
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :hibernate-spatial
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :hibernate-testing
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :hibernate-vibur
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :documentation
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :hibernate-ehcache
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :hibernate-entitymanager
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
| Forcing Javadoc in Java 8 compatible mode
|
| > Configure project :hibernate-infinispan
| Maven settings.xml file did not exist : /home/ubuntu/.m2/settings.xml
|
| Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
| Use '--warning-mode all' to show the individual deprecation warnings.
| See https://docs.gradle.org/6.7/userguide/command_line_interface.html#sec:command_line_warnings
Wraps: (6) COMMAND_PROBLEM
Wraps: (7) Node 1. Command with error:
| ``````
| cd /mnt/data1/hibernate/hibernate-spatial/ && ./../gradlew test -Pdb=cockroachdb_spatial --tests org.hibernate.spatial.dialect.postgis.*
| ``````
Wraps: (8) exit status 1
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *cluster.WithCommandDetails (6) errors.Cmd (7) *hintdetail.withDetail (8) *exec.ExitError
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-experience
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*hibernate-spatial.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
test
|
roachtest hibernate spatial failed roachtest hibernate spatial with on release configure project hibernate micrometer maven settings xml file did not exist home ubuntu settings xml forcing javadoc in java compatible mode configure project hibernate osgi maven settings xml file did not exist home ubuntu settings xml forcing javadoc in java compatible mode skipping all tests for hibernate osgi due to karaf pax exam issues with latest jdk configure project hibernate proxool maven settings xml file did not exist home ubuntu settings xml forcing javadoc in java compatible mode configure project hibernate spatial maven settings xml file did not exist home ubuntu settings xml forcing javadoc in java compatible mode configure project hibernate testing maven settings xml file did not exist home ubuntu settings xml forcing javadoc in java compatible mode configure project hibernate vibur maven settings xml file did not exist home ubuntu settings xml forcing javadoc in java compatible mode configure project documentation forcing javadoc in java compatible mode configure project hibernate ehcache maven settings xml file did not exist home ubuntu settings xml forcing javadoc in java compatible mode configure project hibernate entitymanager maven settings xml file did not exist home ubuntu settings xml forcing javadoc in java compatible mode configure project hibernate infinispan maven settings xml file did not exist home ubuntu settings xml deprecated gradle features were used in this build making it incompatible with gradle use warning mode all to show the individual deprecation warnings see wraps command problem wraps node command with error cd mnt hibernate hibernate spatial gradlew test pdb cockroachdb spatial tests org hibernate spatial dialect postgis wraps exit status error types withstack withstack errutil withprefix withstack withstack errutil withprefix cluster withcommanddetails errors cmd hintdetail withdetail exec exiterror help see see cc cockroachdb sql experience
| 1
|
115,215
| 11,868,597,399
|
IssuesEvent
|
2020-03-26 09:29:02
|
reapit/foundations
|
https://api.github.com/repos/reapit/foundations
|
closed
|
Update config manager and environment and config document
|
cloud-team documentation
|
Context: Because we've just migrate from reapit-config.json to config.json. We need to update document to keep it up to date.
Output:
Update config manager and environment and config document
|
1.0
|
Update config manager and environment and config document - Context: Because we've just migrate from reapit-config.json to config.json. We need to update document to keep it up to date.
Output:
Update config manager and environment and config document
|
non_test
|
update config manager and environment and config document context because we ve just migrate from reapit config json to config json we need to update document to keep it up to date output update config manager and environment and config document
| 0
|
275,036
| 23,890,533,773
|
IssuesEvent
|
2022-09-08 11:06:38
|
stores-cedcommerce/HSL-Home-page-design
|
https://api.github.com/repos/stores-cedcommerce/HSL-Home-page-design
|
closed
|
The footer section, the title of Refund
|
Footer section Desktop Content Type / typo Ready to test fixed
|
**Actual result:**


**Expected result:**
The title needs to be "Returns & Refund Policy"; we already have content for returns in the text content.
2: The URL needs to be updated for the Returns & Refund Policy.
3: The title needs to be updated for it.
|
1.0
|
The footer section, the title of Refund - **Actual result:**


**Expected result:**
The title needs to be "Returns & Refund Policy"; we already have content for returns in the text content.
2: The URL needs to be updated for the Returns & Refund Policy.
3: The title needs to be updated for it.
|
test
|
the footer section the title of refund actual result expected result the title needs to be returns refund policy we have a content for the return in the text content the url needed to be updated for the returns refund policy the title needs to be update for it
| 1
|
273,016
| 23,722,003,340
|
IssuesEvent
|
2022-08-30 16:04:22
|
stakwork/sphinx-relay
|
https://api.github.com/repos/stakwork/sphinx-relay
|
opened
|
Integration tests stalling
|
Automated Testing
|
Sometimes the integration tests stall if the docker env doesn't start up properly.
We can solve this in two ways:
1) Fail fast: rather than stalling, detect that the env didn't start up correctly and fail immediately.
2) Restart the test if the env didn't start up properly.
We could do 1 first and then do 2.
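Both ideas can be combined in a small wrapper: a timeout implements the fail-fast part, and a single retry implements the restart part. This is a generic sketch; the script name `./start-env-and-test.sh` is an assumed placeholder, not a real file in the repo:

```shell
# Hedged sketch: run a command with a time limit (fail fast instead of
# stalling), and restart it once if the first attempt fails.
retry_with_timeout() {
  _limit=$1; shift
  if timeout "$_limit" "$@"; then return 0; fi
  echo "attempt stalled or failed; restarting once" >&2
  timeout "$_limit" "$@"
}

# in CI this would be something like: retry_with_timeout 600 ./start-env-and-test.sh
retry_with_timeout 5 true && echo "retry wrapper ok"
```

Doing option 1 alone is just the `timeout` call; the wrapper adds option 2 on top.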
|
1.0
|
Integration tests stalling - Sometimes the integration tests stall if the docker env doesn't start up properly.
We can solve this in two ways:
1) Fail fast: rather than stalling, detect that the env didn't start up correctly and fail immediately.
2) Restart the test if the env didn't start up properly.
We could do 1 first and then do 2.
|
test
|
integration tests stalling sometimes the integration tests stall if the docker env doesn t start up properly we can solve this two ways fail fast so rather than stalling notice that the env didn t start up correctly and just fail restart the test if the env didn t startup properly we could do first and then do
| 1
|
61,950
| 6,762,427,895
|
IssuesEvent
|
2017-10-25 07:51:42
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
reopened
|
Coverity issue seen with CID:173657
|
area: Tests bug priority: medium
|
In File: /tests/net/icmpv6/src/main.c
Category: Null pointer dereferences
Function: run_tests
Component: Tests
Please fix it, or add a comment to dismiss it, in Coverity at the link: https://scan9.coverity.com/reports.htm#v32951/p12996
|
1.0
|
Coverity issue seen with CID:173657 - In File: /tests/net/icmpv6/src/main.c
Category: Null pointer dereferences
Function: run_tests
Component: Tests
Please fix it, or add a comment to dismiss it, in Coverity at the link: https://scan9.coverity.com/reports.htm#v32951/p12996
|
test
|
coverity issue seen with cid in file tests net src main c category null pointer dereferences function run tests component tests please fix or provide comments to square it off in coverity in the link
| 1
|
668,922
| 22,604,029,590
|
IssuesEvent
|
2022-06-29 11:44:02
|
SAP/xsk
|
https://api.github.com/repos/SAP/xsk
|
closed
|
[Runtime tests] Test hdbdd and hdbti
|
CI/CD priority-high effort-high supportability
|
### Target
1. Extend the following sample https://github.com/SAP/xsk/tree/main/samples/hdb-hdbti-simple by adding
1.1. xsjs - GET which selects the data from the database and returns it in a JSON response
1.2. xsodata service - expose the entity via XSOData
2. Use the framework prepared in #1406 to deploy the sample in the pre-defined XSK environment
3. Execute requests and validate the response:
3.1. Calling the xsjs service must return a JSON array in which the objects need to match the ones in the hdbti
3.2. Calling the xsodata service must return an XML response with the objects matching the ones in the hdbti
4. Drop the schema after the test - this would require credentials to the HANA to be passed in the env as well
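Step 3 can be sketched as a small probe script that classifies each response; the host, service paths, and entity name below are placeholder assumptions, not the sample's real URLs:

```shell
# Hedged sketch of step 3: call the xsjs and xsodata endpoints and check
# that the payloads look like a JSON array and an XML document respectively.
BASE="${XSK_BASE:-http://localhost:8080}"   # assumed XSK host, not verified

json=$(curl -sf "$BASE/hdb-hdbti-simple/service.xsjs" 2>/dev/null || true)
case "$json" in
  "["*) xsjs_check="JSON array as expected" ;;
  *)    xsjs_check="unreachable or not a JSON array" ;;
esac
echo "xsjs: $xsjs_check"

xml=$(curl -sf "$BASE/hdb-hdbti-simple/service.xsodata/Entity" 2>/dev/null || true)
case "$xml" in
  "<"*) xsodata_check="XML as expected" ;;
  *)    xsodata_check="unreachable or not XML" ;;
esac
echo "xsodata: $xsodata_check"
```

Matching the returned objects against the hdbti contents, and the schema drop of step 4, would need the actual sample data and HANA credentials, so they are left out of the sketch.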
|
1.0
|
[Runtime tests] Test hdbdd and hdbti - ### Target
1. Extend the following sample https://github.com/SAP/xsk/tree/main/samples/hdb-hdbti-simple by adding
1.1. xsjs - GET which selects the data from the database and returns it in a JSON response
1.2. xsodata service - expose the entity via XSOData
2. Use the framework prepared in #1406 to deploy the sample in the pre-defined XSK environment
3. Execute requests and validate the response:
3.1. Calling the xsjs service must return a JSON array in which the objects need to match the ones in the hdbti
3.2. Calling the xsodata service must return an XML response with the objects matching the ones in the hdbti
4. Drop the schema after the test - this would require credentials to the HANA to be passed in the env as well
|
non_test
|
test hdbdd and hdbti target extend the following sample by adding xsjs get which selects the data from the database and returns it in a json response xsodata service expose the entity via xsodata use the framework prepared in to deploy the sample in the pre defined xsk environment execute requests and validate the response calling the xsjs service must return a json array in which the objects need to match the ones in the hdbti calling the xsodata service must return an xml response with the objects matching the ones in the hdbti drop the schema after the test this would require credentials to the hana to be passed in the env as well
| 0
|
190,821
| 14,579,434,164
|
IssuesEvent
|
2020-12-18 07:17:35
|
whileTrue0x90/challenge-bot-test
|
https://api.github.com/repos/whileTrue0x90/challenge-bot-test
|
closed
|
zxc
|
challenge-program picked sig/test
|
## Description
issue background
## Tasks
task 1
task 2
task 3
## Score
100
## Mentor
whileTrue0x90
## Recommended Skills
skill 1
skill 2
skill 3
## Learning Materials(optional)
Learning
Materials
|
1.0
|
zxc - ## Description
issue background
## Tasks
task 1
task 2
task 3
## Score
100
## Mentor
whileTrue0x90
## Recommended Skills
skill 1
skill 2
skill 3
## Learning Materials(optional)
Learning
Materials
|
test
|
zxc description issue background tasks task task task score mentor recommended skills skill skill skill learning materials optional learning materials
| 1
|
71,014
| 7,225,749,807
|
IssuesEvent
|
2018-02-10 00:43:39
|
istio/istio
|
https://api.github.com/repos/istio/istio
|
closed
|
prow/istio-pilot-e2e.sh — Job failed.
|
area/test and release ci/prow kind/test-failure
|
<!--
Please see https://istio.io/help and if you are a user of Istio, please file issues in
https://github.com/istio/issues/issues instead of here.
Only confirmed, triaged and labelled issues should be filed here.
Please add the correct labels and epics (and priority and milestones if you have that information)
-->
https://storage.googleapis.com/istio-prow/pull/istio_istio/3169/istio-pilot-e2e/2541/build-log.txt
```
I0207 17:48:50.171] * (auth infra) routing-rules-to-egress run 0 redirect traffic from httpbin.org/post to httpbin.org/get attempt 0 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.171] * (auth infra) routing-rules-to-egress run 0 redirect traffic from httpbin.org/post to httpbin.org/get attempt 1 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.171] * (auth infra) routing-rules-to-egress run 0 redirect traffic from httpbin.org/post to httpbin.org/get attempt 2 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.171] * (auth infra) routing-rules-to-egress run 0 redirect traffic from httpbin.org/post to *.httpbin.org/get attempt 0 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.171] * (auth infra) routing-rules-to-egress run 0 redirect traffic from httpbin.org/post to *.httpbin.org/get attempt 1 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.171] * (auth infra) routing-rules-to-egress run 0 redirect traffic from httpbin.org/post to *.httpbin.org/get attempt 2 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.172] * (auth infra) routing-rules-to-egress run 0 redirect traffic from *.httpbin.org/post to httpbin.org/get attempt 0 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.172] * (auth infra) routing-rules-to-egress run 0 redirect traffic from *.httpbin.org/post to httpbin.org/get attempt 1 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.172] * (auth infra) routing-rules-to-egress run 0 redirect traffic from *.httpbin.org/post to httpbin.org/get attempt 2 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.172] * (auth infra) routing-rules-to-egress run 0 redirect traffic from nghttp2.org/post to httpbin.org/get attempt 0 redirect verification failed: response status code: [404], expected 200
I0207 17:48:50.172] * (auth infra) routing-rules-to-egress run 0 redirect traffic from nghttp2.org/post to httpbin.org/get attempt 1 redirect verification failed: response status code: [404], expected 200
I0207 17:48:50.172] * (auth infra) routing-rules-to-egress run 0 redirect traffic from nghttp2.org/post to httpbin.org/get attempt 2 redirect verification failed: response status code: [404], expected 200
I0207 17:48:50.173] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from httpbin.org/post to httpbin.org/get attempt 0 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.173] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from httpbin.org/post to httpbin.org/get attempt 1 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.173] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from httpbin.org/post to httpbin.org/get attempt 2 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.173] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from httpbin.org/post to *.httpbin.org/get attempt 0 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.173] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from httpbin.org/post to *.httpbin.org/get attempt 1 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.173] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from httpbin.org/post to *.httpbin.org/get attempt 2 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.174] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from *.httpbin.org/post to httpbin.org/get attempt 0 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.174] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from *.httpbin.org/post to httpbin.org/get attempt 1 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.174] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from *.httpbin.org/post to httpbin.org/get attempt 2 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.174] * (auth infra) zipkin run 0 failed all 90 attempts for Ensure traces are picked up by Zipkin
I0207 17:48:50.174] * (auth infra) kubernetes-external-name-services run 0 failed all 90 attempts for HTTP connection from b to externalbin.istio-test-app-g6n8n
I0207 17:48:50.174]
I0207 17:48:50.174]
I0207 17:48:50.174] tests/istio.mk:80: recipe for target 'e2e_pilot' failed
W0207 17:48:50.275] Error from server (NotFound): error when deleting "STDIN": services "istio-sidecar-injector" not found
W0207 17:48:50.275] Error from server (NotFound): error when deleting "STDIN": serviceaccounts "istio-sidecar-injector-service-account" not found
W0207 17:48:50.275] Error from server (NotFound): error when stopping "STDIN": deployments.extensions "istio-sidecar-injector" not found
W0207 17:48:50.275] make: *** [e2e_pilot] Error 255
```
|
2.0
|
prow/istio-pilot-e2e.sh — Job failed. - <!--
Please see https://istio.io/help and if you are a user of Istio, please file issues in
https://github.com/istio/issues/issues instead of here.
Only confirmed, triaged and labelled issues should be filed here.
Please add the correct labels and epics (and priority and milestones if you have that information)
-->
https://storage.googleapis.com/istio-prow/pull/istio_istio/3169/istio-pilot-e2e/2541/build-log.txt
```
I0207 17:48:50.171] * (auth infra) routing-rules-to-egress run 0 redirect traffic from httpbin.org/post to httpbin.org/get attempt 0 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.171] * (auth infra) routing-rules-to-egress run 0 redirect traffic from httpbin.org/post to httpbin.org/get attempt 1 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.171] * (auth infra) routing-rules-to-egress run 0 redirect traffic from httpbin.org/post to httpbin.org/get attempt 2 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.171] * (auth infra) routing-rules-to-egress run 0 redirect traffic from httpbin.org/post to *.httpbin.org/get attempt 0 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.171] * (auth infra) routing-rules-to-egress run 0 redirect traffic from httpbin.org/post to *.httpbin.org/get attempt 1 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.171] * (auth infra) routing-rules-to-egress run 0 redirect traffic from httpbin.org/post to *.httpbin.org/get attempt 2 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.172] * (auth infra) routing-rules-to-egress run 0 redirect traffic from *.httpbin.org/post to httpbin.org/get attempt 0 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.172] * (auth infra) routing-rules-to-egress run 0 redirect traffic from *.httpbin.org/post to httpbin.org/get attempt 1 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.172] * (auth infra) routing-rules-to-egress run 0 redirect traffic from *.httpbin.org/post to httpbin.org/get attempt 2 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.172] * (auth infra) routing-rules-to-egress run 0 redirect traffic from nghttp2.org/post to httpbin.org/get attempt 0 redirect verification failed: response status code: [404], expected 200
I0207 17:48:50.172] * (auth infra) routing-rules-to-egress run 0 redirect traffic from nghttp2.org/post to httpbin.org/get attempt 1 redirect verification failed: response status code: [404], expected 200
I0207 17:48:50.172] * (auth infra) routing-rules-to-egress run 0 redirect traffic from nghttp2.org/post to httpbin.org/get attempt 2 redirect verification failed: response status code: [404], expected 200
I0207 17:48:50.173] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from httpbin.org/post to httpbin.org/get attempt 0 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.173] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from httpbin.org/post to httpbin.org/get attempt 1 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.173] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from httpbin.org/post to httpbin.org/get attempt 2 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.173] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from httpbin.org/post to *.httpbin.org/get attempt 0 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.173] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from httpbin.org/post to *.httpbin.org/get attempt 1 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.173] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from httpbin.org/post to *.httpbin.org/get attempt 2 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.174] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from *.httpbin.org/post to httpbin.org/get attempt 0 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.174] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from *.httpbin.org/post to httpbin.org/get attempt 1 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.174] * (auth infra) routing-rules-to-egress run 0 rewrite traffic from *.httpbin.org/post to httpbin.org/get attempt 2 redirect verification failed: response status code: [405], expected 200
I0207 17:48:50.174] * (auth infra) zipkin run 0 failed all 90 attempts for Ensure traces are picked up by Zipkin
I0207 17:48:50.174] * (auth infra) kubernetes-external-name-services run 0 failed all 90 attempts for HTTP connection from b to externalbin.istio-test-app-g6n8n
I0207 17:48:50.174]
I0207 17:48:50.174]
I0207 17:48:50.174] tests/istio.mk:80: recipe for target 'e2e_pilot' failed
W0207 17:48:50.275] Error from server (NotFound): error when deleting "STDIN": services "istio-sidecar-injector" not found
W0207 17:48:50.275] Error from server (NotFound): error when deleting "STDIN": serviceaccounts "istio-sidecar-injector-service-account" not found
W0207 17:48:50.275] Error from server (NotFound): error when stopping "STDIN": deployments.extensions "istio-sidecar-injector" not found
W0207 17:48:50.275] make: *** [e2e_pilot] Error 255
```
|
test
|
prow istio pilot sh — job failed please see and if you are a user of istio please file issues in instead of here only confirmed triaged and labelled issues should be filed here please add the correct labels and epics and priority and milestones if you have that information auth infra routing rules to egress run redirect traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run redirect traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run redirect traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run redirect traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run redirect traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run redirect traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run redirect traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run redirect traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run redirect traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run redirect traffic from org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run redirect traffic from org post to httpbin org get 
attempt redirect verification failed response status code expected auth infra routing rules to egress run redirect traffic from org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run rewrite traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run rewrite traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run rewrite traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run rewrite traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run rewrite traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run rewrite traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run rewrite traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run rewrite traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra routing rules to egress run rewrite traffic from httpbin org post to httpbin org get attempt redirect verification failed response status code expected auth infra zipkin run failed all attempts for ensure traces are picked up by zipkin auth infra kubernetes external name services run failed all attempts for http connection from b to externalbin istio test app tests istio mk recipe for target pilot failed error from server notfound error when deleting stdin 
services istio sidecar injector not found error from server notfound error when deleting stdin serviceaccounts istio sidecar injector service account not found error from server notfound error when stopping stdin deployments extensions istio sidecar injector not found make error
| 1
|
36,220
| 7,868,512,382
|
IssuesEvent
|
2018-06-23 23:02:02
|
StrikeNP/trac_test
|
https://api.github.com/repos/StrikeNP/trac_test
|
closed
|
Rename csr_matrix_class_3array.F90 to csr_matrix_class.F90 (Trac #712)
|
Migrated from Trac clubb_src defect schemena@uwm.edu
|
I noticed while working on https://github.com/larson-group/clubb/issues/680 that csr_matrix_class_3array.F90 does not follow our coding standards. The module is named differently than the file. I will rename the file to csr_matrix_class.F90. However, because our naming convention tends to be "files that end with '_module'", should this be further renamed from csr_matrix_class to csr_matrix_module? Is there an even better name perhaps?
Attachments:
[plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff)
[plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff)
[plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff)
[plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff)
[plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff)
[plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff)
[plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff)
[plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff)
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/712
```json
{
"status": "closed",
"changetime": "2014-07-08T15:04:58",
"description": "I noticed while working on clubb:ticket:680 that csr_matrix_class_3array.F90 does not follow our coding standards. The module is named differently than the file. I will rename the file to csr_matrix_class.F90. However, because our naming convention tends to be \"files that end with '_module'\", should this be further renamed from csr_matrix_class to csr_matrix_module? Is there an even better name perhaps?",
"reporter": "schemena@uwm.edu",
"cc": "vlarson@uwm.edu",
"resolution": "fixed",
"_ts": "1404831898149981",
"component": "clubb_src",
"summary": "Rename csr_matrix_class_3array.F90 to csr_matrix_class.F90",
"priority": "minor",
"keywords": "",
"time": "2014-07-03T20:19:04",
"milestone": "",
"owner": "schemena@uwm.edu",
"type": "defect"
}
```
|
1.0
|
Rename csr_matrix_class_3array.F90 to csr_matrix_class.F90 (Trac #712) - I noticed while working on https://github.com/larson-group/clubb/issues/680 that csr_matrix_class_3array.F90 does not follow our coding standards. The module is named differently than the file. I will rename the file to csr_matrix_class.F90. However, because our naming convention tends to be "files that end with '_module'", should this be further renamed from csr_matrix_class to csr_matrix_module? Is there an even better name perhaps?
Attachments:
[plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff)
[plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff)
[plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff)
[plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff)
[plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff)
[plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff)
[plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff)
[plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff)
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/712
```json
{
"status": "closed",
"changetime": "2014-07-08T15:04:58",
"description": "I noticed while working on clubb:ticket:680 that csr_matrix_class_3array.F90 does not follow our coding standards. The module is named differently than the file. I will rename the file to csr_matrix_class.F90. However, because our naming convention tends to be \"files that end with '_module'\", should this be further renamed from csr_matrix_class to csr_matrix_module? Is there an even better name perhaps?",
"reporter": "schemena@uwm.edu",
"cc": "vlarson@uwm.edu",
"resolution": "fixed",
"_ts": "1404831898149981",
"component": "clubb_src",
"summary": "Rename csr_matrix_class_3array.F90 to csr_matrix_class.F90",
"priority": "minor",
"keywords": "",
"time": "2014-07-03T20:19:04",
"milestone": "",
"owner": "schemena@uwm.edu",
"type": "defect"
}
```
|
non_test
|
rename csr matrix class to csr matrix class trac i noticed while working on that csr matrix class does not follow our coding standards the module is named differently than the file i will rename the file to csr matrix class however because our naming convention tends to be files that end with module should this be further renamed from csr matrix class to csr matrix module is there an even better name perhaps attachments migrated from json status closed changetime description i noticed while working on clubb ticket that csr matrix class does not follow our coding standards the module is named differently than the file i will rename the file to csr matrix class however because our naming convention tends to be files that end with module should this be further renamed from csr matrix class to csr matrix module is there an even better name perhaps reporter schemena uwm edu cc vlarson uwm edu resolution fixed ts component clubb src summary rename csr matrix class to csr matrix class priority minor keywords time milestone owner schemena uwm edu type defect
| 0
|
241,093
| 26,256,634,028
|
IssuesEvent
|
2023-01-06 01:43:31
|
Kijacode/dotfiles
|
https://api.github.com/repos/Kijacode/dotfiles
|
closed
|
CVE-2021-35065 (High) detected in glob-parent-5.1.2.tgz - autoclosed
|
security vulnerability
|
## CVE-2021-35065 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>glob-parent-5.1.2.tgz</b></p></summary>
<p>Extract the non-magic parent path from a glob string.</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz</a></p>
<p>Path to dependency file: /.vscode/extensions/hanwang.android-adb-wlan-0.0.5/package.json</p>
<p>Path to vulnerable library: /.vscode/extensions/hanwang.android-adb-wlan-0.0.5/node_modules/glob-parent/package.json,/.vscode/extensions/orta.vscode-jest-4.0.3/node_modules/glob-parent/package.json,/.vscode/extensions/mtxr.sqltools-driver-sqlite-0.2.0/node_modules/glob-parent/package.json,/.vscode/extensions/alexcvzz.vscode-sqlite-0.13.0/node_modules/glob-parent/package.json,/.vscode/extensions/ajhyndman.jslint-1.2.1/node_modules/glob-parent/package.json,/.vscode/extensions/equinusocio.vsc-material-theme-33.1.2/node_modules/glob-parent/package.json,/.vscode/extensions/benawad.stories-2.22.0/node_modules/glob-parent/package.json,/.vscode/extensions/dbaeumer.vscode-eslint-2.1.23/node_modules/glob-parent/package.json,/.vscode/extensions/esbenp.prettier-vscode-8.1.0/node_modules/glob-parent/package.json,/.vscode/extensions/rvest.vs-code-prettier-eslint-3.0.4/node_modules/glob-parent/package.json,/.vscode/extensions/vscjava.vscode-maven-0.32.1/node_modules/glob-parent/package.json,/.vscode/extensions/uber.baseweb-9.108.0/node_modules/watchpack/node_modules/glob-parent/package.json,/.vscode/extensions/ms-azuretools.vscode-docker-1.15.0/node_modules/glob-parent/package.json,/.vscode/extensions/alexiv.vscode-angular2-files-1.6.4/node_modules/glob-parent/package.json,/.vscode/extensions/vscjava.vscode-java-test-0.30.1/node_modules/glob-parent/package.json,/.vscode/extensions/eamodio.gitlens-11.6.0/node_modules/glob-parent/package.json,/.vscode/extensions/dart-code.dart-code-3.25.1/node_modules/glob-parent/package.json,/.vscode/extensions/fabiospampinato.vscode-todo-plus-4.18.3/node_modules/chokidar/node_modules/glob-parent/package.json,/.vscode/extensions/maurodesouza.vscode-simple-readme-1.0.1/node_modules/glob-parent/package.json,/.vscode/extensions/foxundermoon.shell-format-7.1.0/node_modules/glob-parent/package.json,/.vscode/extensions/ihsanis.scrcpy-0.0.8/node_modules/glob-parent/package.json,/.vscode/extensions/pkief.material-icon-theme-4.8.0/node_modules/glob-paren
t/package.json,/.vscode/extensions/vscjava.vscode-java-dependency-0.18.6/node_modules/glob-parent/package.json,/.vscode/extensions/angulardoc.angulardoc-vscode-6.1.3/node_modules/watchpack/node_modules/glob-parent/package.json,/.vscode/extensions/mhutchie.git-graph-1.30.0/node_modules/glob-parent/package.json,/.vscode/extensions/wallabyjs.quokka-vscode-1.0.386/node_modules/glob-parent/package.json,/.vscode/extensions/kim-sardine.statusbar-quotes-21.6.13/node_modules/glob-parent/package.json,/.vscode/extensions/github.vscode-pull-request-github-0.28.0/node_modules/glob-parent/package.json,/.vscode/extensions/arcadable.arcadable-emulator-2.0.0/node_modules/glob-parent/package.json,/.vscode/extensions/bmalehorn.shell-syntax-1.0.2/node_modules/glob-parent/package.json,/.vscode/extensions/chadonsom.auto-view-readme-0.0.6/node_modules/glob-parent/package.json,/.vscode/extensions/msjsdiag.debugger-for-chrome-4.12.12/node_modules/mocha/node_modules/glob-parent/package.json,/.vscode/extensions/ghmcadams.lintlens-3.0.0/node_modules/glob-parent/package.json,/.vscode/extensions/wallabyjs.quokka-vscode-1.0.386/dist/wallaby/node_modules/glob-parent/package.json,/.vscode/extensions/vscjava.vscode-java-pack-0.18.2/node_modules/glob-parent/package.json,/.vscode/extensions/firsttris.vscode-jest-runner-0.4.44/node_modules/glob-parent/package.json,/.vscode/extensions/diemasmichiels.emulate-1.4.0/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.30.0.tgz (Root Library)
- :x: **glob-parent-5.1.2.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package glob-parent from 6.0.0 and before 6.0.1 are vulnerable to Regular Expression Denial of Service (ReDoS)
<p>Publish Date: 2021-06-22
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-35065>CVE-2021-35065</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-cj88-88mr-972w">https://github.com/advisories/GHSA-cj88-88mr-972w</a></p>
<p>Release Date: 2021-06-22</p>
<p>Fix Resolution: glob-parent - 6.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-35065 (High) detected in glob-parent-5.1.2.tgz - autoclosed - ## CVE-2021-35065 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>glob-parent-5.1.2.tgz</b></p></summary>
<p>Extract the non-magic parent path from a glob string.</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz</a></p>
<p>Path to dependency file: /.vscode/extensions/hanwang.android-adb-wlan-0.0.5/package.json</p>
<p>Path to vulnerable library: /.vscode/extensions/hanwang.android-adb-wlan-0.0.5/node_modules/glob-parent/package.json,/.vscode/extensions/orta.vscode-jest-4.0.3/node_modules/glob-parent/package.json,/.vscode/extensions/mtxr.sqltools-driver-sqlite-0.2.0/node_modules/glob-parent/package.json,/.vscode/extensions/alexcvzz.vscode-sqlite-0.13.0/node_modules/glob-parent/package.json,/.vscode/extensions/ajhyndman.jslint-1.2.1/node_modules/glob-parent/package.json,/.vscode/extensions/equinusocio.vsc-material-theme-33.1.2/node_modules/glob-parent/package.json,/.vscode/extensions/benawad.stories-2.22.0/node_modules/glob-parent/package.json,/.vscode/extensions/dbaeumer.vscode-eslint-2.1.23/node_modules/glob-parent/package.json,/.vscode/extensions/esbenp.prettier-vscode-8.1.0/node_modules/glob-parent/package.json,/.vscode/extensions/rvest.vs-code-prettier-eslint-3.0.4/node_modules/glob-parent/package.json,/.vscode/extensions/vscjava.vscode-maven-0.32.1/node_modules/glob-parent/package.json,/.vscode/extensions/uber.baseweb-9.108.0/node_modules/watchpack/node_modules/glob-parent/package.json,/.vscode/extensions/ms-azuretools.vscode-docker-1.15.0/node_modules/glob-parent/package.json,/.vscode/extensions/alexiv.vscode-angular2-files-1.6.4/node_modules/glob-parent/package.json,/.vscode/extensions/vscjava.vscode-java-test-0.30.1/node_modules/glob-parent/package.json,/.vscode/extensions/eamodio.gitlens-11.6.0/node_modules/glob-parent/package.json,/.vscode/extensions/dart-code.dart-code-3.25.1/node_modules/glob-parent/package.json,/.vscode/extensions/fabiospampinato.vscode-todo-plus-4.18.3/node_modules/chokidar/node_modules/glob-parent/package.json,/.vscode/extensions/maurodesouza.vscode-simple-readme-1.0.1/node_modules/glob-parent/package.json,/.vscode/extensions/foxundermoon.shell-format-7.1.0/node_modules/glob-parent/package.json,/.vscode/extensions/ihsanis.scrcpy-0.0.8/node_modules/glob-parent/package.json,/.vscode/extensions/pkief.material-icon-theme-4.8.0/node_modules/glob-paren
t/package.json,/.vscode/extensions/vscjava.vscode-java-dependency-0.18.6/node_modules/glob-parent/package.json,/.vscode/extensions/angulardoc.angulardoc-vscode-6.1.3/node_modules/watchpack/node_modules/glob-parent/package.json,/.vscode/extensions/mhutchie.git-graph-1.30.0/node_modules/glob-parent/package.json,/.vscode/extensions/wallabyjs.quokka-vscode-1.0.386/node_modules/glob-parent/package.json,/.vscode/extensions/kim-sardine.statusbar-quotes-21.6.13/node_modules/glob-parent/package.json,/.vscode/extensions/github.vscode-pull-request-github-0.28.0/node_modules/glob-parent/package.json,/.vscode/extensions/arcadable.arcadable-emulator-2.0.0/node_modules/glob-parent/package.json,/.vscode/extensions/bmalehorn.shell-syntax-1.0.2/node_modules/glob-parent/package.json,/.vscode/extensions/chadonsom.auto-view-readme-0.0.6/node_modules/glob-parent/package.json,/.vscode/extensions/msjsdiag.debugger-for-chrome-4.12.12/node_modules/mocha/node_modules/glob-parent/package.json,/.vscode/extensions/ghmcadams.lintlens-3.0.0/node_modules/glob-parent/package.json,/.vscode/extensions/wallabyjs.quokka-vscode-1.0.386/dist/wallaby/node_modules/glob-parent/package.json,/.vscode/extensions/vscjava.vscode-java-pack-0.18.2/node_modules/glob-parent/package.json,/.vscode/extensions/firsttris.vscode-jest-runner-0.4.44/node_modules/glob-parent/package.json,/.vscode/extensions/diemasmichiels.emulate-1.4.0/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.30.0.tgz (Root Library)
- :x: **glob-parent-5.1.2.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package glob-parent from 6.0.0 and before 6.0.1 are vulnerable to Regular Expression Denial of Service (ReDoS)
<p>Publish Date: 2021-06-22
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-35065>CVE-2021-35065</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-cj88-88mr-972w">https://github.com/advisories/GHSA-cj88-88mr-972w</a></p>
<p>Release Date: 2021-06-22</p>
<p>Fix Resolution: glob-parent - 6.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in glob parent tgz autoclosed cve high severity vulnerability vulnerable library glob parent tgz extract the non magic parent path from a glob string library home page a href path to dependency file vscode extensions hanwang android adb wlan package json path to vulnerable library vscode extensions hanwang android adb wlan node modules glob parent package json vscode extensions orta vscode jest node modules glob parent package json vscode extensions mtxr sqltools driver sqlite node modules glob parent package json vscode extensions alexcvzz vscode sqlite node modules glob parent package json vscode extensions ajhyndman jslint node modules glob parent package json vscode extensions equinusocio vsc material theme node modules glob parent package json vscode extensions benawad stories node modules glob parent package json vscode extensions dbaeumer vscode eslint node modules glob parent package json vscode extensions esbenp prettier vscode node modules glob parent package json vscode extensions rvest vs code prettier eslint node modules glob parent package json vscode extensions vscjava vscode maven node modules glob parent package json vscode extensions uber baseweb node modules watchpack node modules glob parent package json vscode extensions ms azuretools vscode docker node modules glob parent package json vscode extensions alexiv vscode files node modules glob parent package json vscode extensions vscjava vscode java test node modules glob parent package json vscode extensions eamodio gitlens node modules glob parent package json vscode extensions dart code dart code node modules glob parent package json vscode extensions fabiospampinato vscode todo plus node modules chokidar node modules glob parent package json vscode extensions maurodesouza vscode simple readme node modules glob parent package json vscode extensions foxundermoon shell format node modules glob parent package json vscode extensions ihsanis scrcpy node modules glob parent package 
json vscode extensions pkief material icon theme node modules glob parent package json vscode extensions vscjava vscode java dependency node modules glob parent package json vscode extensions angulardoc angulardoc vscode node modules watchpack node modules glob parent package json vscode extensions mhutchie git graph node modules glob parent package json vscode extensions wallabyjs quokka vscode node modules glob parent package json vscode extensions kim sardine statusbar quotes node modules glob parent package json vscode extensions github vscode pull request github node modules glob parent package json vscode extensions arcadable arcadable emulator node modules glob parent package json vscode extensions bmalehorn shell syntax node modules glob parent package json vscode extensions chadonsom auto view readme node modules glob parent package json vscode extensions msjsdiag debugger for chrome node modules mocha node modules glob parent package json vscode extensions ghmcadams lintlens node modules glob parent package json vscode extensions wallabyjs quokka vscode dist wallaby node modules glob parent package json vscode extensions vscjava vscode java pack node modules glob parent package json vscode extensions firsttris vscode jest runner node modules glob parent package json vscode extensions diemasmichiels emulate node modules glob parent package json dependency hierarchy eslint tgz root library x glob parent tgz vulnerable library found in base branch main vulnerability details the package glob parent from and before are vulnerable to regular expression denial of service redos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix 
resolution glob parent step up your open source security game with mend
| 0
|
23,779
| 3,851,867,778
|
IssuesEvent
|
2016-04-06 05:28:58
|
GPF/imame4all
|
https://api.github.com/repos/GPF/imame4all
|
closed
|
no available games found
|
auto-migrated Priority-Medium Type-Defect
|
```
I added some rom in zip files to the /sdcard/ROMs/MAME4All/roms
I get the same message : "no available games found"
I have SE i10 - Xperia Mini
```
Original issue reported on code.google.com by `infodbsy...@gmail.com` on 22 Oct 2011 at 3:36
|
1.0
|
no available games found - ```
I added some rom in zip files to the /sdcard/ROMs/MAME4All/roms
I get the same message : "no available games found"
I have SE i10 - Xperia Mini
```
Original issue reported on code.google.com by `infodbsy...@gmail.com` on 22 Oct 2011 at 3:36
|
non_test
|
no available games found i added some rom in zip files to the sdcard roms roms i get the same message no available games found i have se xperia mini original issue reported on code google com by infodbsy gmail com on oct at
| 0
|
35,445
| 4,982,585,273
|
IssuesEvent
|
2016-12-07 11:52:35
|
Kademi/kademi-dev
|
https://api.github.com/repos/Kademi/kademi-dev
|
closed
|
Import and remove user does not work correctly
|
bug Ready to Test - Dev
|
1: Import hundreds or thousands of user, I always must re-index to get the correct number of profiles.
2: Remove hundreds of user but on manage user page, total number of enable profile would reduce by only 100 proflies.
|
1.0
|
Import and remove user does not work correctly - 1: Import hundreds or thousands of user, I always must re-index to get the correct number of profiles.
2: Remove hundreds of user but on manage user page, total number of enable profile would reduce by only 100 proflies.
|
test
|
import and remove user does not work correctly import hundreds or thousands of user i always must re index to get the correct number of profiles remove hundreds of user but on manage user page total number of enable profile would reduce by only proflies
| 1
|
324,497
| 27,809,085,930
|
IssuesEvent
|
2023-03-18 00:09:41
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
opened
|
Fix elementwise.test_acos
|
Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4450550616/jobs/7816143868" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4450550616/jobs/7816143868" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4450550616/jobs/7816143868" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4450550616/jobs/7816143868" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_functional/test_core/test_elementwise.py::test_acos[cpu-ivy.functional.backends.jax-False-False]</summary>
2023-03-17T19:06:46.4164740Z E hypothesis.errors.InvalidArgument: Cannot sample from <hypothesis.strategies._internal.core.CompositeStrategy object at 0x7f1e78514d60>, not an ordered collection. Hypothesis goes to some length to ensure that the sampled_from strategy has stable results between runs. To replay a saved example, the sampled values must have the same iteration order on every run - ruling out sets, dicts, etc due to hash randomization. Most cases can simply use `sorted(values)`, but mixed types or special values such as math.nan require careful handling - and note that when simplifying an example, Hypothesis treats earlier values as simpler.
2023-03-17T19:06:46.4167953Z E hypothesis.errors.Flaky: Inconsistent data generation! Data generation behaved differently between different runs. Is your data generation depending on external state?
</details>
|
1.0
|
Fix elementwise.test_acos - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4450550616/jobs/7816143868" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4450550616/jobs/7816143868" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4450550616/jobs/7816143868" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4450550616/jobs/7816143868" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_functional/test_core/test_elementwise.py::test_acos[cpu-ivy.functional.backends.jax-False-False]</summary>
2023-03-17T19:06:46.4164740Z E hypothesis.errors.InvalidArgument: Cannot sample from <hypothesis.strategies._internal.core.CompositeStrategy object at 0x7f1e78514d60>, not an ordered collection. Hypothesis goes to some length to ensure that the sampled_from strategy has stable results between runs. To replay a saved example, the sampled values must have the same iteration order on every run - ruling out sets, dicts, etc due to hash randomization. Most cases can simply use `sorted(values)`, but mixed types or special values such as math.nan require careful handling - and note that when simplifying an example, Hypothesis treats earlier values as simpler.
2023-03-17T19:06:46.4167953Z E hypothesis.errors.Flaky: Inconsistent data generation! Data generation behaved differently between different runs. Is your data generation depending on external state?
</details>
|
test
|
fix elementwise test acos tensorflow img src torch img src numpy img src jax img src failed ivy tests test ivy test functional test core test elementwise py test acos e hypothesis errors invalidargument cannot sample from not an ordered collection hypothesis goes to some length to ensure that the sampled from strategy has stable results between runs to replay a saved example the sampled values must have the same iteration order on every run ruling out sets dicts etc due to hash randomization most cases can simply use sorted values but mixed types or special values such as math nan require careful handling and note that when simplifying an example hypothesis treats earlier values as simpler e hypothesis errors flaky inconsistent data generation data generation behaved differently between different runs is your data generation depending on external state
| 1
|
331,558
| 29,042,305,328
|
IssuesEvent
|
2023-05-13 05:06:29
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
[project-monitoring] Helm Chart crashs with disabled AlertManager
|
internal [zube]: To Test team/area3
|
**Describe the bug**
The Helm Chart has the [option](https://github.com/rancher/prometheus-federator/blob/main/charts/rancher-project-monitoring/0.2.1/values.yaml#L139) to disable the AlertManager. If this value is set (`enabled: false`) Helm crashs due the broken yaml file for alertmanager.yaml
<!--A clear and concise description of what the bug is.-->
**To Reproduce**
disable AlertManager
<!--Steps to reproduce the behavior-->
**Result**
Helm crashes
**Expected Result**
AlertManager is completely disabled and will not be installed
<!--A clear and concise description of what you expected to happen.-->
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem.-->
**Additional context**
Looking deeper into why this does not happen in the [rancher-monitoring Chart](https://github.com/rancher/charts/blob/release-v2.7/charts/rancher-monitoring/101.0.0%2Bup19.0.3/templates/alertmanager/alertmanager.yaml):
This [patch file](https://github.com/rancher/prometheus-federator/blob/main/packages/rancher-project-monitoring/generated-changes/patch/templates/alertmanager/alertmanager.yaml.patch) sets the `end` tag of the first condition at line 73 instead of at the end of the file. As a result you get a broken alertmanager.yaml which can't be parsed by Helm.
Just wondering why this patch logic is a good idea, because it's complicated to debug and easily causes errors.
|
1.0
|
[project-monitoring] Helm Chart crashes with disabled AlertManager - **Describe the bug**
The Helm Chart has the [option](https://github.com/rancher/prometheus-federator/blob/main/charts/rancher-project-monitoring/0.2.1/values.yaml#L139) to disable the AlertManager. If this value is set (`enabled: false`), Helm crashes due to the broken YAML file for alertmanager.yaml
<!--A clear and concise description of what the bug is.-->
**To Reproduce**
disable AlertManager
<!--Steps to reproduce the behavior-->
**Result**
Helm crashes
**Expected Result**
AlertManager is completely disabled and will not be installed
<!--A clear and concise description of what you expected to happen.-->
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem.-->
**Additional context**
Looking deeper into why this does not happen in the [rancher-monitoring Chart](https://github.com/rancher/charts/blob/release-v2.7/charts/rancher-monitoring/101.0.0%2Bup19.0.3/templates/alertmanager/alertmanager.yaml):
This [patch file](https://github.com/rancher/prometheus-federator/blob/main/packages/rancher-project-monitoring/generated-changes/patch/templates/alertmanager/alertmanager.yaml.patch) sets the `end` tag of the first condition at line 73 instead of at the end of the file. As a result you get a broken alertmanager.yaml which can't be parsed by Helm.
Just wondering why this patch logic is a good idea, because it's complicated to debug and easily causes errors.
|
test
|
helm chart crashs with disabled alertmanager describe the bug the helm chart has the to disable the alertmanager if this value is set enabled false helm crashs due the broken yaml file for alertmanager yaml to reproduce disable alertmanager result helm crashs expected result alertmanager is completely disabled and will not installed screenshots additional context in the deeper look and why this is not happen in the this sets the end tag of the first condition in line instead at the end of the file as a result you become a broken alertmanager yaml which can t parse by helm just wondering why this patch logic is a good idea because it s complicated to debug and caused easy to an error
| 1
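The patch problem described in the record above — the first condition's `end` tag emitted at line 73 instead of at the end of the file — can be sketched with a hypothetical Helm template (field names assumed, not taken from the actual chart):

```yaml
# Broken shape: the guard closes mid-file, leaving the rest of the
# manifest outside the condition, so disabling it yields invalid YAML.
#   {{- if .Values.alertmanager.enabled }}
#   ...
#   {{- end }}        <- closed too early, at line 73
#   ...rest of template rendered unconditionally...
#
# Correct shape: wrap the entire manifest so the closing end is last
# and nothing is rendered when the component is disabled.
{{- if .Values.alertmanager.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: {{ .Release.Name }}-alertmanager
{{- end }}
```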
|
533,360
| 15,589,527,875
|
IssuesEvent
|
2021-03-18 08:10:35
|
Devops-ohtuprojekti/DevOpsCSAOS
|
https://api.github.com/repos/Devops-ohtuprojekti/DevOpsCSAOS
|
closed
|
Clean HTML
|
Priority 2 enhancement project wide
|
- semantics: ensure proper tags are used
- use accessibility (a11y) tools to check everything is navigable through screenreader
- general HTML cleanup: remove unnecessary nested elements
This task is open for anyone to participate in
|
1.0
|
Clean HTML - - semantics: ensure proper tags are used
- use accessibility (a11y) tools to check everything is navigable through screenreader
- general HTML cleanup: remove unnecessary nested elements
This task is open for anyone to participate in
|
non_test
|
clean html semantics ensure proper tags are used use accessibility tools to check everything is navigable through screenreader general html cleanup remove unnecessary nested elements this task is open for anyone to participate in
| 0
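The cleanup tasks in the record above (proper tags, fewer nested elements) amount to changes like this hypothetical before/after:

```html
<!-- before: generic nested divs carry no meaning for screen readers -->
<div class="nav"><div class="item"><a href="/">Home</a></div></div>

<!-- after: one semantic element conveys the same content and role -->
<nav><a href="/">Home</a></nav>
```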
|
340,120
| 30,493,118,529
|
IssuesEvent
|
2023-07-18 09:02:23
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: acceptance/decommission-self failed
|
C-test-failure O-robot O-roachtest branch-master release-blocker T-kv
|
roachtest.acceptance/decommission-self [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10950435?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10950435?buildTab=artifacts#/acceptance/decommission-self) on master @ [7675ca4998134028f0623e04737b5cb69fcc33a9](https://github.com/cockroachdb/cockroach/commits/7675ca4998134028f0623e04737b5cb69fcc33a9):
```
(cluster.go:2180).Start: ~ COCKROACH_CONNECT_TIMEOUT=1200 ././cockroach sql --url 'postgres://root@localhost:26257?sslmode=disable' -e "CREATE SCHEDULE IF NOT EXISTS test_only_backup FOR BACKUP INTO 'gs://cockroach-backup-testing-private/roachprod-scheduled-backups/teamcity-10950435-1689659335-01-n4cpu4/1689659647424684540?AUTH=implicit' RECURRING '*/15 * * * *' FULL BACKUP '@hourly' WITH SCHEDULE OPTIONS first_run = 'now'"
ERROR: unexpected error occurred when checking for existing backups in gs://cockroach-backup-testing-private/roachprod-scheduled-backups/teamcity-10950435-1689659335-01-n4cpu4/1689659647424684540?AUTH=implicit: unable to list files in gcs bucket: googleapi: Error 403: 21965078311-compute@developer.gserviceaccount.com does not have storage.objects.list access to the Google Cloud Storage bucket. Permission 'storage.objects.list' denied on resource (or it may not exist).
SQLSTATE: 58030
Failed running "sql": COMMAND_PROBLEM: exit status 1
test artifacts and logs in: /artifacts/acceptance/decommission-self/run_1
```
<p>Parameters: <code>ROACHTEST_arch=amd64</code>
, <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/kv-triage
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*acceptance/decommission-self.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
roachtest: acceptance/decommission-self failed - roachtest.acceptance/decommission-self [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10950435?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10950435?buildTab=artifacts#/acceptance/decommission-self) on master @ [7675ca4998134028f0623e04737b5cb69fcc33a9](https://github.com/cockroachdb/cockroach/commits/7675ca4998134028f0623e04737b5cb69fcc33a9):
```
(cluster.go:2180).Start: ~ COCKROACH_CONNECT_TIMEOUT=1200 ././cockroach sql --url 'postgres://root@localhost:26257?sslmode=disable' -e "CREATE SCHEDULE IF NOT EXISTS test_only_backup FOR BACKUP INTO 'gs://cockroach-backup-testing-private/roachprod-scheduled-backups/teamcity-10950435-1689659335-01-n4cpu4/1689659647424684540?AUTH=implicit' RECURRING '*/15 * * * *' FULL BACKUP '@hourly' WITH SCHEDULE OPTIONS first_run = 'now'"
ERROR: unexpected error occurred when checking for existing backups in gs://cockroach-backup-testing-private/roachprod-scheduled-backups/teamcity-10950435-1689659335-01-n4cpu4/1689659647424684540?AUTH=implicit: unable to list files in gcs bucket: googleapi: Error 403: 21965078311-compute@developer.gserviceaccount.com does not have storage.objects.list access to the Google Cloud Storage bucket. Permission 'storage.objects.list' denied on resource (or it may not exist).
SQLSTATE: 58030
Failed running "sql": COMMAND_PROBLEM: exit status 1
test artifacts and logs in: /artifacts/acceptance/decommission-self/run_1
```
<p>Parameters: <code>ROACHTEST_arch=amd64</code>
, <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/kv-triage
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*acceptance/decommission-self.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
test
|
roachtest acceptance decommission self failed roachtest acceptance decommission self with on master cluster go start cockroach connect timeout cockroach sql url postgres root localhost sslmode disable e create schedule if not exists test only backup for backup into gs cockroach backup testing private roachprod scheduled backups teamcity auth implicit recurring full backup hourly with schedule options first run now error unexpected error occurred when checking for existing backups in gs cockroach backup testing private roachprod scheduled backups teamcity auth implicit unable to list files in gcs bucket googleapi error compute developer gserviceaccount com does not have storage objects list access to the google cloud storage bucket permission storage objects list denied on resource or it may not exist sqlstate failed running sql command problem exit status test artifacts and logs in artifacts acceptance decommission self run parameters roachtest arch roachtest cloud gce roachtest cpu roachtest encrypted false roachtest fs roachtest localssd true roachtest ssd help see see cc cockroachdb kv triage
| 1
|
132,918
| 10,773,385,320
|
IssuesEvent
|
2019-11-02 20:19:28
|
mattimaier/bnote
|
https://api.github.com/repos/mattimaier/bnote
|
closed
|
Module selection for BNote app
|
User Request enhancement to test
|
The ability to assign individual modules to members (or not) in the main program is very good. It would be important for this to work for the app as well.
Specifically: because of the GDPR I have not assigned the members module, since otherwise everyone can see all contact data. In the app, however, all data is still displayed.
|
1.0
|
Module selection for BNote app - The ability to assign individual modules to members (or not) in the main program is very good. It would be important for this to work for the app as well.
Specifically: because of the GDPR I have not assigned the members module, since otherwise everyone can see all contact data. In the app, however, all data is still displayed.
|
test
|
module selection for bnote app the ability to assign individual modules to members or not in the main program is very good it would be important for this to work for the app as well specifically because of gdpr i have not assigned the members module since otherwise everyone can see all contact data in the app however all data is still displayed
| 1
|
432,552
| 30,287,508,849
|
IssuesEvent
|
2023-07-08 21:46:46
|
meyruiz/MACS-Soen6011summer2023
|
https://api.github.com/repos/meyruiz/MACS-Soen6011summer2023
|
opened
|
Sprint 2 - Proposed Plan for Next Sprint
|
documentation
|
## Open issues or plan for next Sprint (Sprint 2)
In Sprint 2, the proposed focus could be on the completion of the User Registration and Authentication, Job Posting, and Student Profile Creation features, if they are not fully completed during Sprint 1. We could also begin work on more advanced features of the Job Posting and Management, and the Student/Candidate Profile Management.
#### 1. Completion of User Registration and Authentication
If any tasks related to User Registration and Authentication are not completed in Sprint 1, they should be finished in Sprint 2.
#### 2. Completion of Job Posting
If any tasks related to Job Posting are not completed in Sprint 1, they should be finished in Sprint 2.
#### 3. Completion of Student Profile Creation
If any tasks related to Student Profile Creation are not completed in Sprint 1, they should be finished in Sprint 2.
#### 4. Advanced Job Posting and Management
Depending on the progress made during Sprint 1, we can start working on more advanced features in the Job Posting and Management user stories. This could include tasks such as:
- As an employer, I can review applications submitted by individuals who apply for a specific job posting so that I can assess their qualifications, skills, and experiences in relation to the job requirements.
- As an employer, I can track job posts to see the candidates who applied.
- As an employer, I can offer an interview opportunity to a candidate who is satisfied with the job description
- As an employer, I can reject an application from a candidate so that I can effectively manage the hiring process and provide timely feedback to applicants
- As an employer, I can receive a notification when a candidate has applied to a job posting so that I can respond to the candidate as soon as possible.
#### 5. Advanced Student/Candidate Profile Management
Depending on the progress made during Sprint 1, we can start working on more advanced features in the Student/Candidate Profile Management user stories. This could include tasks such as:
- As a student/candidate, I can update my profile so that it reflects my latest personal information and job experience
- As a student/candidate, I can browse all the available job postings so that I can choose the one that matches my skill set.
- As a student/candidate, I can track an application that I applied for before so that I will know the status of the application
#### 6. Start work on Notification System
We can start laying the groundwork for the Notification System in Sprint 2, with the aim of fully implementing it in Sprint 3. Tasks could include:
- Design and implement the notifications UI (Front-end).
- Implement notification creation and delivery mechanisms (Back-end).
- Implement APIs to retrieve notifications (Back-end).
|
1.0
|
Sprint 2 - Proposed Plan for Next Sprint - ## Open issues or plan for next Sprint (Sprint 2)
In Sprint 2, the proposed focus could be on the completion of the User Registration and Authentication, Job Posting, and Student Profile Creation features, if they are not fully completed during Sprint 1. We could also begin work on more advanced features of the Job Posting and Management, and the Student/Candidate Profile Management.
#### 1. Completion of User Registration and Authentication
If any tasks related to User Registration and Authentication are not completed in Sprint 1, they should be finished in Sprint 2.
#### 2. Completion of Job Posting
If any tasks related to Job Posting are not completed in Sprint 1, they should be finished in Sprint 2.
#### 3. Completion of Student Profile Creation
If any tasks related to Student Profile Creation are not completed in Sprint 1, they should be finished in Sprint 2.
#### 4. Advanced Job Posting and Management
Depending on the progress made during Sprint 1, we can start working on more advanced features in the Job Posting and Management user stories. This could include tasks such as:
- As an employer, I can review applications submitted by individuals who apply for a specific job posting so that I can assess their qualifications, skills, and experiences in relation to the job requirements.
- As an employer, I can track job posts to see the candidates who applied.
- As an employer, I can offer an interview opportunity to a candidate who is satisfied with the job description
- As an employer, I can reject an application from a candidate so that I can effectively manage the hiring process and provide timely feedback to applicants
- As an employer, I can receive a notification when a candidate has applied to a job posting so that I can respond to the candidate as soon as possible.
#### 5. Advanced Student/Candidate Profile Management
Depending on the progress made during Sprint 1, we can start working on more advanced features in the Student/Candidate Profile Management user stories. This could include tasks such as:
- As a student/candidate, I can update my profile so that it reflects my latest personal information and job experience
- As a student/candidate, I can browse all the available job postings so that I can choose the one that matches my skill set.
- As a student/candidate, I can track an application that I applied for before so that I will know the status of the application
#### 6. Start work on Notification System
We can start laying the groundwork for the Notification System in Sprint 2, with the aim of fully implementing it in Sprint 3. Tasks could include:
- Design and implement the notifications UI (Front-end).
- Implement notification creation and delivery mechanisms (Back-end).
- Implement APIs to retrieve notifications (Back-end).
|
non_test
|
sprint proposed plan for next sprint open issues or plan for next sprint sprint in sprint the proposed focus could be on the completion of the user registration and authentication job posting and student profile creation features if they are not fully completed during sprint we could also begin work on more advanced features of the job posting and management and the student candidate profile management completion of user registration and authentication if any tasks related to user registration and authentication are not completed in sprint they should be finished in sprint completion of job posting if any tasks related to job posting are not completed in sprint they should be finished in sprint completion of student profile creation if any tasks related to student profile creation are not completed in sprint they should be finished in sprint advanced job posting and management depending on the progress made during sprint we can start working on more advanced features in the job posting and management user stories this could include tasks such as as an employer i can review applications submitted by individuals who apply for a specific job posting so that i can assess their qualifications skills and experiences in relation to the job requirements as an employer i can track job posts to see the candidates who applied as an employer i can offer an interview opportunity to a candidate who is satisfied with the job description as an employer i can reject an application from a candidate so that i can effectively manage the hiring process and provide timely feedback to applicants as an employer i can receive a notification when a candidate has applied to a job posting so that i can respond to the candidate as soon as possible advanced student candidate profile management depending on the progress made during sprint we can start working on more advanced features in the student candidate profile management user stories this could include tasks such as as a student candidate 
i can update my profile so that it reflects my latest personal information and job experience as a student candidate i can browse all the available job postings so that i can choose the one match with my skill set as a student candidate i can track an application that i apply for before so that i will the status of the application start work on notification system we can start laying the groundwork for the notification system in sprint with the aim of fully implementing it in sprint tasks could include design and implement the notifications ui front end implement notification creation and delivery mechanisms back end implement apis to retrieve notifications back end
| 0
|
212,811
| 23,956,016,374
|
IssuesEvent
|
2022-09-12 15:01:09
|
quarkusio/quarkus
|
https://api.github.com/repos/quarkusio/quarkus
|
closed
|
native image JWT sign java.net.MalformedURLException: no protocol
|
kind/bug area/security area/smallrye area/native-image
|
### Describe the bug
I have an application that creates a JWT and signs it. The following error occurs:
`java.net.MalformedURLException: no protocol: privateKey.pem `
When built as a Docker image, everything works.
**application.properties**
```
quarkus.smallrye-jwt.enabled=true
mp.jwt.verify.publickey.location=publicKey.pem
smallrye.jwt.sign.key.location=privateKey.pem
```
### Expected behavior
_No response_
### Actual behavior
_No response_
### How to Reproduce?
_No response_
### Output of `uname -a` or `ver`
_No response_
### Output of `java -version`
_No response_
### GraalVM version (if different from Java)
22.2
### Quarkus version or git rev
2.11.2
### Build tool (ie. output of `mvnw --version` or `gradlew --version`)
_No response_
### Additional information
_No response_
|
True
|
native image JWT sign java.net.MalformedURLException: no protocol - ### Describe the bug
I have an application that creates a JWT and signs it. The following error occurs:
`java.net.MalformedURLException: no protocol: privateKey.pem `
When built as a Docker image, everything works.
**application.properties**
```
quarkus.smallrye-jwt.enabled=true
mp.jwt.verify.publickey.location=publicKey.pem
smallrye.jwt.sign.key.location=privateKey.pem
```
### Expected behavior
_No response_
### Actual behavior
_No response_
### How to Reproduce?
_No response_
### Output of `uname -a` or `ver`
_No response_
### Output of `java -version`
_No response_
### GraalVM version (if different from Java)
22.2
### Quarkus version or git rev
2.11.2
### Build tool (ie. output of `mvnw --version` or `gradlew --version`)
_No response_
### Additional information
_No response_
|
non_test
|
native image jwt sign java net malformedurlexception no protocol describe the bug i have an application that creates a jwt and signs it the following error occurs java net malformedurlexception no protocol privatekey pem as a docker image build everything works application properties quarkus smallrye jwt enabled true mp jwt verify publickey location publickey pem smallrye jwt sign key location privatekey pem expected behavior no response actual behavior no response how to reproduce no response output of uname a or ver no response output of java version no response graalvm version if different from java quarkus version or git rev build tool ie output of mvnw version or gradlew version no response additional information no response
| 0
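For the native-image location error in the record above, one hedged workaround sketch — `quarkus.native.resources.includes` is standard Quarkus config, but applying it to this app (and the `classpath:` scheme resolving the sign key) is an assumption, not a verified fix — is to bake the PEM files into the image and give the location an explicit scheme:

```properties
# Assumption: include the key files as resources in the native image,
# so they are resolvable at runtime rather than falling through to a
# bare-filename URL parse (the MalformedURLException above).
quarkus.native.resources.includes=publicKey.pem,privateKey.pem
mp.jwt.verify.publickey.location=publicKey.pem
smallrye.jwt.sign.key.location=classpath:privateKey.pem
```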
|
29,122
| 7,058,321,611
|
IssuesEvent
|
2018-01-04 19:53:43
|
Microsoft/TypeScript
|
https://api.github.com/repos/Microsoft/TypeScript
|
closed
|
Code Helper using ~100% CPU
|
External VS Code Tracked
|
_From @coreh on December 9, 2017 0:26_
<!-- Do you have a question? Please ask it on http://stackoverflow.com/questions/tagged/vscode. -->
<!-- Use Help > Report Issues to prefill these. -->
- VSCode Version: 1.18.1
- OS Version: 10.12.6
I'm getting "Code Helper" stuck at ~100% CPU
<img width="815" alt="screen shot 2017-12-08 at 22 18 24" src="https://user-images.githubusercontent.com/418473/33790326-67568234-dc66-11e7-804a-17e67186907f.png">
```
ps a~ $ ps aux | grep 65462
coreh 65462 100.0 1.7 3492052 281544 ?? R 5:33PM 128:06.61 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper /Applications/Visual Studio Code.app/Contents/Resources/app/extensions/typescript/out/utils/electronForkStart.js /Users/coreh/Projects/zimboli/node_modules/typescript/lib/tsserver.js --useInferredProjectPerProjectRoot --enableTelemetry --cancellationPipeName /var/folders/m1/vk9trwwj3pz25vv0_jt7xtsc0000gn/T/vscode-tscancellation-da48568b3a6dd7891de0.sock* --locale en
```
Steps to Reproduce:
1. Use VS Code for a sufficiently long time (say, several hours) on a large enough project.
2. A "Code Helper" process will start to use 100% CPU. Quitting VSCode does not terminate the Code Helper process, it needs to be killed from the Activity Monitor
<!-- Launch with `code --disable-extensions` to check. -->
Reproduces without extensions: _Probably_ not (?)
(It doesn't trigger immediately, or consistently, but rather after several hours using the App, and I haven't been able to conclude if it will not trigger without extensions. However the `ps` output seems to indicate that the offending file is `/Applications/Visual Studio Code.app/Contents/Resources/app/extensions/typescript/out/utils/electronForkStart.js`, so it's the built in TypeScript extension?)
Thanks in advance!
_Copied from original issue: Microsoft/vscode#39949_
|
1.0
|
Code Helper using ~100% CPU - _From @coreh on December 9, 2017 0:26_
<!-- Do you have a question? Please ask it on http://stackoverflow.com/questions/tagged/vscode. -->
<!-- Use Help > Report Issues to prefill these. -->
- VSCode Version: 1.18.1
- OS Version: 10.12.6
I'm getting "Code Helper" stuck at ~100% CPU
<img width="815" alt="screen shot 2017-12-08 at 22 18 24" src="https://user-images.githubusercontent.com/418473/33790326-67568234-dc66-11e7-804a-17e67186907f.png">
```
ps a~ $ ps aux | grep 65462
coreh 65462 100.0 1.7 3492052 281544 ?? R 5:33PM 128:06.61 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper /Applications/Visual Studio Code.app/Contents/Resources/app/extensions/typescript/out/utils/electronForkStart.js /Users/coreh/Projects/zimboli/node_modules/typescript/lib/tsserver.js --useInferredProjectPerProjectRoot --enableTelemetry --cancellationPipeName /var/folders/m1/vk9trwwj3pz25vv0_jt7xtsc0000gn/T/vscode-tscancellation-da48568b3a6dd7891de0.sock* --locale en
```
Steps to Reproduce:
1. Use VS Code for a sufficiently long time (say, several hours) on a large enough project.
2. A "Code Helper" process will start to use 100% CPU. Quitting VSCode does not terminate the Code Helper process, it needs to be killed from the Activity Monitor
<!-- Launch with `code --disable-extensions` to check. -->
Reproduces without extensions: _Probably_ not (?)
(It doesn't trigger immediately, or consistently, but rather after several hours using the App, and I haven't been able to conclude if it will not trigger without extensions. However the `ps` output seems to indicate that the offending file is `/Applications/Visual Studio Code.app/Contents/Resources/app/extensions/typescript/out/utils/electronForkStart.js`, so it's the built in TypeScript extension?)
Thanks in advance!
_Copied from original issue: Microsoft/vscode#39949_
|
non_test
|
code helper using cpu from coreh on december report issues to prefill these vscode version os version i m getting code helper stuck at cpu img width alt screen shot at src ps a ps aux grep coreh r applications visual studio code app contents frameworks code helper app contents macos code helper applications visual studio code app contents resources app extensions typescript out utils electronforkstart js users coreh projects zimboli node modules typescript lib tsserver js useinferredprojectperprojectroot enabletelemetry cancellationpipename var folders t vscode tscancellation sock locale en steps to reproduce use vs code for a suficiently long time say several hours on a large enough project a code helper process will start to use cpu quitting vscode does not terminate the code helper process it needs to be killed from the activity monitor reproduces without extensions probably not it doesn t trigger immediately or consistently but rather after several hours using the app and i haven t been able to conclude if it will not trigger without extensions however the ps output seems to indicate that the offending file is applications visual studio code app contents resources app extensions typescript out utils electronforkstart js so it s the built in typescript extension thanks in advance copied from original issue microsoft vscode
| 0
|
56,487
| 6,520,625,246
|
IssuesEvent
|
2017-08-28 17:16:36
|
3Blades/react-frontend
|
https://api.github.com/repos/3Blades/react-frontend
|
closed
|
Dev & Demo -- Cannot Delete Resource or Project
|
chore fix test
|
I've tested in both Dev and Demo under the apitester users.
Deleting a resource and Project seem to be broken.
|
1.0
|
Dev & Demo -- Cannot Delete Resource or Project - I've tested in both Dev and Demo under the apitester users.
Deleting a resource and Project seem to be broken.
|
test
|
dev demo cannot delete resource or project i ve tested in both dev and demo under the apitester users deleting a resource and project seem to be broken
| 1
|
145,069
| 19,319,027,328
|
IssuesEvent
|
2021-12-14 01:51:24
|
peterwkc85/selenium-jupiter
|
https://api.github.com/repos/peterwkc85/selenium-jupiter
|
opened
|
CVE-2020-36179 (High) detected in jackson-databind-2.9.8.jar
|
security vulnerability
|
## CVE-2020-36179 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /selenium-jupiter/build.gradle</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,/root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- docker-client-8.15.2.jar (Root Library)
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to oadd.org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS.
<p>Publish Date: 2021-01-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36179>CVE-2020-36179</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/3004">https://github.com/FasterXML/jackson-databind/issues/3004</a></p>
<p>Release Date: 2021-01-07</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-36179 (High) detected in jackson-databind-2.9.8.jar - ## CVE-2020-36179 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /selenium-jupiter/build.gradle</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,/root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- docker-client-8.15.2.jar (Root Library)
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to oadd.org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS.
<p>Publish Date: 2021-01-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36179>CVE-2020-36179</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/3004">https://github.com/FasterXML/jackson-databind/issues/3004</a></p>
<p>Release Date: 2021-01-07</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file selenium jupiter build gradle path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy docker client jar root library x jackson databind jar vulnerable library vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to oadd org apache commons dbcp cpdsadapter driveradaptercpds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
| 0
|
217,078
| 16,835,544,174
|
IssuesEvent
|
2021-06-18 11:32:08
|
lidofinance/lido-dao
|
https://api.github.com/repos/lidofinance/lido-dao
|
closed
|
Add gas usage reporter
|
tests
|
Implement a script that would calculate and save to a file gas usage of the most important transactions:
* Submitting 1 ETH.
* Submitting 32 ETH.
* Depositing 32 ETH, 10 validators.
* Depositing 320 ETH, 10 validators.
* An Oracle reporting beacon data (quorum not reached).
* An Oracle reporting rewards, 10 validators (quorum reached by the report).
* An Oracle reporting balance decrease, 10 validators (quorum reached by the report).
* Submitting 20 keys by a Node Operator.
Ideally, this script should be invoked as part of the test suite.
|
1.0
|
Add gas usage reporter - Implement a script that would calculate and save to a file gas usage of the most important transactions:
* Submitting 1 ETH.
* Submitting 32 ETH.
* Depositing 32 ETH, 10 validators.
* Depositing 320 ETH, 10 validators.
* An Oracle reporting beacon data (quorum not reached).
* An Oracle reporting rewards, 10 validators (quorum reached by the report).
* An Oracle reporting balance decrease, 10 validators (quorum reached by the report).
* Submitting 20 keys by a Node Operator.
Ideally, this script should be invoked as part of the test suite.
|
test
|
add gas usage reporter implement a script that would calculate and save to a file gas usage of the most important transactions submitting eth submitting eth depositing eth validators depositing eth validators an oracle reporting beacon data quorum not reached an oracle reporting rewards validators quorum reached by the report an oracle reporting balance decrease validators quorum reached by the report submitting keys by a node operator ideally this script should be invoked as part of the test suite
| 1
|
429,549
| 12,425,976,529
|
IssuesEvent
|
2020-05-24 18:53:54
|
bounswe/bounswe2020group4
|
https://api.github.com/repos/bounswe/bounswe2020group4
|
closed
|
Nearest Hospital Endpoint Null Response
|
Bug Priority: Critical Status: Blocked Status: Needs Review
|
Nearest Hospital endpoint returns null response when after it receives the body.
Steps to reproduce the behavior:
1. Go to Postman
2. Set lat, long and radius values to 41.0862, 29.0444, 2000, respectively.
3. Scroll down to result.
4. Count should be 0, and the names array should be empty
It is expected to receive nearest hospitals. For these values, "Baltalimanı Family Health Center", "Üsküdar University NP Etiler Medical Center" is expected as the correct response with count value being equal to "2".
The screenshot for the test in postman is below.

|
1.0
|
Nearest Hospital Endpoint Null Response - Nearest Hospital endpoint returns null response when after it receives the body.
Steps to reproduce the behavior:
1. Go to Postman
2. Set lat, long and radius values to 41.0862, 29.0444, 2000, respectively.
3. Scroll down to result.
4. Count should be 0, and the names array should be empty
It is expected to receive nearest hospitals. For these values, "Baltalimanı Family Health Center", "Üsküdar University NP Etiler Medical Center" is expected as the correct response with count value being equal to "2".
The screenshot for the test in postman is below.

|
non_test
|
nearest hospital endpoint null response nearest hospital endpoint returns null response when after it receives the body steps to reproduce the behavior go to postman set lat long and radius values to respectively scroll down to result count should be and the names array should be empty it is expected to receive nearest hospitals for these values baltalimanı family health center üsküdar university np etiler medical center is expected as the correct response with count value being equal to the screenshot for the test in postman is below
| 0
|
17,022
| 3,590,096,885
|
IssuesEvent
|
2016-02-01 02:26:28
|
sass/libsass
|
https://api.github.com/repos/sass/libsass
|
opened
|
Regression in parent reference interpolation in selector
|
Bug - Confirmed Bug - Regression Dev - Needs Test Release - Blocker
|
Original report https://github.com/sass/node-sass/issues/1365
```sass
.foo {
&--#{'bar'} {
color: red;
}
}
```
LibSass 3.3.2
```css
.foo--bar {
color: red; }
```
LibSass 3.3.3
```
Error: Invalid CSS after ".foo {": expected "}", was "&--#{$bar} {"
on line 3 of test.scss
>> .foo {
```
|
1.0
|
Regression in parent reference interpolation in selector - Original report https://github.com/sass/node-sass/issues/1365
```sass
.foo {
&--#{'bar'} {
color: red;
}
}
```
LibSass 3.3.2
```css
.foo--bar {
color: red; }
```
LibSass 3.3.3
```
Error: Invalid CSS after ".foo {": expected "}", was "&--#{$bar} {"
on line 3 of test.scss
>> .foo {
```
|
test
|
regression in parent reference interpolation in selector original report sass foo bar color red libsass css foo bar color red libsass error invalid css after foo expected was bar on line of test scss foo
| 1
|
116,886
| 9,887,053,390
|
IssuesEvent
|
2019-06-25 08:20:55
|
zeroc-ice/ice
|
https://api.github.com/repos/zeroc-ice/ice
|
closed
|
Ice/exceptions failures on archlinux
|
testsuite
|
The test is failing with several language mappings based on C++.
```
[ running client/server with 1.0 encoding test - 03/06/19 13:51:33 ]
- Config: tcp,cpp11-shared
(/home/vagrant/workspace/ice/3.7/cpp/test/Ice/exceptions/build/x86_64/cpp11-shared/server --Ice.Default.Host=127.0.0.1 --Test.BasePort=14000 --Ice.Warn.Connections=1 --Ice.Default.Protocol=tcp --Ice.IPv6=0 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.EncodingVersion=1.0 --Ice.ThreadPool.Server.Size=1 --Ice.ThreadPool.Server.SizeMax=3 --Ice.ThreadPool.Server.SizeWarn=0 --Ice.PrintAdapterReady=1 env={'LD_LIBRARY_PATH': '/home/vagrant/workspace/ice/3.7/cpp/lib'})
(/home/vagrant/workspace/ice/3.7/cpp/test/Ice/exceptions/build/x86_64/cpp11-shared/client --Ice.Default.Host=127.0.0.1 --Test.BasePort=14000 --Ice.Warn.Connections=1 --Ice.Default.Protocol=tcp --Ice.IPv6=0 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.EncodingVersion=1.0 env={'LD_LIBRARY_PATH': '/home/vagrant/workspace/ice/3.7/cpp/lib'})
testing ice_print()/what()... ok
testing object adapter registration exceptions... ok
testing servant registration exceptions... ok
testing servant locator registrations exceptions... ok
testing value factory registration exception... ok
testing stringToProxy... ok
testing checked cast... ok
catching exact types... ok
catching base types... ok
catching derived types... ok
catching unknown user exception... ok
testing memory limit marshal exception...src/Ice/ConnectionI.cpp:673: ::Ice::ConnectionTimeoutException:
connection has timed out
failed!
test/Ice/exceptions/AllTests.cpp:995: assertion `false' failed
```
```
[ running client/server with sliced format test - 03/06/19 13:48:41 ]
- Config: tcp,shared,es5
("/usr/bin/python" /home/vagrant/workspace/ice/3.7/python/test/TestHelper.py Server --Ice.Default.Host=127.0.0.1 --Test.BasePort=14100 --Ice.Warn.Connections=1 --Ice.Default.Protocol=tcp --Ice.IPv6=0 --Ice.Default.SlicedFormat=1 --Ice.ThreadPool.Server.Size=1 --Ice.ThreadPool.Server.SizeMax=3 --Ice.ThreadPool.Server.SizeWarn=0 --Ice.PrintAdapterReady=1 env={'LD_LIBRARY_PATH': '/home/vagrant/workspace/ice/3.7/cpp/lib', 'PYTHONPATH': '/home/vagrant/workspace/ice/3.7/python/python:/home/vagrant/workspace/ice/3.7/python/test/Ice/exceptions'})
("/usr/bin/python" /home/vagrant/workspace/ice/3.7/python/test/TestHelper.py Client --Ice.Default.Host=127.0.0.1 --Test.BasePort=14100 --Ice.Warn.Connections=1 --Ice.Default.Protocol=tcp --Ice.IPv6=0 --Ice.Default.SlicedFormat=1 env={'LD_LIBRARY_PATH': '/home/vagrant/workspace/ice/3.7/cpp/lib', 'PYTHONPATH': '/home/vagrant/workspace/ice/3.7/python/python:/home/vagrant/workspace/ice/3.7/python/test/Ice/exceptions'})
testing servant registration exceptions... ok
testing servant locator registrations exceptions... ok
testing object factory registration exception... ok
testing stringToProxy... ok
testing checked cast... ok
catching exact types... ok
catching base types... ok
catching derived types... ok
catching unknown user exception... ok
testing memory limit marshal exception...Traceback (most recent call last):
File "/home/vagrant/workspace/ice/3.7/python/test/Ice/exceptions/AllTests.py", line 475, in allTests
thrower.throwMemoryLimitException(bytearray(20 * 1024)) # 20KB
File "Test.ice", line 381, in throwMemoryLimitException
Ice.ConnectionTimeoutException: exception ::Ice::ConnectionTimeoutException
{
}
```
|
1.0
|
Ice/exceptions failures on archlinux - The test is failing with several language mappings based on C++.
```
[ running client/server with 1.0 encoding test - 03/06/19 13:51:33 ]
- Config: tcp,cpp11-shared
(/home/vagrant/workspace/ice/3.7/cpp/test/Ice/exceptions/build/x86_64/cpp11-shared/server --Ice.Default.Host=127.0.0.1 --Test.BasePort=14000 --Ice.Warn.Connections=1 --Ice.Default.Protocol=tcp --Ice.IPv6=0 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.EncodingVersion=1.0 --Ice.ThreadPool.Server.Size=1 --Ice.ThreadPool.Server.SizeMax=3 --Ice.ThreadPool.Server.SizeWarn=0 --Ice.PrintAdapterReady=1 env={'LD_LIBRARY_PATH': '/home/vagrant/workspace/ice/3.7/cpp/lib'})
(/home/vagrant/workspace/ice/3.7/cpp/test/Ice/exceptions/build/x86_64/cpp11-shared/client --Ice.Default.Host=127.0.0.1 --Test.BasePort=14000 --Ice.Warn.Connections=1 --Ice.Default.Protocol=tcp --Ice.IPv6=0 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.EncodingVersion=1.0 env={'LD_LIBRARY_PATH': '/home/vagrant/workspace/ice/3.7/cpp/lib'})
testing ice_print()/what()... ok
testing object adapter registration exceptions... ok
testing servant registration exceptions... ok
testing servant locator registrations exceptions... ok
testing value factory registration exception... ok
testing stringToProxy... ok
testing checked cast... ok
catching exact types... ok
catching base types... ok
catching derived types... ok
catching unknown user exception... ok
testing memory limit marshal exception...src/Ice/ConnectionI.cpp:673: ::Ice::ConnectionTimeoutException:
connection has timed out
failed!
test/Ice/exceptions/AllTests.cpp:995: assertion `false' failed
```
```
[ running client/server with sliced format test - 03/06/19 13:48:41 ]
- Config: tcp,shared,es5
("/usr/bin/python" /home/vagrant/workspace/ice/3.7/python/test/TestHelper.py Server --Ice.Default.Host=127.0.0.1 --Test.BasePort=14100 --Ice.Warn.Connections=1 --Ice.Default.Protocol=tcp --Ice.IPv6=0 --Ice.Default.SlicedFormat=1 --Ice.ThreadPool.Server.Size=1 --Ice.ThreadPool.Server.SizeMax=3 --Ice.ThreadPool.Server.SizeWarn=0 --Ice.PrintAdapterReady=1 env={'LD_LIBRARY_PATH': '/home/vagrant/workspace/ice/3.7/cpp/lib', 'PYTHONPATH': '/home/vagrant/workspace/ice/3.7/python/python:/home/vagrant/workspace/ice/3.7/python/test/Ice/exceptions'})
("/usr/bin/python" /home/vagrant/workspace/ice/3.7/python/test/TestHelper.py Client --Ice.Default.Host=127.0.0.1 --Test.BasePort=14100 --Ice.Warn.Connections=1 --Ice.Default.Protocol=tcp --Ice.IPv6=0 --Ice.Default.SlicedFormat=1 env={'LD_LIBRARY_PATH': '/home/vagrant/workspace/ice/3.7/cpp/lib', 'PYTHONPATH': '/home/vagrant/workspace/ice/3.7/python/python:/home/vagrant/workspace/ice/3.7/python/test/Ice/exceptions'})
testing servant registration exceptions... ok
testing servant locator registrations exceptions... ok
testing object factory registration exception... ok
testing stringToProxy... ok
testing checked cast... ok
catching exact types... ok
catching base types... ok
catching derived types... ok
catching unknown user exception... ok
testing memory limit marshal exception...Traceback (most recent call last):
File "/home/vagrant/workspace/ice/3.7/python/test/Ice/exceptions/AllTests.py", line 475, in allTests
thrower.throwMemoryLimitException(bytearray(20 * 1024)) # 20KB
File "Test.ice", line 381, in throwMemoryLimitException
Ice.ConnectionTimeoutException: exception ::Ice::ConnectionTimeoutException
{
}
```
|
test
|
ice exceptions failures on archlinux the test is failing with several language mappings based on c config tcp shared home vagrant workspace ice cpp test ice exceptions build shared server ice default host test baseport ice warn connections ice default protocol tcp ice ice nullhandleabort ice printstacktraces ice default encodingversion ice threadpool server size ice threadpool server sizemax ice threadpool server sizewarn ice printadapterready env ld library path home vagrant workspace ice cpp lib home vagrant workspace ice cpp test ice exceptions build shared client ice default host test baseport ice warn connections ice default protocol tcp ice ice nullhandleabort ice printstacktraces ice default encodingversion env ld library path home vagrant workspace ice cpp lib testing ice print what ok testing object adapter registration exceptions ok testing servant registration exceptions ok testing servant locator registrations exceptions ok testing value factory registration exception ok testing stringtoproxy ok testing checked cast ok catching exact types ok catching base types ok catching derived types ok catching unknown user exception ok testing memory limit marshal exception src ice connectioni cpp ice connectiontimeoutexception connection has timed out failed test ice exceptions alltests cpp assertion false failed config tcp shared usr bin python home vagrant workspace ice python test testhelper py server ice default host test baseport ice warn connections ice default protocol tcp ice ice default slicedformat ice threadpool server size ice threadpool server sizemax ice threadpool server sizewarn ice printadapterready env ld library path home vagrant workspace ice cpp lib pythonpath home vagrant workspace ice python python home vagrant workspace ice python test ice exceptions usr bin python home vagrant workspace ice python test testhelper py client ice default host test baseport ice warn connections ice default protocol tcp ice ice default slicedformat env ld library path home vagrant workspace ice cpp lib pythonpath home vagrant workspace ice python python home vagrant workspace ice python test ice exceptions testing servant registration exceptions ok testing servant locator registrations exceptions ok testing object factory registration exception ok testing stringtoproxy ok testing checked cast ok catching exact types ok catching base types ok catching derived types ok catching unknown user exception ok testing memory limit marshal exception traceback most recent call last file home vagrant workspace ice python test ice exceptions alltests py line in alltests thrower throwmemorylimitexception bytearray file test ice line in throwmemorylimitexception ice connectiontimeoutexception exception ice connectiontimeoutexception
| 1
|
9,744
| 4,630,143,539
|
IssuesEvent
|
2016-09-28 11:48:15
|
LaxarJS/laxar-patterns
|
https://api.github.com/repos/LaxarJS/laxar-patterns
|
closed
|
json-patch: use as fast-json-patch
|
epic: buildsystem type: enhancement
|
We currently import `fast-json-patch` as `json-patch`. Using bower it was easy to make a mapping for it. Since SystemJS handles the mapping configuration by itself, this isn't as easy anymore.
To simply avoid this problem, we should always use the package as it is named in its `package.json`, which in this case is `fast-json-patch`.
|
1.0
|
json-patch: use as fast-json-patch - We currently import `fast-json-patch` as `json-patch`. Using bower it was easy to make a mapping for it. Since SystemJS handles the mapping configuration by itself, this isn't as easy anymore.
To simply avoid this problem, we should always use the package as it is named in its `package.json`, which in this case is `fast-json-patch`.
|
non_test
|
json patch use as fast json patch we currently import fast json patch as json patch using bower it was easy to make a mapping for it since systemjs handles the mapping configuration by itself this isn t as easy anymore to simply avoid this problem we should always use the package as it is named in its package json which in this case is fast json patch
| 0
|
65,638
| 6,971,189,878
|
IssuesEvent
|
2017-12-11 13:09:57
|
nwjs/nw.js
|
https://api.github.com/repos/nwjs/nw.js
|
closed
|
Regression in 0.27 user agent handling breaks Construct 2 content
|
test-todo
|
NWJS Version : 0.27.0
Operating System : Windows 10
### Expected behavior
With NW.js 0.27, if package.json specifies a `user-agent`, it is not applied to navigation requests. It is correctly applied to all subresource requests, but not the main HTML document request. NW.js 0.26 was not affected. This has a serious impact for Construct 2, since it relies on the correct user agent (which specifies NW.js) in order to enable NW.js features. (E.g. this report: https://www.scirra.com/forum/nw-js-plugin-broken-in-preview-v0-27-0_t198544)
### Actual behavior
With NW.js 0.27, the navigation request reverts to using the default user agent ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36"), ignoring the user agent set in package.json.
### How to reproduce
Use the following package.json. Note the term "MySuperUserAgent" in the UA string.
```
{
"main": "https://nwjs.io/",
"name": "UA test",
"user-agent": "Mozilla/5.0 (%osinfo) AppleWebKit/%webkit_ver (KHTML, like Gecko, Chrome, Safari) MySuperUserAgent/1.0"
}
```
Launch NW.js and open dev tools to the network tab. You may need to reload the page with cache disabled to see all requests. Observe HTTP request headers for:
1) the first nwjs.io request (UA is: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36)
2) a later subresource request, e.g. for index.js (UA is: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (@f340e0691c0c44d924c05db57ed0c8324064dcda) (KHTML, like Gecko, Chrome, Safari) MySuperUserAgent/1.0)
Note that "MySuperUserAgent" does not occur in 1) but does in 2).
|
1.0
|
Regression in 0.27 user agent handling breaks Construct 2 content - NWJS Version : 0.27.0
Operating System : Windows 10
### Expected behavior
With NW.js 0.27, if package.json specifies a `user-agent`, it is not applied to navigation requests. It is correctly applied to all subresource requests, but not the main HTML document request. NW.js 0.26 was not affected. This has a serious impact for Construct 2, since it relies on the correct user agent (which specifies NW.js) in order to enable NW.js features. (E.g. this report: https://www.scirra.com/forum/nw-js-plugin-broken-in-preview-v0-27-0_t198544)
### Actual behavior
With NW.js 0.27, the navigation request reverts to using the default user agent ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36"), ignoring the user agent set in package.json.
### How to reproduce
Use the following package.json. Note the term "MySuperUserAgent" in the UA string.
```
{
"main": "https://nwjs.io/",
"name": "UA test",
"user-agent": "Mozilla/5.0 (%osinfo) AppleWebKit/%webkit_ver (KHTML, like Gecko, Chrome, Safari) MySuperUserAgent/1.0"
}
```
Launch NW.js and open dev tools to the network tab. You may need to reload the page with cache disabled to see all requests. Observe HTTP request headers for:
1) the first nwjs.io request (UA is: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36)
2) a later subresource request, e.g. for index.js (UA is: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (@f340e0691c0c44d924c05db57ed0c8324064dcda) (KHTML, like Gecko, Chrome, Safari) MySuperUserAgent/1.0)
Note that "MySuperUserAgent" does not occur in 1) but does in 2).
|
test
|
regression in user agent handling breaks construct content nwjs version operating system windows expected behavior with nw js if package json specifies a user agent it is not applied to navigation requests it is correctly applied to all subresource requests but not the main html document request nw js was not affected this has a serious impact for construct since it relies on the correct user agent which specifies nw js in order to enable nw js features e g this report actual behavior with nw js the navigation request reverts to using the default user agent mozilla windows nt applewebkit khtml like gecko chrome safari ignoring the user agent set in package json how to reproduce use the following package json note the term mysuperuseragent in the ua string main name ua test user agent mozilla osinfo applewebkit webkit ver khtml like gecko chrome safari mysuperuseragent launch nw js and open dev tools to the network tab you may need to reload the page with cache disabled to see all requests observe http request headers for the first nwjs io request ua is mozilla windows nt applewebkit khtml like gecko chrome safari a later subresource request e g for index js ua is mozilla windows nt applewebkit khtml like gecko chrome safari mysuperuseragent note that mysuperuseragent does not occur in but does in
| 1
|
46,820
| 6,028,800,861
|
IssuesEvent
|
2017-06-08 16:30:13
|
wpaccessibility/settings-api-enhanced
|
https://api.github.com/repos/wpaccessibility/settings-api-enhanced
|
closed
|
Bold labels
|
design
|
I think it would be good to bold all labels. For example this is what it would look like:

My thinking it this visually helps distinction. Note, the screenshot includes the font increase I am suggesting too.
|
1.0
|
Bold labels - I think it would be good to bold all labels. For example this is what it would look like:

My thinking it this visually helps distinction. Note, the screenshot includes the font increase I am suggesting too.
|
non_test
|
bold labels i think it would be good to bold all labels for example this is what it would look like my thinking it this visually helps distinction note the screenshot includes the font increase i am suggesting too
| 0
|
231,922
| 18,820,257,897
|
IssuesEvent
|
2021-11-10 07:19:23
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
opened
|
An error dialog pops up when cloning/pasting one file share
|
🧪 testing :gear: files :beetle: regression
|
**Storage Explorer Version**: 1.22.0-dev
**Build Number**: 20211110.1
**Branch**: main
**Platform/OS**: Windows 10/Linux Ubuntu 20.04/MacOS Monterey 12.0.1
**Architecture**: ia32/x64
**How Found**: From running test cases
**Regression From**: Previous releases (1.21.3)
## Steps to Reproduce ##
1. Expand one storage account -> File Shares.
2. Select one file share -> Clone the file share with a valid name.
3. Check whether succeed to clone.
## Expected Experience ##
Succeed to clone.
## Actual Experience ##
An error dialog pops up.


## More Info ##
1. There is an ongoing activity log even though the error dialog is closed.

2. The cloned file share displays on the tree view.
|
1.0
|
An error dialog pops up when cloning/pasting one file share - **Storage Explorer Version**: 1.22.0-dev
**Build Number**: 20211110.1
**Branch**: main
**Platform/OS**: Windows 10/Linux Ubuntu 20.04/MacOS Monterey 12.0.1
**Architecture**: ia32/x64
**How Found**: From running test cases
**Regression From**: Previous releases (1.21.3)
## Steps to Reproduce ##
1. Expand one storage account -> File Shares.
2. Select one file share -> Clone the file share with a valid name.
3. Check whether succeed to clone.
## Expected Experience ##
Succeed to clone.
## Actual Experience ##
An error dialog pops up.


## More Info ##
1. There is an ongoing activity log even though the error dialog is closed.

2. The cloned file share displays on the tree view.
|
test
|
an error dialog pops up when cloning pasting one file share storage explorer version dev build number branch main platform os windows linux ubuntu macos monterey architecture how found from running test cases regression from previous releases steps to reproduce expand one storage account file shares select one file share clone the file share with a valid name check whether succeed to clone expected experience succeed to clone actual experience an error dialog pops up more info there is an ongoing activity log even though the error dialog is closed the cloned file share displays on the tree view
| 1
|
56,722
| 6,528,022,435
|
IssuesEvent
|
2017-08-30 05:05:05
|
Interaktivtechnology/Raimon-Web
|
https://api.github.com/repos/Interaktivtechnology/Raimon-Web
|
closed
|
New Building layout - After searching for the project and click on it, the screen becomes darker
|
bug Monitor Need Testing
|
@benjaminl83 @purnomoeko @chuanhiang I am unable to continue with the grid building. After clicking on the project, it bounces me to this page right away without letting me choose the grids.

|
1.0
|
New Building layout - After searching for the project and click on it, the screen becomes darker - @benjaminl83 @purnomoeko @chuanhiang I am unable to continue with the grid building. After clicking on the project, it bounces me to this page right away without letting me choose the grids.

|
test
|
new building layout after searching for the project and click on it the screen becomes darker purnomoeko chuanhiang i am unable to continue with the grid building after clicking on the project it bounces me to this page right away without letting me choose the grids
| 1
|
6,620
| 2,854,335,499
|
IssuesEvent
|
2015-06-01 23:39:37
|
GoogleCloudPlatform/kubernetes
|
https://api.github.com/repos/GoogleCloudPlatform/kubernetes
|
opened
|
Unable to run e2e on dev cluster
|
priority/P1 team/testing
|
I used to be able to run e2e tests against my development cluster. I am unable to do so anymore.
```shell
$ hack/ginkgo-e2e.sh --ginkgo.focus=dns
Setting up for KUBERNETES_PROVIDER="gce".
Project: xyz
Zone: us-central1-b
ERROR: (gcloud.compute.instances.describe) Could not fetch resource:
- The resource 'projects/xyz/zones/us-central1-b/instances/e2e-test-me-master' was not found
```
```shell
$ godep go run hack/e2e.go -test --test_args="--ginkgo.focus=DNS"
2015/06/01 16:35:44 Running: get status
2015/06/01 16:35:46 Running: Ginkgo tests
2015/06/01 16:35:48 Error running Ginkgo tests: exit status 1
exit status 1
godep: go exit status 1
```
|
1.0
|
Unable to run e2e on dev cluster - I used to be able to run e2e tests against my development cluster. I am unable to do so anymore.
```shell
$ hack/ginkgo-e2e.sh --ginkgo.focus=dns
Setting up for KUBERNETES_PROVIDER="gce".
Project: xyz
Zone: us-central1-b
ERROR: (gcloud.compute.instances.describe) Could not fetch resource:
- The resource 'projects/xyz/zones/us-central1-b/instances/e2e-test-me-master' was not found
```
```shell
$ godep go run hack/e2e.go -test --test_args="--ginkgo.focus=DNS"
2015/06/01 16:35:44 Running: get status
2015/06/01 16:35:46 Running: Ginkgo tests
2015/06/01 16:35:48 Error running Ginkgo tests: exit status 1
exit status 1
godep: go exit status 1
```
|
test
|
unable to run on dev cluster i used to be able to run tests against my development cluster i am unable to do so anymore shell hack ginkgo sh ginkgo focus dns setting up for kubernetes provider gce project xyz zone us b error gcloud compute instances describe could not fetch resource the resource projects xyz zones us b instances test me master was not found shell godep go run hack go test test args ginkgo focus dns running get status running ginkgo tests error running ginkgo tests exit status exit status godep go exit status
| 1
|
75,651
| 20,921,116,780
|
IssuesEvent
|
2022-03-24 17:28:45
|
rstudio/rstudio
|
https://api.github.com/repos/rstudio/rstudio
|
closed
|
Upgrade Microsoft toolchain used for builds
|
in progress builds developer
|
We build RStudio for Windows using specific versions of the WIndows SDK and the Visual Studio toolchain.
https://github.com/rstudio/rstudio/blob/466d03b0ed61d66f3a80e7c2e530fb386818d9b4/docker/jenkins/Dockerfile.windows#L23
https://github.com/rstudio/rstudio/blob/466d03b0ed61d66f3a80e7c2e530fb386818d9b4/docker/jenkins/Dockerfile.windows#L35-L36
Our dev machine bootstrapper does the same:
https://github.com/rstudio/rstudio/blob/466d03b0ed61d66f3a80e7c2e530fb386818d9b4/dependencies/windows/Install-RStudio-Prereqs.ps1#L73-L75
Eventually we'll need to move to a newer major version (currently on MSVC 2017-era tools). We can wait until something forces a switch, or do it preemptively.
|
1.0
|
Upgrade Microsoft toolchain used for builds - We build RStudio for Windows using specific versions of the WIndows SDK and the Visual Studio toolchain.
https://github.com/rstudio/rstudio/blob/466d03b0ed61d66f3a80e7c2e530fb386818d9b4/docker/jenkins/Dockerfile.windows#L23
https://github.com/rstudio/rstudio/blob/466d03b0ed61d66f3a80e7c2e530fb386818d9b4/docker/jenkins/Dockerfile.windows#L35-L36
Our dev machine bootstrapper does the same:
https://github.com/rstudio/rstudio/blob/466d03b0ed61d66f3a80e7c2e530fb386818d9b4/dependencies/windows/Install-RStudio-Prereqs.ps1#L73-L75
Eventually we'll need to move to a newer major version (currently on MSVC 2017-era tools). We can wait until something forces a switch, or do it preemptively.
|
non_test
|
upgrade microsoft toolchain used for builds we build rstudio for windows using specific versions of the windows sdk and the visual studio toolchain our dev machine bootstrapper does the same eventually we ll need to move to a newer major version currently on msvc era tools we can wait until something forces a switch or do it preemptively
| 0
|
8,875
| 6,669,724,156
|
IssuesEvent
|
2017-10-03 20:25:36
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
Are we leaking Paragraphs?
|
performance
|
Although I think the answer is no, here's what led me to the question.
Using Observatory I looked at the "Persistent Handles" page while running the Gallery demo:
- showing the home screen: 42 Paragraphs
- expanded all menus: 79 Paragraphs
- scrolling the home screen for a while: 81 Paragraphs
- launching each demo in turn: 1352 Paragraphs
The heap metrics page says that only 44 Paragraphs are "strongly reachable" and that they only cost about 32K, which sounds reasonable. Unless there's some other dire implication here, I'm not sure there's any problem at all.
|
True
|
Are we leaking Paragraphs? - Although I think the answer is no, here's what led me to the question.
Using Observatory I looked at the "Persistent Handles" page while running the Gallery demo:
- showing the home screen: 42 Paragraphs
- expanded all menus: 79 Paragraphs
- scrolling the home screen for a while: 81 Paragraphs
- launching each demo in turn: 1352 Paragraphs
The heap metrics page says that only 44 Paragraphs are "strongly reachable" and that they only cost about 32K, which sounds reasonable. Unless there's some other dire implication here, I'm not sure there's any problem at all.
|
non_test
|
are we leaking paragraphs although i think the answer is no here s what led me to the question using observatory i looked at the persistent handles page while running the gallery demo showing the home screen paragraphs expanded all menus paragraphs scrolling the home screen for a while paragraphs launching each demo in turn paragraphs the heap metrics page says that only paragraphs are strongly reachable and that they only cost about which sounds reasonable unless there s some other dire implication here i m not sure there s any problem at all
| 0
|
1,818
| 6,777,017,153
|
IssuesEvent
|
2017-10-27 20:14:49
|
sg-s/xolotl
|
https://api.github.com/repos/sg-s/xolotl
|
closed
|
add a hash method to xolotl
|
enhancement weak-architecture
|
so that we can quickly check if binaries match the current state
then we would have to label binaries with the hash too
|
1.0
|
add a hash method to xolotl - so that we can quickly check if binaries match the current state
then we would have to label binaries with the hash too
|
non_test
|
add a hash method to xolotl so that we can quickly check if binaries match the current state then we would have to label binaries with the hash too
| 0
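The xolotl record above asks for a hash method so compiled binaries can be checked against the current source state. A minimal sketch of that idea, assuming a simple mapping of file names to contents (the file names below are hypothetical, not xolotl's actual layout):

```python
# Fingerprint the current source state so binaries can be labeled and checked
# for staleness, per the xolotl issue above. File names are illustrative only.
import hashlib

def state_hash(sources: dict) -> str:
    """Hash a mapping of {filename: contents} into a short stable digest."""
    h = hashlib.sha256()
    for name in sorted(sources):      # sort so dict ordering can't change the hash
        h.update(name.encode())
        h.update(sources[name].encode())
    return h.hexdigest()[:12]         # short tag suitable for a binary's name

sources = {"conductance.hpp": "g*(V - E)", "network.hpp": "integrate()"}
tag = state_hash(sources)
binary_name = f"xolotl_{tag}.mexa64"  # label the binary with the hash
```

Checking for a matching binary then reduces to recomputing the tag and testing whether a file with that name already exists.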
|
107,910
| 23,503,118,645
|
IssuesEvent
|
2022-08-18 10:09:20
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
Implement extension endpoints to retrieve type descriptors of given expressions and symbols
|
Type/NewFeature Priority/High Team/LanguageServer/Extensions Team/VSCode
|
**Description:**
$Subject
The endpoint to fetch the type information should accept a list of ranges belonging to expressions and return a list of type information along with the requested ranges. Similarly, the other endpoint should accept a list of positions representing the symbols and return associated type information along with the requested positions.
|
1.0
|
Implement extension endpoints to retrieve type descriptors of given expressions and symbols - **Description:**
$Subject
The endpoint to fetch the type information should accept a list of ranges belonging to expressions and return a list of type information along with the requested ranges. Similarly, the other endpoint should accept a list of positions representing the symbols and return associated type information along with the requested positions.
|
non_test
|
implement extension endpoints to retrieve type descriptors of given expressions and symbols description subject the endpoint to fetch the type information should accept a list of rages belongs to expressions and return a list of type information along with the requested ranges similarly the other endpoint should accept a list of positions representing the symbols and return associated type information along with the requested positions
| 0
|
18,508
| 3,696,613,870
|
IssuesEvent
|
2016-02-27 03:33:35
|
softlayer/sl-ember-components
|
https://api.github.com/repos/softlayer/sl-ember-components
|
closed
|
Unit | Component | sl date time: Expected Mixins are present
|
ready sl-date-time tests
|
```
not ok 416 PhantomJS 1.9 - Unit | Component | sl date time: Expected Mixins are present
---
actual: >
null
message: >
Died on test #1 at test (http://localhost:7357/assets/test-support.js:3025)
at testWrapper (http://localhost:7357/assets/test-support.js:6192)
at test (http://localhost:7357/assets/test-support.js:6205)
at http://localhost:7357/assets/tests.js:23144
at http://localhost:7357/assets/vendor.js:152
at tryFinally (http://localhost:7357/assets/vendor.js:33)
at http://localhost:7357/assets/vendor.js:158
at http://localhost:7357/assets/test-loader.js:60
at http://localhost:7357/assets/test-loader.js:51
at http://localhost:7357/assets/test-loader.js:82
at http://localhost:7357/assets/test-support.js:6024: 'undefined' is not a function (evaluating 'validTimeZonesArray.includes(this.get('timezone'))')
Log: |
...
```
|
1.0
|
Unit | Component | sl date time: Expected Mixins are present - ```
not ok 416 PhantomJS 1.9 - Unit | Component | sl date time: Expected Mixins are present
---
actual: >
null
message: >
Died on test #1 at test (http://localhost:7357/assets/test-support.js:3025)
at testWrapper (http://localhost:7357/assets/test-support.js:6192)
at test (http://localhost:7357/assets/test-support.js:6205)
at http://localhost:7357/assets/tests.js:23144
at http://localhost:7357/assets/vendor.js:152
at tryFinally (http://localhost:7357/assets/vendor.js:33)
at http://localhost:7357/assets/vendor.js:158
at http://localhost:7357/assets/test-loader.js:60
at http://localhost:7357/assets/test-loader.js:51
at http://localhost:7357/assets/test-loader.js:82
at http://localhost:7357/assets/test-support.js:6024: 'undefined' is not a function (evaluating 'validTimeZonesArray.includes(this.get('timezone'))')
Log: |
...
```
|
test
|
unit component sl date time expected mixins are present not ok phantomjs unit component sl date time expected mixins are present actual null message died on test at test at testwrapper at test at at at tryfinally at at at at at undefined is not a function evaluating validtimezonesarray includes this get timezone log
| 1
|
94,532
| 8,499,229,913
|
IssuesEvent
|
2018-10-29 16:39:09
|
phetsims/expression-exchange
|
https://api.github.com/repos/phetsims/expression-exchange
|
closed
|
CT cannot set property 1 of null
|
type:automated-testing
|
```
expression-exchange : fuzz : require.js : run
Uncaught TypeError: Cannot set property '1' of null
TypeError: Cannot set property '1' of null
at Array.listener (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Multilink.js?bust=1540547285502:39:36)
at Emitter.emit3 (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Emitter.js?bust=1540547285502:258:52)
at BooleanProperty._notifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540547285502:277:29)
at BooleanProperty.setValueAndNotifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540547285502:258:14)
at BooleanProperty.set (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540547285502:201:16)
at Emitter.emit3 (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Emitter.js?bust=1540547285502:258:52)
at BooleanProperty._notifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540547285502:277:29)
at BooleanProperty.setValueAndNotifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540547285502:258:14)
at BooleanProperty.set (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540547285502:201:16)
at BooleanProperty.set value [as value] (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540547285502:307:36)
id: Bayes Chrome
Approximately 10/26/2018, 12:42:58 AM
expression-exchange : xss-fuzz : run
Uncaught TypeError: Cannot set property '1' of null
TypeError: Cannot set property '1' of null
at Array.listener (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Multilink.js?bust=1540562013137:39:36)
at Emitter.emit3 (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Emitter.js?bust=1540562013137:258:52)
at BooleanProperty._notifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540562013137:277:29)
at BooleanProperty.setValueAndNotifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540562013137:258:14)
at BooleanProperty.set (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540562013137:201:16)
at Emitter.emit3 (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Emitter.js?bust=1540562013137:258:52)
at BooleanProperty._notifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540562013137:277:29)
at BooleanProperty.setValueAndNotifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540562013137:258:14)
at BooleanProperty.set (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540562013137:201:16)
at BooleanProperty.set value [as value] (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540562013137:307:36)
id: Bayes Chrome
Approximately 10/26/2018, 12:42:58 AM
```
|
1.0
|
CT cannot set property 1 of null - ```
expression-exchange : fuzz : require.js : run
Uncaught TypeError: Cannot set property '1' of null
TypeError: Cannot set property '1' of null
at Array.listener (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Multilink.js?bust=1540547285502:39:36)
at Emitter.emit3 (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Emitter.js?bust=1540547285502:258:52)
at BooleanProperty._notifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540547285502:277:29)
at BooleanProperty.setValueAndNotifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540547285502:258:14)
at BooleanProperty.set (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540547285502:201:16)
at Emitter.emit3 (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Emitter.js?bust=1540547285502:258:52)
at BooleanProperty._notifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540547285502:277:29)
at BooleanProperty.setValueAndNotifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540547285502:258:14)
at BooleanProperty.set (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540547285502:201:16)
at BooleanProperty.set value [as value] (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540547285502:307:36)
id: Bayes Chrome
Approximately 10/26/2018, 12:42:58 AM
expression-exchange : xss-fuzz : run
Uncaught TypeError: Cannot set property '1' of null
TypeError: Cannot set property '1' of null
at Array.listener (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Multilink.js?bust=1540562013137:39:36)
at Emitter.emit3 (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Emitter.js?bust=1540562013137:258:52)
at BooleanProperty._notifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540562013137:277:29)
at BooleanProperty.setValueAndNotifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540562013137:258:14)
at BooleanProperty.set (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540562013137:201:16)
at Emitter.emit3 (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Emitter.js?bust=1540562013137:258:52)
at BooleanProperty._notifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540562013137:277:29)
at BooleanProperty.setValueAndNotifyListeners (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540562013137:258:14)
at BooleanProperty.set (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540562013137:201:16)
at BooleanProperty.set value [as value] (https://bayes.colorado.edu/continuous-testing/snapshot-1540536178923/axon/js/Property.js?bust=1540562013137:307:36)
id: Bayes Chrome
Approximately 10/26/2018, 12:42:58 AM
```
|
test
|
ct cannot set property of null expression exchange fuzz require js run uncaught typeerror cannot set property of null typeerror cannot set property of null at array listener at emitter at booleanproperty notifylisteners at booleanproperty setvalueandnotifylisteners at booleanproperty set at emitter at booleanproperty notifylisteners at booleanproperty setvalueandnotifylisteners at booleanproperty set at booleanproperty set value id bayes chrome approximately am expression exchange xss fuzz run uncaught typeerror cannot set property of null typeerror cannot set property of null at array listener at emitter at booleanproperty notifylisteners at booleanproperty setvalueandnotifylisteners at booleanproperty set at emitter at booleanproperty notifylisteners at booleanproperty setvalueandnotifylisteners at booleanproperty set at booleanproperty set value id bayes chrome approximately am
| 1
|
224,572
| 17,759,602,247
|
IssuesEvent
|
2021-08-29 12:47:39
|
dj-stripe/dj-stripe
|
https://api.github.com/repos/dj-stripe/dj-stripe
|
closed
|
Test coverage of admin
|
good first issue tests
|
The dropped fields from #1019 broke a few bits of admin (eg Account._str__ and search, fixed in #1033), needs some test coverage.
For each model, test:
- [ ] list
- [ ] search
- [ ] detail
|
1.0
|
Test coverage of admin - The dropped fields from #1019 broke a few bits of admin (eg Account._str__ and search, fixed in #1033), needs some test coverage.
For each model, test:
- [ ] list
- [ ] search
- [ ] detail
|
test
|
test coverage of admin the dropped fields from broke a few bits of admin eg account str and search fixed in needs some test coverage for each model test list search detail
| 1
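The dj-stripe record above calls for list/search/detail coverage per admin model. One way to organize that is a coverage matrix of (model, view kind, URL) cases to feed a parametrized test; the model names and URL patterns below are assumptions for illustration, not dj-stripe's actual registry:

```python
# Build the admin coverage matrix described in the dj-stripe issue above:
# for each model, exercise the list, search, and detail views.
# Model names and URL shapes are hypothetical placeholders.
MODELS = ["account", "charge", "customer"]

def admin_urls(model: str) -> dict:
    base = f"/admin/djstripe/{model}/"
    return {
        "list": base,                    # changelist view
        "search": base + "?q=test",      # changelist with a search query
        "detail": base + "1/change/",    # change form for one object
    }

cases = [(m, kind, url) for m in MODELS for kind, url in admin_urls(m).items()]
```

In a real Django test suite each case would be issued through the test client and asserted to return HTTP 200.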
|
264,342
| 23,112,569,990
|
IssuesEvent
|
2022-07-27 14:09:10
|
arch-kiosk/arch-kiosk-office
|
https://api.github.com/repos/arch-kiosk/arch-kiosk-office
|
closed
|
QC: file import must reevaluate qc results
|
kiosk test-stage
|
- when importing files via upload or local import qc rules need to be recalculated
|
1.0
|
QC: file import must reevaluate qc results - - when importing files via upload or local import qc rules need to be recalculated
|
test
|
qc file import must reevaluate qc results when importing files via upload or local import qc rules need to be recalculated
| 1
|
58,905
| 7,189,880,459
|
IssuesEvent
|
2018-02-02 15:28:39
|
Automattic/jetpack
|
https://api.github.com/repos/Automattic/jetpack
|
opened
|
Related posts: "try it out" -action is not noticeable
|
Related Posts [Status] Needs Design Review [Type] Enhancement
|
In *"Related posts"* setting panel there's a text: _"You can now also configure related posts in the Customizer. Try it out!"_ —

That _"Try it out!"_ link to **Customizer** feels rather un-noticeable — it could be something like this instead:

|
1.0
|
Related posts: "try it out" -action is not noticeable - In *"Related posts"* setting panel there's a text: _"You can now also configure related posts in the Customizer. Try it out!"_ —

That _"Try it out!"_ link to **Customizer** feels rather un-noticeable — it could be something like this instead:

|
non_test
|
related posts try it out action is not noticeable in related posts setting panel there s a text you can now also configure related posts in the customizer try it out — that try it out link to customizer feels rather un noticeable — it could be something like this instead
| 0
|
70,689
| 23,282,783,980
|
IssuesEvent
|
2022-08-05 13:41:26
|
galasa-dev/projectmanagement
|
https://api.github.com/repos/galasa-dev/projectmanagement
|
closed
|
GenApp Manager @BasicGenApp uses imageTag instead of applicationTag
|
defect
|
For some reason, the @BasicGenApp codes the tag name as "imageTag", the tag is for the application, so really should be applicationTag or appTag to avoid confusing it with z/OS Images.
|
1.0
|
GenApp Manager @BasicGenApp uses imageTag instead of applicationTag - For some reason, the @BasicGenApp codes the tag name as "imageTag", the tag is for the application, so really should be applicationTag or appTag to avoid confusing it with z/OS Images.
|
non_test
|
genapp manager basicgenapp uses imagetag instead of applicationtag for some reason the basicgenapp codes the tag name as imagetag the tag is for the application so really should be applicationtag or apptag to avoid confusing it with z os images
| 0
|
201,068
| 15,172,061,521
|
IssuesEvent
|
2021-02-13 06:56:35
|
chef/chef-workstation
|
https://api.github.com/repos/chef/chef-workstation
|
closed
|
omnibus-test.sh needs to try to install workstation multiple times
|
Aspect: Stability Aspect: Testing Triage: Confirmed Type: Bug
|
We often see timeouts installing workstation in the verification stage of the release pipeline. We should make sure we retry the installation so we don't fail a whole build just because omnitruck is flaky.
```
--- Installing unstable chef-workstation 0.10.33
/opt/omnibus-toolchain/embedded/lib/ruby/gems/2.6.0/gems/mixlib-install-3.11.18/lib/mixlib/install/backend/package_router.rb:126: warning: constant Net::HTTPServerException is deprecated
Failed to open TCP connection to packages.chef.io:443 (getaddrinfo: nodename nor servname provided, or not known)
[31m🚨 Error: The command exited with status 1[0m
```
|
1.0
|
omnibus-test.sh needs to try to install workstation multiple times - We often see timeouts installing workstation in the verification stage of the release pipeline. We should make sure we retry the installation so we don't fail a whole build just because omnitruck is flaky.
```
--- Installing unstable chef-workstation 0.10.33
/opt/omnibus-toolchain/embedded/lib/ruby/gems/2.6.0/gems/mixlib-install-3.11.18/lib/mixlib/install/backend/package_router.rb:126: warning: constant Net::HTTPServerException is deprecated
Failed to open TCP connection to packages.chef.io:443 (getaddrinfo: nodename nor servname provided, or not known)
[31m🚨 Error: The command exited with status 1[0m
```
|
test
|
omnibus test sh needs to try to install workstation multiple times we often see timeouts installing workstation in the verification stage of the release pipeline we should make sure we retry the installation so we don t fail a whole build just because omnitruck is flaky installing unstable chef workstation opt omnibus toolchain embedded lib ruby gems gems mixlib install lib mixlib install backend package router rb warning constant net httpserverexception is deprecated failed to open tcp connection to packages chef io getaddrinfo nodename nor servname provided or not known 🚨 error the command exited with status
| 1
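The chef-workstation record above asks for retries around a flaky install step. A minimal retry wrapper, sketched in Python under the assumption that transient failures surface as `OSError` (as the quoted `getaddrinfo` failure would):

```python
# Retry a flaky step a few times before failing the build, per the
# chef-workstation issue above. The installer callable is a stand-in for
# the real omnibus install command.
import time

def with_retries(step, attempts=3, delay=0.0):
    last = None
    for _ in range(attempts):
        try:
            return step()
        except OSError as exc:    # e.g. transient DNS/TCP failures
            last = exc
            time.sleep(delay)     # back off before the next attempt
    raise last

calls = {"n": 0}
def flaky_install():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("getaddrinfo: nodename nor servname provided")
    return "installed"

result = with_retries(flaky_install)  # succeeds on the third attempt
```

The same shape works as a shell `until` loop around the installer script; the key design point is bounding the attempts so a genuinely broken mirror still fails the build.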
|
27,157
| 6,813,838,564
|
IssuesEvent
|
2017-11-06 10:46:05
|
BTDF/DeploymentFramework
|
https://api.github.com/repos/BTDF/DeploymentFramework
|
closed
|
Issue: TerminateServiceInstances task fails on routing failure report
|
bug CodePlexMigrationInitiated General Impact: Medium Release 5.5
|
The TerminateServiceInstances MSBuild task indicates failure if it encounters a routing failure report that was previously auto-terminated by a linked service instance. When a routing failure report is linked to a service instance and that instance is terminated, the routing failure report terminates at the same time.
#### This work item was migrated from CodePlex
CodePlex work item ID: '10519'
Assigned to: 'tfabraham'
Vote count: '1'
|
1.0
|
Issue: TerminateServiceInstances task fails on routing failure report - The TerminateServiceInstances MSBuild task indicates failure if it encounters a routing failure report that was previously auto-terminated by a linked service instance. When a routing failure report is linked to a service instance and that instance is terminated, the routing failure report terminates at the same time.
#### This work item was migrated from CodePlex
CodePlex work item ID: '10519'
Assigned to: 'tfabraham'
Vote count: '1'
|
non_test
|
issue terminateserviceinstances task fails on routing failure report the terminateserviceinstances msbuild task indicates failure if it encounters a routing failure report that was previously auto terminated by a linked service instance when a routing failure report is linked to a service instance and that instance is terminated the routing failure report terminates at the same time this work item was migrated from codeplex codeplex work item id assigned to tfabraham vote count
| 0
|
18,941
| 10,271,217,743
|
IssuesEvent
|
2019-08-23 13:38:12
|
SciTools/iris
|
https://api.github.com/repos/SciTools/iris
|
closed
|
Reading netcdf files is slow if there are unlimited dimensions
|
SemVer: Minor Type: Performance
|
Reading arrays from NetCDF files is slow if one dimension is unlimited.
An example: reading an array of shape `(5000, 50)` takes ~7 s if the first dimension is unlimited. If both dimensions are fixed, it takes ~0.02 s. This is a major bottleneck if many (100s) of such files need to be processed. Time dimension is often declared unlimited in files generated by circulation models.
Test case:
```
import iris
import time
f = 'example_dataset.nc'
var = 'sea_water_practical_salinity'
tic = time.process_time()
cube = iris.load_cube(f, var)
cube.data
duration = time.process_time() - tic
print('Duration {:.3f} s'.format(duration))
```
The input NetCDF file can be generated with:
```
import iris
import numpy
import datetime
ntime = 5000
nz = 50
dt = 600.
time = numpy.arange(ntime, dtype=float)*dt
date_zero = datetime.datetime(2000, 1, 1)
date_epoch = datetime.datetime.utcfromtimestamp(0)
time_epoch = time + (date_zero - date_epoch).total_seconds()
z = numpy.linspace(0, 10, nz)
values = 5*numpy.sin(time/(14*24*3600.))
values = numpy.tile(values, (nz, 1)).T
time_dim = iris.coords.DimCoord(time_epoch, standard_name='time',
units='seconds since 1970-01-01 00:00:00-00')
z_dim = iris.coords.DimCoord(z, standard_name='depth', units='m')
cube = iris.cube.Cube(values)
cube.standard_name = 'sea_water_practical_salinity'
cube.units = '1'
cube.add_dim_coord(time_dim, 0)
cube.add_dim_coord(z_dim, 1)
iris.fileformats.netcdf.save(cube, 'example_dataset.nc',
unlimited_dimensions=['time'])
```
Profiling suggests that in the unlimited case, each time slice is being read separately, i.e. `NetCDFDataProxy.__getitem__` is being called 5000 times.
Tested with: iris version 2.2.0, Anaconda3 2019.03
|
True
|
Reading netcdf files is slow if there are unlimited dimensions - Reading arrays from NetCDF files is slow if one dimension is unlimited.
An example: reading an array of shape `(5000, 50)` takes ~7 s if the first dimension is unlimited. If both dimensions are fixed, it takes ~0.02 s. This is a major bottleneck if many (100s) of such files need to be processed. Time dimension is often declared unlimited in files generated by circulation models.
Test case:
```
import iris
import time
f = 'example_dataset.nc'
var = 'sea_water_practical_salinity'
tic = time.process_time()
cube = iris.load_cube(f, var)
cube.data
duration = time.process_time() - tic
print('Duration {:.3f} s'.format(duration))
```
The input NetCDF file can be generated with:
```
import iris
import numpy
import datetime
ntime = 5000
nz = 50
dt = 600.
time = numpy.arange(ntime, dtype=float)*dt
date_zero = datetime.datetime(2000, 1, 1)
date_epoch = datetime.datetime.utcfromtimestamp(0)
time_epoch = time + (date_zero - date_epoch).total_seconds()
z = numpy.linspace(0, 10, nz)
values = 5*numpy.sin(time/(14*24*3600.))
values = numpy.tile(values, (nz, 1)).T
time_dim = iris.coords.DimCoord(time_epoch, standard_name='time',
units='seconds since 1970-01-01 00:00:00-00')
z_dim = iris.coords.DimCoord(z, standard_name='depth', units='m')
cube = iris.cube.Cube(values)
cube.standard_name = 'sea_water_practical_salinity'
cube.units = '1'
cube.add_dim_coord(time_dim, 0)
cube.add_dim_coord(z_dim, 1)
iris.fileformats.netcdf.save(cube, 'example_dataset.nc',
unlimited_dimensions=['time'])
```
Profiling suggests that in the unlimited case, each time slice is being read separately, i.e. `NetCDFDataProxy.__getitem__` is being called 5000 times.
Tested with: iris version 2.2.0, Anaconda3 2019.03
|
non_test
|
reading netcdf files is slow if there are unlimited dimensions reading arrays from netcdf files is slow if one dimension is unlimited an example reading an array of shape takes s if the first dimension is unlimited if both dimensions are fixed it takes s this is a major bottleneck if many of such files need to be processed time dimension is often declared unlimited in files generated by circulation models test case import iris import time f example dataset nc var sea water practical salinity tic time process time cube iris load cube f var cube data duration time process time tic print duration s format duration the input netcdf file can be generated with import iris import numpy import datetime ntime nz dt time numpy arange ntime dtype float dt date zero datetime datetime date epoch datetime datetime utcfromtimestamp time epoch time date zero date epoch total seconds z numpy linspace nz values numpy sin time values numpy tile values nz t time dim iris coords dimcoord time epoch standard name time units seconds since z dim iris coords dimcoord z standard name depth units m cube iris cube cube values cube standard name sea water practical salinity cube units cube add dim coord time dim cube add dim coord z dim iris fileformats netcdf save cube example dataset nc unlimited dimensions profiling suggest that in the unlimited case each time slice is being read separately i e netcdfdataproxy getitem is being called times tested with iris version
| 0
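The iris record above attributes the slowdown to per-slice reads when a dimension is unlimited. A pure-Python model of that cost difference, using a counting stand-in rather than iris's actual `NetCDFDataProxy`:

```python
# Model of the access pattern described in the iris issue above: with an
# unlimited dimension the proxy is indexed once per time slice (reads scale
# with ntime); with fixed dimensions a single bulk read suffices.
# CountingProxy is a sketch, not iris's real data proxy.
class CountingProxy:
    def __init__(self):
        self.reads = 0
    def __getitem__(self, key):
        self.reads += 1
        return 0.0

ntime = 5000
per_slice = CountingProxy()
for t in range(ntime):      # unlimited-dimension path: one read per slice
    per_slice[t]

bulk = CountingProxy()
bulk[slice(None)]           # fixed-dimension path: one bulk read
```

The ~350x wall-clock gap in the report (7 s vs 0.02 s) is consistent with paying per-slice I/O overhead 5000 times instead of once.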
|
802,019
| 28,566,082,374
|
IssuesEvent
|
2023-04-21 02:18:24
|
GoogleCloudPlatform/cloud-ops-sandbox
|
https://api.github.com/repos/GoogleCloudPlatform/cloud-ops-sandbox
|
opened
|
Support Google Managed Prometheus artifacts
|
type: feature request priority: p3
|
### Description
Add an observability configuration (see #1026 to learn more about configurations) that enables a GKE cluster to report metrics to prometheus (see [terraform documentation][1]).
[1]: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#nested_managed_prometheus
|
1.0
|
Support Google Managed Prometheus artifacts - ### Description
Add an observability configuration (see #1026 to learn more about configurations) that enables a GKE cluster to report metrics to prometheus (see [terraform documentation][1]).
[1]: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#nested_managed_prometheus
|
non_test
|
support google managed prometheus artifacts description add an observability configuration see to learn more about configurations that enables a gke cluster to report metrics to prometheus see
| 0
|
395
| 2,846,514,934
|
IssuesEvent
|
2015-05-29 12:02:40
|
deconst/deconst-docs
|
https://api.github.com/repos/deconst/deconst-docs
|
opened
|
Extract the node.js environment configuration module
|
content service presenter refactoring
|
I think I prefer the one in the presenter a bit more than the one in the content service.
An alternate (better) option would be to find an npm module that already does this. My Google-fu couldn't turn one up back when we were first getting started. Maybe @kenperkins knows one that does what we need?
|
1.0
|
Extract the node.js environment configuration module - I think I prefer the one in the presenter a bit more than the one in the content service.
An alternate (better) option would be to find an npm module that already does this. My Google-fu couldn't turn one up back when we were first getting started. Maybe @kenperkins knows one that does what we need?
|
non_test
|
extract the node js environment configuration module i think i prefer the one in the presenter a bit more than the one in the content service an alternate better option would be find an npm module that already does this my google fu couldn t turn one up back when we were first getting started maybe kenperkins knows one that does what we need
| 0
|
229,571
| 18,372,799,899
|
IssuesEvent
|
2021-10-11 03:18:27
|
dotnet/machinelearning-modelbuilder
|
https://api.github.com/repos/dotnet/machinelearning-modelbuilder
|
opened
|
The notebook file is not displayed under .Net472 project after added from Consume page.
|
Priority:2 Test Team
|
**System Information (please complete the following information):**
- Windows OS: Windows-10-Enterprise-21H1
- Microsoft Visual Studio Enterprise 2022 Preview: 17.0.0 Preview 4.1
- ML.Net Model Builder: 16.7.7.2150601 (Main Branch)
- Notebook Editor [Preview]: 0.2.0.2150901
- Microsoft.dotnet-interactive: 1.0.250604
**Describe the bug**
- On which step of the process did you run into an issue: not see the notebook file under .Net472 project in VS2022 after added from Consume page.
- Clear description of the problem: Can see that the .ipynb file is added successfully from project folder, but not display in VS2022.
**To Reproduce**
Steps to reproduce the behavior:
1. Select Create a new project from the Visual Studio 2022 start window;
2. Choose the C# Console App (.NET Framework) project template with .Net Framework472;
3. Add model builder by right click on the project;
4. Choose Data classification to complete training;
5. Navigate to Consume page and add one Notebook to solution;
6. See that the notebook file is not displayed under the solution.
**Expected behavior**
The notebook file should be displayed under .Net472 project after added from Consume page.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Additional context**
Can display the Notebook file adding from: right click the .net472 project>Add>New Item...>Notebook.

|
1.0
|
The notebook file is not displayed under .Net472 project after added from Consume page. - **System Information (please complete the following information):**
- Windows OS: Windows-10-Enterprise-21H1
- Microsoft Visual Studio Enterprise 2022 Preview: 17.0.0 Preview 4.1
- ML.Net Model Builder: 16.7.7.2150601 (Main Branch)
- Notebook Editor [Preview]: 0.2.0.2150901
- Microsoft.dotnet-interactive: 1.0.250604
**Describe the bug**
- On which step of the process did you run into an issue: not see the notebook file under .Net472 project in VS2022 after added from Consume page.
- Clear description of the problem: Can see that the .ipynb file is added successfully from project folder, but not display in VS2022.
**To Reproduce**
Steps to reproduce the behavior:
1. Select Create a new project from the Visual Studio 2022 start window;
2. Choose the C# Console App (.NET Framework) project template with .Net Framework472;
3. Add model builder by right click on the project;
4. Choose Data classification to complete training;
5. Navigate to Consume page and add one Notebook to solution;
6. See that the notebook file is not displayed under the solution.
**Expected behavior**
The notebook file should be displayed under .Net472 project after added from Consume page.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Additional context**
Can display the Notebook file adding from: right click the .net472 project>Add>New Item...>Notebook.

|
test
|
the notebook file is not displayed under project after added from consume page system information please complete the following information windows os windows enterprise microsoft visual studio enterprise preview preview ml net model builder main branch notebook editor microsoft dotnet interactive describe the bug on which step of the process did you run into an issue not see the notebook file under project in after added from consume page clear description of the problem can see that the ipynb file is added successfully from project folder but not display in to reproduce steps to reproduce the behavior select create a new project from the visual studio start window choose the c console app net framework project template with net add model builder by right click on the project choose data classification to complete training navigate to consume page and add one notebook to solution see that the notebook file is not displayed under the solution expected behavior the notebook file should be displayed under project after added from consume page screenshots if applicable add screenshots to help explain your problem additional context can display the notebook file adding from right click the project add new item notebook
| 1
|
72,708
| 31,769,020,441
|
IssuesEvent
|
2023-09-12 10:32:16
|
gauravrs18/issue_onboarding
|
https://api.github.com/repos/gauravrs18/issue_onboarding
|
closed
|
dev-angular-code-account-services-new-connection-component-approve-component
-consumer-details-component
-connect-component
-approve-button-component
|
CX-account-services
|
dev-angular-code-account-services-new-connection-component-approve-component
-consumer-details-component
-connect-component
-approve-button-component
|
1.0
|
dev-angular-code-account-services-new-connection-component-approve-component
-consumer-details-component
-connect-component
-approve-button-component - dev-angular-code-account-services-new-connection-component-approve-component
-consumer-details-component
-connect-component
-approve-button-component
|
non_test
|
dev angular code account services new connection component approve component consumer details component connect component approve button component dev angular code account services new connection component approve component consumer details component connect component approve button component
| 0
|
47,563
| 5,902,428,297
|
IssuesEvent
|
2017-05-19 01:20:41
|
AffiliateWP/AffiliateWP
|
https://api.github.com/repos/AffiliateWP/AffiliateWP
|
closed
|
Don't apply the GMT offset when adding affiliates with a specific registration date
|
bug Unit Tests
|
In #2218 we added the ability to actually set the `date_registered` value when adding new affiliates.
After some discussion related to #338, we've decided not to enforce applying the `gmt_offset` value against specified dates when adding affiliates. The thinking is that if you're specifying a date, you're expecting that date on the other end and applying a GMT offset might adjust the date/time in unexpected ways.
|
1.0
|
Don't apply the GMT offset when adding affiliates with a specific registration date - In #2218 we added the ability to actually set the `date_registered` value when adding new affiliates.
After some discussion related to #338, we've decided not to enforce applying the `gmt_offset` value against specified dates when adding affiliates. The thinking is that if you're specifying a date, you're expecting that date on the other end and applying a GMT offset might adjust the date/time in unexpected ways.
|
test
|
don t apply the gmt offset when adding affiliates with a specific registration date in we added the ability to actually set the date registered value when adding new affiliates after some discussion related to we ve decided not to enforce applying the gmt offset value against specified dates when adding affiliates the thinking is that if you re specifying a date you re expecting that date on the other end and applying a gmt offset might adjust the date time in unexpected ways
| 1
|
31,593
| 25,915,088,214
|
IssuesEvent
|
2022-12-15 16:43:28
|
SonarSource/sonarlint-visualstudio
|
https://api.github.com/repos/SonarSource/sonarlint-visualstudio
|
opened
|
[Infra] Add automated release-time check that SBOM files are published as part of the release
|
Infrastructure
|
### Description
i.e. an automated check to prevent bugs like #3469
|
1.0
|
[Infra] Add automated release-time check that SBOM files are published as part of the release - ### Description
i.e. an automated check to prevent bugs like #3469
|
non_test
|
add automated release time check that sbom files are published as part of the release description i e an automated check to prevent bugs like
| 0
|
42,744
| 5,532,595,932
|
IssuesEvent
|
2017-03-21 11:02:24
|
mozilla/addons-server
|
https://api.github.com/repos/mozilla/addons-server
|
closed
|
EULA/PP links are not aligned for incompatible with your platform add-ons
|
component: redesign
|
Steps to reproduce:
1. Load details page for an add-on incompatible with your platform https://addons-dev.allizom.org/en-US/firefox/addon/fireftp/?src=userprofile with Privacy Policy or EULA links
Expected results:
There are no display or layout issues.
Actual results:
The links are not aligned with the rest of the text.
Notes/Issues:
Verified on FF50(Win 7). Issue is reproducing on AMO-stage and -dev.
Screenshot for this issue:

|
1.0
|
EULA/PP links are not aligned for incompatible with your platform add-ons - Steps to reproduce:
1. Load details page for an add-on incompatible with your platform https://addons-dev.allizom.org/en-US/firefox/addon/fireftp/?src=userprofile with Privacy Policy or EULA links
Expected results:
There are no display or layout issues.
Actual results:
The links are not aligned with the rest of the text.
Notes/Issues:
Verified on FF50(Win 7). Issue is reproducing on AMO-stage and -dev.
Screenshot for this issue:

|
non_test
|
eula pp links are not aligned for incompatible with your platform add ons steps to reproduce load details page for an add on incompatible with your platform with privacy policy or eula links expected results there are no display or layout issues actual results the links are not aligned with the rest of the text notes issues verified on win issue is reproducing on amo stage and dev screenshot for this issue
| 0
|
167,531
| 13,033,679,210
|
IssuesEvent
|
2020-07-28 07:24:27
|
SAP/ui5-webcomponents
|
https://api.github.com/repos/SAP/ui5-webcomponents
|
closed
|
Carousel: can't click under navigation placeholder
|
RC8 Testing
|
**Describe the bug**
Navigation arrow placeholder blocks click interaction in some cases (arrows-placement="Content").
**To reproduce**
Steps to reproduce the behavior:
1. Go to https://sap.github.io/ui5-webcomponents/master/playground/components/Carousel/
2. Go to the sample "Carousel With Multiple Items per Page"
3. Try to click around navigation arrows.
**Expected behavior**
Users should be able to click under the placeholder
|
1.0
|
Carousel: can't click under navigation placeholder - **Describe the bug**
Navigation arrow placeholder blocks click interaction in some cases (arrows-placement="Content").
**To reproduce**
Steps to reproduce the behavior:
1. Go to https://sap.github.io/ui5-webcomponents/master/playground/components/Carousel/
2. Go to the sample "Carousel With Multiple Items per Page"
3. Try to click around navigation arrows.
**Expected behavior**
Users should be able to click under the placeholder
|
test
|
carousel can t click under navigation placeholder describe the bug navigation arrow placeholder blocks click interaction in some cases arrows placement content to reproduce steps to reproduce the behavior go to go to the sample carousel with multiple items per page try to click around navigation arrows expected behavior users should be able to click under the placeholder
| 1
|
293,410
| 25,289,959,949
|
IssuesEvent
|
2022-11-16 22:58:04
|
rancher/qa-tasks
|
https://api.github.com/repos/rancher/qa-tasks
|
closed
|
Hosted cluster Additional Test cases
|
team/area2 [zube]: QA Next up area/automation-test
|
For AKS/GKE cluster
- [ ] Edit and add node pools
- [ ] Edit and upgrade k8s
- [ ] Import clusters + Edit and make changes
- [ ] Deploy Private clusters
|
1.0
|
Hosted cluster Additional Test cases - For AKS/GKE cluster
- [ ] Edit and add node pools
- [ ] Edit and upgrade k8s
- [ ] Import clusters + Edit and make changes
- [ ] Deploy Private clusters
|
test
|
hosted cluster additional test cases for aks gke cluster edit and add node pools edit and upgrade import clusters edit and make changes deploy private clusters
| 1
|
521,394
| 15,109,008,799
|
IssuesEvent
|
2021-02-08 17:18:13
|
pokt-network/pocket-dashboard
|
https://api.github.com/repos/pokt-network/pocket-dashboard
|
closed
|
Feature Request - Next Block Countdown
|
Backlog Low Priority
|
I think it would be helpful to have a 5th block added to the network information page showing an estimated "block countdown" for when the next block would be proposed. similar to how they have it for [OAN(aion)](https://mastery.theoan.com/#/dashboard)
|
1.0
|
Feature Request - Next Block Countdown - I think it would be helpful to have a 5th block added to the network information page showing an estimated "block countdown" for when the next block would be proposed. similar to how they have it for [OAN(aion)](https://mastery.theoan.com/#/dashboard)
|
non_test
|
feature request next block countdown i think it would be helpful to have a block added to the network information page showing an estimated block countdown for when the next block would be proposed similar to how they have it for
| 0
|
174,517
| 13,493,294,058
|
IssuesEvent
|
2020-09-11 19:25:09
|
OpenLiberty/open-liberty
|
https://api.github.com/repos/OpenLiberty/open-liberty
|
opened
|
Test Failure: com.ibm.ws.concurrent.persistent.fat.initial.polling.InitialPollingTest.testRestartWithFourTasks
|
team:Zombie Apocalypse test bug
|
Test Failure (20180220-2343): com.ibm.ws.concurrent.persistent.fat.initial.polling.InitialPollingTest.testRestartWithFourTasks
```
junit.framework.AssertionFailedError: 2018-02-21-08:02:14:613 Missing success message in output. PersistentExecutorsTestServlet is starting testTasksAreRunning<br>
<pre>ERROR in testTasksAreRunning:
java.lang.Exception: Task did not complete any executions within alotted interval. TaskStatus[1]@cc6f0b90 SCHEDULED,UNATTEMPTED 2018/02/21-08:01:06.734-UTC[1519200066734]
at web.PersistentExecutorsTestServlet.testTasksAreRunning(PersistentExecutorsTestServlet.java:197)
at web.PersistentExecutorsTestServlet.doGet(PersistentExecutorsTestServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1255)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:743)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:440)
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1154)
at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:4962)
at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.handleRequest(DynamicVirtualHost.java:314)
at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:995)
at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1009)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:412)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:371)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:530)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:464)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:329)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.ready(HttpInboundLink.java:300)
at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:165)
at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:74)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:501)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:571)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:926)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1015)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.lang.Thread.run(Thread.java:785)
</pre>
at com.ibm.ws.concurrent.persistent.fat.initial.polling.InitialPollingTest.runInServlet(InitialPollingTest.java:84)
at com.ibm.ws.concurrent.persistent.fat.initial.polling.InitialPollingTest.testRestartWithFourTasks(InitialPollingTest.java:178)
at componenttest.custom.junit.runner.FATRunner$1.evaluate(FATRunner.java:176)
at componenttest.custom.junit.runner.FATRunner$2.evaluate(FATRunner.java:314)
at componenttest.custom.junit.runner.FATRunner.run(FATRunner.java:153)
```
From the logs, the tasks are sometimes completed a bit slower than the test case has tolerance to wait for,
```
[7/28/20 8:11:23:612 GMT] 00000028 id=d360823d com.ibm.ws.concurrent.persistent.internal.InvokerTask > run[4] Entry
com.ibm.ws.concurrent.persistent.internal.PersistentExecutorImpl@dc614c18
[7/28/20 8:11:23:612 GMT] 0000002c id=970979c0 com.ibm.ws.concurrent.persistent.internal.InvokerTask > run[2] Entry
com.ibm.ws.concurrent.persistent.internal.PersistentExecutorImpl@dc614c18
[7/28/20 8:11:23:612 GMT] 00000029 id=2afc1ca2 com.ibm.ws.concurrent.persistent.internal.InvokerTask > run[3] Entry
com.ibm.ws.concurrent.persistent.internal.PersistentExecutorImpl@dc614c18
[7/28/20 8:11:23:612 GMT] 0000002b id=e0371265 com.ibm.ws.concurrent.persistent.internal.InvokerTask > run[1] Entry
com.ibm.ws.concurrent.persistent.internal.PersistentExecutorImpl@dc614c18
[7/28/20 8:11:35:367 GMT] 00000028 SystemOut O Task 4 execution attempt #1
[7/28/20 8:11:35:373 GMT] 00000029 SystemOut O Task 3 execution attempt #1
[7/28/20 8:11:39:220 GMT] 0000002b SystemOut O Task 1 execution attempt #1
[7/28/20 8:11:39:222 GMT] 0000002c SystemOut O Task 2 execution attempt #1
[7/28/20 8:11:39:225 GMT] 00000027 id=dc614c18 ibm.ws.concurrent.persistent.internal.PersistentExecutorImpl < getStatus Exit
TaskStatus[1]@356163d4 SCHEDULED,UNATTEMPTED 2020/07/28-08:10:50.043-GMT[1595923850043]
[7/28/20 8:11:39:225 GMT] 00000027 id=356163d4 com.ibm.ws.concurrent.persistent.internal.TaskStatusImpl > hasResult Entry
[7/28/20 8:11:39:225 GMT] 00000027 id=356163d4 com.ibm.ws.concurrent.persistent.internal.TaskStatusImpl < hasResult Exit
false
[7/28/20 8:11:39:226 GMT] 00000027 id=00000000 SystemOut O <----- testTasksAreRunning(invoked by testRestartWithFourTasks-5) failed:
[7/28/20 8:11:39:226 GMT] 00000027 id=00000000 SystemOut O java.lang.Exception: Task did not complete any executions within alotted interval. TaskStatus[1]@356163d4 SCHEDULED,UNATTEMPTED 2020/07/28-08:10:50.043-GMT[1595923850043]
[7/28/20 8:11:39:447 GMT] 0000002b id=35750f13 com.ibm.ws.concurrent.persistent.db.DatabaseTaskStore < persist Exit
true
[7/28/20 8:11:39:482 GMT] 00000029 id=35750f13 com.ibm.ws.concurrent.persistent.db.DatabaseTaskStore < persist Exit
true
[7/28/20 8:11:39:488 GMT] 00000028 id=35750f13 com.ibm.ws.concurrent.persistent.db.DatabaseTaskStore < persist Exit
true
[7/28/20 8:11:39:489 GMT] 0000002c id=35750f13 com.ibm.ws.concurrent.persistent.db.DatabaseTaskStore < persist Exit
true
[7/28/20 8:11:39:490 GMT] 0000002b id=e0371265 com.ibm.ws.concurrent.persistent.internal.InvokerTask < run[1] Exit
null
```
The test case needs to be updated to allow for the possibility that the tasks will sometimes take longer. I'll increase it to the same general maximum interval that is used for other tests.
|
1.0
|
Test Failure: com.ibm.ws.concurrent.persistent.fat.initial.polling.InitialPollingTest.testRestartWithFourTasks - Test Failure (20180220-2343): com.ibm.ws.concurrent.persistent.fat.initial.polling.InitialPollingTest.testRestartWithFourTasks
```
junit.framework.AssertionFailedError: 2018-02-21-08:02:14:613 Missing success message in output. PersistentExecutorsTestServlet is starting testTasksAreRunning<br>
<pre>ERROR in testTasksAreRunning:
java.lang.Exception: Task did not complete any executions within alotted interval. TaskStatus[1]@cc6f0b90 SCHEDULED,UNATTEMPTED 2018/02/21-08:01:06.734-UTC[1519200066734]
at web.PersistentExecutorsTestServlet.testTasksAreRunning(PersistentExecutorsTestServlet.java:197)
at web.PersistentExecutorsTestServlet.doGet(PersistentExecutorsTestServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1255)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:743)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:440)
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1154)
at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:4962)
at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.handleRequest(DynamicVirtualHost.java:314)
at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:995)
at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1009)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:412)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:371)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:530)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:464)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:329)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.ready(HttpInboundLink.java:300)
at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:165)
at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:74)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:501)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:571)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:926)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1015)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.lang.Thread.run(Thread.java:785)
</pre>
at com.ibm.ws.concurrent.persistent.fat.initial.polling.InitialPollingTest.runInServlet(InitialPollingTest.java:84)
at com.ibm.ws.concurrent.persistent.fat.initial.polling.InitialPollingTest.testRestartWithFourTasks(InitialPollingTest.java:178)
at componenttest.custom.junit.runner.FATRunner$1.evaluate(FATRunner.java:176)
at componenttest.custom.junit.runner.FATRunner$2.evaluate(FATRunner.java:314)
at componenttest.custom.junit.runner.FATRunner.run(FATRunner.java:153)
```
From the logs, the tasks are sometimes completed a bit slower than the test case has tolerance to wait for,
```
[7/28/20 8:11:23:612 GMT] 00000028 id=d360823d com.ibm.ws.concurrent.persistent.internal.InvokerTask > run[4] Entry
com.ibm.ws.concurrent.persistent.internal.PersistentExecutorImpl@dc614c18
[7/28/20 8:11:23:612 GMT] 0000002c id=970979c0 com.ibm.ws.concurrent.persistent.internal.InvokerTask > run[2] Entry
com.ibm.ws.concurrent.persistent.internal.PersistentExecutorImpl@dc614c18
[7/28/20 8:11:23:612 GMT] 00000029 id=2afc1ca2 com.ibm.ws.concurrent.persistent.internal.InvokerTask > run[3] Entry
com.ibm.ws.concurrent.persistent.internal.PersistentExecutorImpl@dc614c18
[7/28/20 8:11:23:612 GMT] 0000002b id=e0371265 com.ibm.ws.concurrent.persistent.internal.InvokerTask > run[1] Entry
com.ibm.ws.concurrent.persistent.internal.PersistentExecutorImpl@dc614c18
[7/28/20 8:11:35:367 GMT] 00000028 SystemOut O Task 4 execution attempt #1
[7/28/20 8:11:35:373 GMT] 00000029 SystemOut O Task 3 execution attempt #1
[7/28/20 8:11:39:220 GMT] 0000002b SystemOut O Task 1 execution attempt #1
[7/28/20 8:11:39:222 GMT] 0000002c SystemOut O Task 2 execution attempt #1
[7/28/20 8:11:39:225 GMT] 00000027 id=dc614c18 ibm.ws.concurrent.persistent.internal.PersistentExecutorImpl < getStatus Exit
TaskStatus[1]@356163d4 SCHEDULED,UNATTEMPTED 2020/07/28-08:10:50.043-GMT[1595923850043]
[7/28/20 8:11:39:225 GMT] 00000027 id=356163d4 com.ibm.ws.concurrent.persistent.internal.TaskStatusImpl > hasResult Entry
[7/28/20 8:11:39:225 GMT] 00000027 id=356163d4 com.ibm.ws.concurrent.persistent.internal.TaskStatusImpl < hasResult Exit
false
[7/28/20 8:11:39:226 GMT] 00000027 id=00000000 SystemOut O <----- testTasksAreRunning(invoked by testRestartWithFourTasks-5) failed:
[7/28/20 8:11:39:226 GMT] 00000027 id=00000000 SystemOut O java.lang.Exception: Task did not complete any executions within alotted interval. TaskStatus[1]@356163d4 SCHEDULED,UNATTEMPTED 2020/07/28-08:10:50.043-GMT[1595923850043]
[7/28/20 8:11:39:447 GMT] 0000002b id=35750f13 com.ibm.ws.concurrent.persistent.db.DatabaseTaskStore < persist Exit
true
[7/28/20 8:11:39:482 GMT] 00000029 id=35750f13 com.ibm.ws.concurrent.persistent.db.DatabaseTaskStore < persist Exit
true
[7/28/20 8:11:39:488 GMT] 00000028 id=35750f13 com.ibm.ws.concurrent.persistent.db.DatabaseTaskStore < persist Exit
true
[7/28/20 8:11:39:489 GMT] 0000002c id=35750f13 com.ibm.ws.concurrent.persistent.db.DatabaseTaskStore < persist Exit
true
[7/28/20 8:11:39:490 GMT] 0000002b id=e0371265 com.ibm.ws.concurrent.persistent.internal.InvokerTask < run[1] Exit
null
```
The test case needs to be updated to allow for the possibility that the tasks will sometimes take longer. I'll increase it to the same general maximum interval that is used for other tests.
|
test
|
test failure com ibm ws concurrent persistent fat initial polling initialpollingtest testrestartwithfourtasks test failure com ibm ws concurrent persistent fat initial polling initialpollingtest testrestartwithfourtasks junit framework assertionfailederror missing success message in output persistentexecutorstestservlet is starting testtasksarerunning error in testtasksarerunning java lang exception task did not complete any executions within alotted interval taskstatus scheduled unattempted utc at web persistentexecutorstestservlet testtasksarerunning persistentexecutorstestservlet java at web persistentexecutorstestservlet doget persistentexecutorstestservlet java at javax servlet http httpservlet service httpservlet java at javax servlet http httpservlet service httpservlet java at com ibm ws webcontainer servlet servletwrapper service servletwrapper java at com ibm ws webcontainer servlet servletwrapper handlerequest servletwrapper java at com ibm ws webcontainer servlet servletwrapper handlerequest servletwrapper java at com ibm ws webcontainer filter webappfiltermanager invokefilters webappfiltermanager java at com ibm ws webcontainer webapp webapp handlerequest webapp java at com ibm ws webcontainer osgi dynamicvirtualhost handlerequest dynamicvirtualhost java at com ibm ws webcontainer webcontainer handlerequest webcontainer java at com ibm ws webcontainer osgi dynamicvirtualhost run dynamicvirtualhost java at com ibm ws http dispatcher internal channel httpdispatcherlink taskwrapper run httpdispatcherlink java at com ibm ws http dispatcher internal channel httpdispatcherlink wraphandlerandexecute httpdispatcherlink java at com ibm ws http dispatcher internal channel httpdispatcherlink ready httpdispatcherlink java at com ibm ws http channel internal inbound httpinboundlink handlediscrimination httpinboundlink java at com ibm ws http channel internal inbound httpinboundlink handlenewrequest httpinboundlink java at com ibm ws http channel internal inbound 
httpinboundlink processrequest httpinboundlink java at com ibm ws http channel internal inbound httpinboundlink ready httpinboundlink java at com ibm ws tcpchannel internal newconnectioninitialreadcallback sendtodiscriminators newconnectioninitialreadcallback java at com ibm ws tcpchannel internal newconnectioninitialreadcallback complete newconnectioninitialreadcallback java at com ibm ws tcpchannel internal workqueuemanager requestcomplete workqueuemanager java at com ibm ws tcpchannel internal workqueuemanager attemptio workqueuemanager java at com ibm ws tcpchannel internal workqueuemanager workerrun workqueuemanager java at com ibm ws tcpchannel internal workqueuemanager worker run workqueuemanager java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java at com ibm ws concurrent persistent fat initial polling initialpollingtest runinservlet initialpollingtest java at com ibm ws concurrent persistent fat initial polling initialpollingtest testrestartwithfourtasks initialpollingtest java at componenttest custom junit runner fatrunner evaluate fatrunner java at componenttest custom junit runner fatrunner evaluate fatrunner java at componenttest custom junit runner fatrunner run fatrunner java from the logs the tasks are sometimes completed a bit slower than the test case has tolerance to wait for id com ibm ws concurrent persistent internal invokertask run entry com ibm ws concurrent persistent internal persistentexecutorimpl id com ibm ws concurrent persistent internal invokertask run entry com ibm ws concurrent persistent internal persistentexecutorimpl id com ibm ws concurrent persistent internal invokertask run entry com ibm ws concurrent persistent internal persistentexecutorimpl id com ibm ws concurrent persistent internal invokertask run entry com ibm ws concurrent persistent internal persistentexecutorimpl systemout o 
task execution attempt systemout o task execution attempt systemout o task execution attempt systemout o task execution attempt id ibm ws concurrent persistent internal persistentexecutorimpl getstatus exit taskstatus scheduled unattempted gmt id com ibm ws concurrent persistent internal taskstatusimpl hasresult entry id com ibm ws concurrent persistent internal taskstatusimpl hasresult exit false id systemout o testtasksarerunning invoked by testrestartwithfourtasks failed id systemout o java lang exception task did not complete any executions within alotted interval taskstatus scheduled unattempted gmt id com ibm ws concurrent persistent db databasetaskstore persist exit true id com ibm ws concurrent persistent db databasetaskstore persist exit true id com ibm ws concurrent persistent db databasetaskstore persist exit true id com ibm ws concurrent persistent db databasetaskstore persist exit true id com ibm ws concurrent persistent internal invokertask run exit null the test case needs to be updated to allow for the possibility that the tasks will sometimes take longer i ll increase it to the same general maximum interval that is used for other tests
| 1
|
441,390
| 12,717,016,883
|
IssuesEvent
|
2020-06-24 03:46:26
|
woocommerce/woocommerce-gateway-stripe
|
https://api.github.com/repos/woocommerce/woocommerce-gateway-stripe
|
closed
|
Apple Pay charging incorrect amount when there's `'` or `"` in variation attribute name
|
LPMs/APMs Priority: Low [Type] Bug
|
**Affected tickets**
2666923-zen
**Describe the bug**
When a product has multiple variations at different prices, and the variation attribute name has `'` or `"` characters in it, Apple Pay will always charge the price of the cheapest variation, no matter which variation I've selected. This problem doesn't happen for products that only have alphanumeric characters in variation names.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a variable product with one attribute and at least 2 values that have `'` or `"` in their name, for example `3'4"`, `3'6"`:

Screenshot: https://d.pr/i/TJDR9i
2. Create variations out of this attribute and assign different price to each.
3. Go to the product page and select a variation that is not the cheapest one:

Screenshot: https://d.pr/i/itwtIF
4. Click the Apple Pay button and observe we're instead being charged the price of the cheapest variation, even if the selected one is different:

Screenshot: https://d.pr/i/M7M5YF
**Expected behavior**
I am charged the price of the variation I've selected.
**Screenshots**
See above
**Environment (please complete the following information):**
- WordPress Version: 5.3.2
- WooCommerce Version 3.9.0
- Stripe Plugin Version 4.3.1
- Browser [e.g. chrome, safari] and Version: Safari 13
- Any other plugins installed
|
1.0
|
Apple Pay charging incorrect amount when there's `'` or `"` in variation attribute name - **Affected tickets**
2666923-zen
**Describe the bug**
When a product has multiple variations at different prices, and the variation attribute name has `'` or `"` characters in it, Apple Pay will always charge the price of the cheapest variation, no matter which variation I've selected. This problem doesn't happen for products that only have alphanumeric characters in variation names.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a variable product with one attribute and at least 2 values that have `'` or `"` in their name, for example `3'4"`, `3'6"`:

Screenshot: https://d.pr/i/TJDR9i
2. Create variations out of this attribute and assign different price to each.
3. Go to the product page and select a variation that is not the cheapest one:

Screenshot: https://d.pr/i/itwtIF
4. Click the Apple Pay button and observe we're instead being charged the price of the cheapest variation, even if the selected one is different:

Screenshot: https://d.pr/i/M7M5YF
**Expected behavior**
I am charged the price of the variation I've selected.
**Screenshots**
See above
**Environment (please complete the following information):**
- WordPress Version: 5.3.2
- WooCommerce Version 3.9.0
- Stripe Plugin Version 4.3.1
- Browser [e.g. chrome, safari] and Version: Safari 13
- Any other plugins installed
|
non_test
|
apple pay charging incorrect amount when there s or in variation attribute name affected tickets zen describe the bug when a product has multiple variations at different prices and the variation attribute name has or characters in it apple pay will always charge the price of the cheapest variation no matter which variation i ve selected this problem doesn t happen for products that only have alphanumeric characters in variation names to reproduce steps to reproduce the behavior create a variable product with one attribute and at least values that have or in their name for example screenshot create variations out of this attribute and assign different price to each go to the product page and select a variation that is not the cheapest one screenshot click the apple pay button and observe we re instead being charged the price of the cheapest variation even if the selected one is different screenshot expected behavior i am charged the price of the variation i ve selected screenshots see above environment please complete the following information wordpress version woocommerce version stripe plugin version browser and version safari any other plugins installed
| 0
|
50,292
| 6,346,337,306
|
IssuesEvent
|
2017-07-28 01:42:04
|
httpwg/http-extensions
|
https://api.github.com/repos/httpwg/http-extensions
|
closed
|
Enabling O(1) removal from digest
|
cache-digest design
|
Current spec is using Golomb-coded sets as the algorithm to create digests.
While they show great space-efficiency, Golomb-coded sets do not enable O(1) removal from the digest, which means from a browser implementation perspective, the browser would have to calculate the hash for each host upon connection creation.
That poses a couple of issues from an implementation perspective:
* Calculating the hash on each connection establishment may be expensive. That part seems inherent to the algorithm and not likely to be optimized away.
* Calculating the hash requires per-host indexing. That part is just a limitation of many current cache implementations.
A cache digest algorithm that enables O(1) removal (as well as addition) to the digest would enable us to move away from those limitations:
* Browsers can calculate a per-host digest once, then keep updating it as resources are added to the cache as well as when resources are removed from the cache. No need for per-host indexing.
- In order to do that, browsers would need to persist digests along with the cache
* Upon connection establishment, the browser can just send the ready-made digest to the server. Win!
During the HTTPWS, counting bloom filters were mentioned as an O(1) removal algorithm, but they are extremely inefficient when it comes to space. (~4 times bigger than bloom filters)
Turns out, [Cuckoo filters](https://www.cs.cmu.edu/~binfan/papers/login_cuckoofilter.pdf) enable O(1) removal while being more space efficient than bloom filters. While they are slightly bigger than Golomb-coded sets based digests, the cheaper runtime costs can make up for that deficiency.
/cc @kazuho @mnot @cbentzel @mcmanus
|
1.0
|
Enabling O(1) removal from digest - Current spec is using Golomb-coded sets as the algorithm to create digests.
While they show great space-efficiency, Golomb-coded sets do not enable O(1) removal from the digest, which means from a browser implementation perspective, the browser would have to calculate the hash for each host upon connection creation.
That poses a couple of issues from an implementation perspective:
* Calculating the hash on each connection establishment may be expensive. That part seems inherent to the algorithm and not likely to be optimized away.
* Calculating the hash requires per-host indexing. That part is just a limitation of many current cache implementations.
A cache digest algorithm that enables O(1) removal (as well as addition) to the digest would enable us to move away from those limitations:
* Browsers can calculate a per-host digest once, then keep updating it as resources are added to the cache as well as when resources are removed from the cache. No need for per-host indexing.
- In order to do that, browsers would need to persist digests along with the cache
* Upon connection establishment, the browser can just send the ready-made digest to the server. Win!
During the HTTPWS, counting bloom filters were mentioned as an O(1) removal algorithm, but they are extremely inefficient when it comes to space. (~4 times bigger than bloom filters)
Turns out, [Cuckoo filters](https://www.cs.cmu.edu/~binfan/papers/login_cuckoofilter.pdf) enable O(1) removal while being more space efficient than bloom filters. While they are slightly bigger than Golomb-coded sets based digests, the cheaper runtime costs can make up for that deficiency.
/cc @kazuho @mnot @cbentzel @mcmanus
|
non_test
|
enabling o removal from digest current spec is using golomb coded sets as the algorithm to create digests while they show great space efficiency golomb coded sets do not enable o removal from the digest which means from a browser implementation perspective the browser would have to calculate the hash for each host upon connection creation that poses a couple of issues from an implementation perspective calculating the hash on each connection establishment may be expensive that part seems inherent to the algorithm and not likely to be optimized away calculating the hash requires per host indexing that part is just a limitation of many current cache implementations a cache digest algorithm that enables o removal as well as addition to the digest would enable us to move away from those limitations browsers can calculate a per host digest once then keep updating it as resources are added to the cache as well as when resources are removed from the cache no need for per host indexing in order to do that browsers would need to persist digests along with the cache upon connection establishment the browser can just send the ready made digest to the server win during the httpws counting bloom filters were mentioned as an o removal algorithm but they are extremely inefficient when it comes to space times bigger than bloom filters turns out enable o removal while being more space efficient than bloom filters while they are slightly bigger than golomb coded sets based digests the cheaper runtime costs can make up for that deficiency cc kazuho mnot cbentzel mcmanus
| 0
|
819,236
| 30,724,802,055
|
IssuesEvent
|
2023-07-27 18:43:52
|
spiffe/spire
|
https://api.github.com/repos/spiffe/spire
|
opened
|
Graduate localauthority api into main
|
priority/backlog
|
Cherry pick local authority changes on next branch into main branch on spire-api-sdk repository
|
1.0
|
Graduate localauthority api into main - Cherry pick local authority changes on next branch into main branch on spire-api-sdk repository
|
non_test
|
graduate localauthority api into main cherry pick local authority changes on next branch into main branch on spire api sdk repository
| 0
|
80,782
| 3,574,390,169
|
IssuesEvent
|
2016-01-27 11:35:14
|
salesagility/SuiteCRM
|
https://api.github.com/repos/salesagility/SuiteCRM
|
closed
|
Studio layout view issue - Changing panel to tab
|
bug High Priority
|
When I try to change the display type dropdown to change a panel to a tab on the layout view in Studio I can't click to select it. However, if I use the arrow keys and enter I can change the value (maybe a javascript bug?). This occurs with Firefox (v40) and SuiteCRM 7.3
|
1.0
|
Studio layout view issue - Changing panel to tab - When I try to change the display type dropdown to change a panel to a tab on the layout view in Studio I can't click to select it. However, if I use the arrow keys and enter I can change the value (maybe a javascript bug?). This occurs with Firefox (v40) and SuiteCRM 7.3
|
non_test
|
studio layout view issue changing panel to tab when i try to change the display type dropdown to change a panel to a tab on the layout view in studio i can t click to select it however if i use the arrow keys and enter i can change the value maybe a javascript bug this occurs with firefox and suitecrm
| 0
|
195,769
| 22,360,105,997
|
IssuesEvent
|
2022-06-15 19:34:38
|
videojs/videojs-overlay
|
https://api.github.com/repos/videojs/videojs-overlay
|
closed
|
CVE-2018-16487 (Medium) detected in lodash-4.17.10.tgz
|
security vulnerability
|
## CVE-2018-16487 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.10.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.10.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.10.tgz</a></p>
<p>
Dependency Hierarchy:
- conventional-changelog-cli-2.0.5.tgz (Root Library)
- :x: **lodash-4.17.10.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/brightcove/videojs-overlay/commit/fbdf8372a96f7f26965d33fedec5089038e609dc">fbdf8372a96f7f26965d33fedec5089038e609dc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A prototype pollution vulnerability was found in lodash <4.17.11 where the functions merge, mergeWith, and defaultsDeep can be tricked into adding or modifying properties of Object.prototype.
<p>Publish Date: 2019-02-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16487>CVE-2018-16487</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16487">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16487</a></p>
<p>Release Date: 2019-02-01</p>
<p>Fix Resolution (lodash): 4.17.11</p>
<p>Direct dependency fix Resolution (conventional-changelog-cli): 2.0.7</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"conventional-changelog-cli","packageVersion":"2.0.5","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"conventional-changelog-cli:2.0.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.0.7","isBinary":true}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-16487","vulnerabilityDetails":"A prototype pollution vulnerability was found in lodash \u003c4.17.11 where the functions merge, mergeWith, and defaultsDeep can be tricked into adding or modifying properties of Object.prototype.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16487","cvss3Severity":"medium","cvss3Score":"5.6","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2018-16487 (Medium) detected in lodash-4.17.10.tgz - ## CVE-2018-16487 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.10.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.10.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.10.tgz</a></p>
<p>
Dependency Hierarchy:
- conventional-changelog-cli-2.0.5.tgz (Root Library)
- :x: **lodash-4.17.10.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/brightcove/videojs-overlay/commit/fbdf8372a96f7f26965d33fedec5089038e609dc">fbdf8372a96f7f26965d33fedec5089038e609dc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A prototype pollution vulnerability was found in lodash <4.17.11 where the functions merge, mergeWith, and defaultsDeep can be tricked into adding or modifying properties of Object.prototype.
<p>Publish Date: 2019-02-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16487>CVE-2018-16487</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16487">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16487</a></p>
<p>Release Date: 2019-02-01</p>
<p>Fix Resolution (lodash): 4.17.11</p>
<p>Direct dependency fix Resolution (conventional-changelog-cli): 2.0.7</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"conventional-changelog-cli","packageVersion":"2.0.5","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"conventional-changelog-cli:2.0.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.0.7","isBinary":true}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-16487","vulnerabilityDetails":"A prototype pollution vulnerability was found in lodash \u003c4.17.11 where the functions merge, mergeWith, and defaultsDeep can be tricked into adding or modifying properties of Object.prototype.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16487","cvss3Severity":"medium","cvss3Score":"5.6","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href dependency hierarchy conventional changelog cli tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details a prototype pollution vulnerability was found in lodash where the functions merge mergewith and defaultsdeep can be tricked into adding or modifying properties of object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash direct dependency fix resolution conventional changelog cli isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree conventional changelog cli isminimumfixversionavailable true minimumfixversion isbinary true basebranches vulnerabilityidentifier cve vulnerabilitydetails a prototype pollution vulnerability was found in lodash where the functions merge mergewith and defaultsdeep can be tricked into adding or modifying properties of object prototype vulnerabilityurl
| 0
|
219,640
| 17,103,589,643
|
IssuesEvent
|
2021-07-09 14:33:27
|
mozilla-mobile/focus-android
|
https://api.github.com/repos/mozilla-mobile/focus-android
|
closed
|
Intermittent UI test failures - toolbar/menu related
|
eng:intermittent-test eng:ui-test
|
### Firebase Test Run:
11 failures in UI tests: https://console.firebase.google.com/project/moz-fx-mobile-firebase-testlab/testlab/histories/bh.2b4ca2c7fdb2f0f0/matrices/6557861697145683071
Due to this commit: https://github.com/mozilla-mobile/focus-android/commit/c20a52c95914e74cf5f46d1cec9428b896721cc2
takeScreenshotOfTips
browserMenuItemsTest
shareTabTest
openPageInExternalAppTest
testVisitingMultipleSites
skipFirstRunOnboardingTest
firstRunOnboardingTest
trashButtonTest
deleteHistoryOnRestartTest
noNameShortcutTest
addPageToHomeScreenTest
### Stacktrace:
`No views in hierarchy found matching: with id: org.mozilla.focus.debug:id/menuView`
or
`androidx.test.uiautomator.UiObjectNotFoundException: UiSelector[RESOURCE_ID=org.mozilla.focus.debug:id/urlView]
`
see the Firebase logs
### Build: Main Debug 7/8
|
2.0
|
Intermittent UI test failures - toolbar/menu related - ### Firebase Test Run:
11 failures in UI tests: https://console.firebase.google.com/project/moz-fx-mobile-firebase-testlab/testlab/histories/bh.2b4ca2c7fdb2f0f0/matrices/6557861697145683071
Due to this commit: https://github.com/mozilla-mobile/focus-android/commit/c20a52c95914e74cf5f46d1cec9428b896721cc2
takeScreenshotOfTips
browserMenuItemsTest
shareTabTest
openPageInExternalAppTest
testVisitingMultipleSites
skipFirstRunOnboardingTest
firstRunOnboardingTest
trashButtonTest
deleteHistoryOnRestartTest
noNameShortcutTest
addPageToHomeScreenTest
### Stacktrace:
`No views in hierarchy found matching: with id: org.mozilla.focus.debug:id/menuView`
or
`androidx.test.uiautomator.UiObjectNotFoundException: UiSelector[RESOURCE_ID=org.mozilla.focus.debug:id/urlView]
`
see the Firebase logs
### Build: Main Debug 7/8
|
test
|
intermittent ui test failures toolbar menu related firebase test run failures in ui tests due to this commit takescreenshotoftips browsermenuitemstest sharetabtest openpageinexternalapptest testvisitingmultiplesites skipfirstrunonboardingtest firstrunonboardingtest trashbuttontest deletehistoryonrestarttest nonameshortcuttest addpagetohomescreentest stacktrace no views in hierarchy found matching with id org mozilla focus debug id menuview or androidx test uiautomator uiobjectnotfoundexception uiselector see the firebase logs build main debug
| 1
|
779,625
| 27,360,496,670
|
IssuesEvent
|
2023-02-27 15:33:39
|
VeriFIT/mata
|
https://api.github.com/repos/VeriFIT/mata
|
closed
|
Remove old noodlification from `Mata::Strings`
|
For:library Module:nfa Type:required Priority:low
|
As we have implemented and stabilized a newer version of noodlification, we have agreed on removing the old implementation of noodlification.
|
1.0
|
Remove old noodlification from `Mata::Strings` - As we have implemented and stabilized a newer version of noodlification, we have agreed on removing the old implementation of noodlification.
|
non_test
|
remove old noodlification from mata strings as we have implemented and stabilized a newer version of noodlification we have agreed on removing the old implementation of noodlification
| 0
|
174,358
| 13,484,616,034
|
IssuesEvent
|
2020-09-11 06:42:26
|
SenseNet/sn-client
|
https://api.github.com/repos/SenseNet/sn-client
|
opened
|
🧪 [E2E test] Document viewer thumbnails
|
hacktoberfest test
|
# 🧪E2E test cases
The scope of these tests is to ensure that the Document (preview) viewer thumbnails related features work as it is intended.

# Test case 1
## 😎 Role
All test should run as admin.
## 🧫 Purpose of the test
Clicking the thumbnail icon in the toolbar switches the toolbar section on and off
## 🐾 Steps
1. Login with admin role
2. Click on 'Content' menuitem
3. Click on IT workspace in the tree
4. Click on Document library in the tree
5. Click on Chicago in the tree
6. Double click BusinessPlan.docx in the grid
7. Click on the Toggle thumbnails icon
**Expected result:**
Thumbnail section is opened and displayed
8. Click on the Toggle thumbnails icon again
**Expected result:**
Thumbnail section is hidden
# Test case 2
## 🧫 Purpose of the test
Clicking on an item in the thumbnail list makes the chosen page the selected one
## 🐾 Steps
1. Login with admin role
2. Click on 'Content' menuitem
3. Click on IT workspace in the tree
4. Click on Document library in the tree
5. Click on Chicago in the tree
6. Double click BusinessPlan.docx in the grid
7. Click on the Toggle thumbnails icon
**Expected result:**
Thumbnail section is opened and displayed
8. Click on the second thumbnail in the list
**Expected result:**
Second page is the selected, page 2 is displayed as current page in the toolbar and the main scrolling area is scrolled to page 2.
# Test case 3
## 🧫 Purpose of the test
Scrolling in the thumbnail area works as it is intended
## 🐾 Steps
1. Login with admin role
2. Click on 'Content' menuitem
3. Click on IT workspace in the tree
4. Click on Document library in the tree
5. Click on Chicago in the tree
6. Double click BusinessPlan.docx in the grid
7. Click on the Toggle thumbnails icon
**Expected result:**
Thumbnail section is opened and displayed
8. Scroll down in the thumbnail section with 1000 pixels
**Expected result:**
Thumbnail number 6 should be the top one in the thumbnail section.
|
1.0
|
🧪 [E2E test] Document viewer thumbnails - # 🧪E2E test cases
The scope of these tests is to ensure that the Document (preview) viewer thumbnails related features work as it is intended.

# Test case 1
## 😎 Role
All test should run as admin.
## 🧫 Purpose of the test
Clicking the thumbnail icon in the toolbar switches the toolbar section on and off
## 🐾 Steps
1. Login with admin role
2. Click on 'Content' menuitem
3. Click on IT workspace in the tree
4. Click on Document library in the tree
5. Click on Chicago in the tree
6. Double click BusinessPlan.docx in the grid
7. Click on the Toggle thumbnails icon
**Expected result:**
Thumbnail section is opened and displayed
8. Click on the Toggle thumbnails icon again
**Expected result:**
Thumbnail section is hidden
# Test case 2
## 🧫 Purpose of the test
Clicking on an item in the thumbnail list makes the chosen page the selected one
## 🐾 Steps
1. Login with admin role
2. Click on 'Content' menuitem
3. Click on IT workspace in the tree
4. Click on Document library in the tree
5. Click on Chicago in the tree
6. Double click BusinessPlan.docx in the grid
7. Click on the Toggle thumbnails icon
**Expected result:**
Thumbnail section is opened and displayed
8. Click on the second thumbnail in the list
**Expected result:**
Second page is the selected, page 2 is displayed as current page in the toolbar and the main scrolling area is scrolled to page 2.
# Test case 3
## 🧫 Purpose of the test
Scrolling in the thumbnail area works as it is intended
## 🐾 Steps
1. Login with admin role
2. Click on 'Content' menuitem
3. Click on IT workspace in the tree
4. Click on Document library in the tree
5. Click on Chicago in the tree
6. Double click BusinessPlan.docx in the grid
7. Click on the Toggle thumbnails icon
**Expected result:**
Thumbnail section is opened and displayed
8. Scroll down in the thumbnail section with 1000 pixels
**Expected result:**
Thumbnail number 6 should be the top one in the thumbnail section.
|
test
|
🧪 document viewer thumbnails 🧪 test cases the scope of these tests is to ensure that the document preview viewer thumbnails related features work as it is intended test case 😎 role all test should run as admin 🧫 purpose of the test clicking the thumbnail icon in the toolbar switches the toolbar section on and off 🐾 steps login with admin role click on content menuitem click on it workspace in the tree click on document library in the tree click on chicago in the tree double click businessplan docx in the grid click on the toggle thumbnails icon expected result thumbnail section is opened and displayed click on the toggle thumbnails icon again expected result thumbnail section is hidden test case 🧫 purpose of the test clickin on an item in the thumbnail list makes the chosen page the selected one 🐾 steps login with admin role click on content menuitem click on it workspace in the tree click on document library in the tree click on chicago in the tree double click businessplan docx in the grid click on the toggle thumbnails icon expected result thumbnail section is opened and displayed click on the second thumbnail in the list expected result second page is the selected page is displayed as current page in the toolbar and the main scrolling area is scrolled to page test case 🧫 purpose of the test scrolling in the thumbnail are works as it is intended 🐾 steps login with admin role click on content menuitem click on it workspace in the tree click on document library in the tree click on chicago in the tree double click businessplan docx in the grid click on the toggle thumbnails icon expected result thumbnail section is opened and displayed scroll down in the thumbnail section with pixels expected result thumbnail number should be the top one in the thumbnail section
| 1
|
38,681
| 10,228,565,001
|
IssuesEvent
|
2019-08-17 03:41:33
|
spack/spack
|
https://api.github.com/repos/spack/spack
|
closed
|
Installation issue: faidx
|
build-error
|
`faidx` has multiple runtime errors. PR incoming.
### Steps to reproduce the issue
```console
$ spack install py-pyfaidx
$ spack load -r py-pyfaidx
$ faidx home/gjv11003/genomes/hg38_dm6_cat/hg38_dm6_cat.fa -i chromsizes > chr_sizes_hg38_dm6.txt
Traceback (most recent call last):
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-pyfaidx-0.5.5.2-t7rhiwg7gulcfuhv3ejj7kz427dpatmt/bin/faidx", line 6, in <module>
from pkg_resources import load_entry_point
ModuleNotFoundError: No module named 'pkg_resources'
```
If I correct for that error by adding the `'run'` dependency of `py-setuptools` I see that the package is missing the `six` dependency required by upstream's setup.py:
https://github.com/mdshw5/pyfaidx/blob/master/setup.py#L5
```python-traceback
Traceback (most recent call last):
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-pyfaidx-0.5.5.2-bsu2a67cknr7imh6xkj3yhqgw3lfkudg/bin/faidx", line 6, in <module>
from pkg_resources import load_entry_point
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-setuptools-41.0.1-6ll4vdwgfhpd5l7fhvu7wsct2mxvi5tz/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3241, in <mo
dule>
@_call_aside
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-setuptools-41.0.1-6ll4vdwgfhpd5l7fhvu7wsct2mxvi5tz/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3225, in _ca
ll_aside
f(*args, **kwargs)
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-setuptools-41.0.1-6ll4vdwgfhpd5l7fhvu7wsct2mxvi5tz/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3254, in _in
itialize_master_working_set
working_set = WorkingSet._build_master()
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-setuptools-41.0.1-6ll4vdwgfhpd5l7fhvu7wsct2mxvi5tz/lib/python2.7/site-packages/pkg_resources/__init__.py", line 583, in _bui
ld_master
ws.require(__requires__)
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-setuptools-41.0.1-6ll4vdwgfhpd5l7fhvu7wsct2mxvi5tz/lib/python2.7/site-packages/pkg_resources/__init__.py", line 900, in requ
ire
needed = self.resolve(parse_requirements(requirements))
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-setuptools-41.0.1-6ll4vdwgfhpd5l7fhvu7wsct2mxvi5tz/lib/python2.7/site-packages/pkg_resources/__init__.py", line 786, in reso
lve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'six' distribution was not found and is required by pyfaidx
```
### Platform and user environment
Please report your OS here:
```commandline
$ uname -a
Linux corelab2 4.15.0-55-generic #60-Ubuntu SMP Tue Jul 2 18:22:20 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -d
Description: Ubuntu 18.04.3 LTS
```
|
1.0
|
Installation issue: faidx - `faidx` has multiple runtime errors. PR incoming.
### Steps to reproduce the issue
```console
$ spack install py-pyfaidx
$ spack load -r py-pyfaidx
$ faidx home/gjv11003/genomes/hg38_dm6_cat/hg38_dm6_cat.fa -i chromsizes > chr_sizes_hg38_dm6.txt
Traceback (most recent call last):
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-pyfaidx-0.5.5.2-t7rhiwg7gulcfuhv3ejj7kz427dpatmt/bin/faidx", line 6, in <module>
from pkg_resources import load_entry_point
ModuleNotFoundError: No module named 'pkg_resources'
```
If I correct for that error by adding the `'run'` dependency of `py-setuptools` I see that the package is missing the `six` dependency required by upstream's setup.py:
https://github.com/mdshw5/pyfaidx/blob/master/setup.py#L5
```python-traceback
Traceback (most recent call last):
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-pyfaidx-0.5.5.2-bsu2a67cknr7imh6xkj3yhqgw3lfkudg/bin/faidx", line 6, in <module>
from pkg_resources import load_entry_point
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-setuptools-41.0.1-6ll4vdwgfhpd5l7fhvu7wsct2mxvi5tz/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3241, in <mo
dule>
@_call_aside
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-setuptools-41.0.1-6ll4vdwgfhpd5l7fhvu7wsct2mxvi5tz/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3225, in _ca
ll_aside
f(*args, **kwargs)
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-setuptools-41.0.1-6ll4vdwgfhpd5l7fhvu7wsct2mxvi5tz/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3254, in _in
itialize_master_working_set
working_set = WorkingSet._build_master()
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-setuptools-41.0.1-6ll4vdwgfhpd5l7fhvu7wsct2mxvi5tz/lib/python2.7/site-packages/pkg_resources/__init__.py", line 583, in _bui
ld_master
ws.require(__requires__)
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-setuptools-41.0.1-6ll4vdwgfhpd5l7fhvu7wsct2mxvi5tz/lib/python2.7/site-packages/pkg_resources/__init__.py", line 900, in requ
ire
needed = self.resolve(parse_requirements(requirements))
File "/opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.4.0/py-setuptools-41.0.1-6ll4vdwgfhpd5l7fhvu7wsct2mxvi5tz/lib/python2.7/site-packages/pkg_resources/__init__.py", line 786, in reso
lve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'six' distribution was not found and is required by pyfaidx
```
### Platform and user environment
Please report your OS here:
```commandline
$ uname -a
Linux corelab2 4.15.0-55-generic #60-Ubuntu SMP Tue Jul 2 18:22:20 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -d
Description: Ubuntu 18.04.3 LTS
```
|
non_test
|
installation issue faidx faidx has multiple runtime errors pr incoming steps to reproduce the issue console spack install py pyfaidx spack load r py pyfaidx faidx home genomes cat cat fa i chromsizes chr sizes txt traceback most recent call last file opt spack opt spack linux gcc py pyfaidx bin faidx line in from pkg resources import load entry point modulenotfounderror no module named pkg resources if i correct for that error by adding the run dependency of py setuptools i see that the package is missing the six dependency required by upstream s setup py python traceback traceback most recent call last file opt spack opt spack linux gcc py pyfaidx bin faidx line in from pkg resources import load entry point file opt spack opt spack linux gcc py setuptools lib site packages pkg resources init py line in mo dule call aside file opt spack opt spack linux gcc py setuptools lib site packages pkg resources init py line in ca ll aside f args kwargs file opt spack opt spack linux gcc py setuptools lib site packages pkg resources init py line in in itialize master working set working set workingset build master file opt spack opt spack linux gcc py setuptools lib site packages pkg resources init py line in bui ld master ws require requires file opt spack opt spack linux gcc py setuptools lib site packages pkg resources init py line in requ ire needed self resolve parse requirements requirements file opt spack opt spack linux gcc py setuptools lib site packages pkg resources init py line in reso lve raise distributionnotfound req requirers pkg resources distributionnotfound the six distribution was not found and is required by pyfaidx platform and user environment please report your os here commandline uname a linux generic ubuntu smp tue jul utc gnu linux lsb release d description ubuntu lts
| 0
|
329,792
| 28,308,763,106
|
IssuesEvent
|
2023-04-10 13:36:56
|
AY2223S2-CS2103T-T17-2/tp
|
https://api.github.com/repos/AY2223S2-CS2103T-T17-2/tp
|
closed
|
[PE-D][Tester B] Can have more helpful error messages
|
needClarify type.TesterBug
|

Perhaps more helpful error messages can be given when a command fails.
In this case John Doe does not exist in SudoHR, but it does not let the user add John Doe.
The user may have forgotten that someone with eid 7 already exists, and would not know why they couldn't add this person.
<!--session: 1680242670162-d22099f1-f38b-467a-bf12-878b18324cd9-->
<!--Version: Web v3.4.7-->
-------------
Labels: `type.FeatureFlaw` `severity.Low`
original: bokung/ped#8
|
1.0
|
[PE-D][Tester B] Can have more helpful error messages - 
Perhaps more helpful error messages can be given when a command fails.
In this case John Doe does not exist in SudoHR, but it does not let the user add John Doe.
The user may have forgotten that someone with eid 7 already exists, and would not know why they couldn't add this person.
<!--session: 1680242670162-d22099f1-f38b-467a-bf12-878b18324cd9-->
<!--Version: Web v3.4.7-->
-------------
Labels: `type.FeatureFlaw` `severity.Low`
original: bokung/ped#8
|
test
|
can have more helpful error messages perhaps more helpful error messages can be given when a command fails in this case john doe does not exist in sudohr but it does not let the user add john doe the user may have forgotten that someone with eid already exists and would not know why they couldn t add this person labels type featureflaw severity low original bokung ped
| 1
|
311,689
| 26,805,485,223
|
IssuesEvent
|
2023-02-01 17:57:43
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
closed
|
[DocDB] flaky test: TwoDCTestParams/TwoDCTestWithEnableIntentsReplication.TransactionStatusTableWithWrites/0
|
kind/bug kind/failing-test area/docdb priority/high
|
Jira Link: [DB-3641](https://yugabyte.atlassian.net/browse/DB-3641)
### Description
https://detective-gcp.dev.yugabyte.com/stability/test?branch=master&build_type=all&class=TwoDCTestParams%2FTwoDCTestWithEnableIntentsReplication&fail_tag=all&name=TransactionStatusTableWithWrites%2F0&platform=linux
Seems flaky from when introduced.
|
1.0
|
[DocDB] flaky test: TwoDCTestParams/TwoDCTestWithEnableIntentsReplication.TransactionStatusTableWithWrites/0 - Jira Link: [DB-3641](https://yugabyte.atlassian.net/browse/DB-3641)
### Description
https://detective-gcp.dev.yugabyte.com/stability/test?branch=master&build_type=all&class=TwoDCTestParams%2FTwoDCTestWithEnableIntentsReplication&fail_tag=all&name=TransactionStatusTableWithWrites%2F0&platform=linux
Seems flaky from when introduced.
|
test
|
flaky test twodctestparams twodctestwithenableintentsreplication transactionstatustablewithwrites jira link description seems flaky from when introduced
| 1
|